neuralop.models.UNO

class neuralop.models.UNO(in_channels, out_channels, hidden_channels, lifting_channels=256, projection_channels=256, n_layers=4, uno_out_channels=None, uno_n_modes=None, uno_scalings=None, horizontal_skips_map=None, incremental_n_modes=None, use_mlp=False, mlp_dropout=0, mlp_expansion=0.5, non_linearity=<built-in function gelu>, norm=None, preactivation=False, fno_skip='linear', horizontal_skip='linear', mlp_skip='soft-gating', separable=False, factorization=None, rank=1.0, joint_factorization=False, fixed_rank_modes=False, integral_operator=<class 'neuralop.layers.spectral_convolution.SpectralConv'>, operator_block=<class 'neuralop.layers.fno_block.FNOBlocks'>, implementation='factorized', decomposition_kwargs={}, domain_padding=None, domain_padding_mode='one-sided', fft_norm='forward', normalizer=None, verbose=False, **kwargs)[source]

U-Shaped Neural Operator [1]

Parameters:
in_channels : int, optional

Number of input channels, by default 3

out_channels : int, optional

Number of output channels, by default 1

hidden_channels : int

initial width of the UNO (i.e. number of channels)

lifting_channels : int, optional

number of hidden channels of the lifting block of the FNO, by default 256

projection_channels : int, optional

number of hidden channels of the projection block of the FNO, by default 256

n_layers : int, optional

Number of Fourier Layers, by default 4

uno_out_channels : list

Number of output channels of each Fourier layer. Example: for a five-layer UNO, uno_out_channels can be [32,64,64,64,32].

uno_n_modes : list

Number of Fourier modes to use in the integral operation of each Fourier layer (along each dimension). Example: for a five-layer UNO with 2D input, uno_n_modes can be [[5,5],[5,5],[5,5],[5,5],[5,5]].

uno_scalings : list

Scaling factors for each Fourier layer. Example: for a five-layer UNO with 2D input, uno_scalings can be [[1.0,1.0],[0.5,0.5],[1,1],[1,1],[2,2]].

horizontal_skips_map : dict, optional

A map {…, b: a, …} denoting a horizontal skip connection from the a-th layer to the b-th layer. If None, the default skip connections are applied. Example: for a five-layer UNO architecture, the skip connections can be horizontal_skips_map = {4: 0, 3: 1}. (A complete instantiation using these example values is shown in the Examples section below.)

incremental_n_modes : None or int tuple, default is None
  • If not None, this allows incrementally increasing the number of modes in the Fourier domain during training. Must satisfy n <= m for (n, m) in zip(incremental_n_modes, n_modes).

  • If None, all the n_modes are used.

This can be updated dynamically during training.

use_mlp : bool, optional

Whether to use an MLP layer after each FNO block, by default False

mlp : dict, optional

Parameters of the MLP ({‘expansion’: float, ‘dropout’: float}), by default None

non_linearity : nn.Module, optional

Non-Linearity module to use, by default F.gelu

norm : F.module, optional

Normalization layer to use, by default None

preactivation : bool, default is False

if True, use ResNet-style preactivation

skip : {‘linear’, ‘identity’, ‘soft-gating’}, optional

Type of skip connection to use, by default ‘soft-gating’

separable : bool, default is False

if True, use a depthwise separable spectral convolution

factorization : str or None, {‘tucker’, ‘cp’, ‘tt’}

Tensor factorization of the parameter weights to use, by default None.

  • If None, a dense tensor parametrizes the spectral convolutions.

  • Otherwise, the specified tensor factorization is used.

joint_factorization : bool, optional

Whether all the Fourier Layers should be parametrized by a single tensor (vs one per layer), by default False

rank : float or rank, optional

Rank of the tensor factorization of the Fourier weights, by default 1.0

fixed_rank_modes : bool, optional

Modes to not factorize, by default False

implementation : {‘factorized’, ‘reconstructed’}, optional, default is ‘factorized’

If factorization is not None, forward mode to use:

  • reconstructed: the full weight tensor is reconstructed from the factorization and used for the forward pass.

  • factorized: the input is directly contracted with the factors of the decomposition.

decomposition_kwargs : dict, optional, default is {}

Optional additional parameters to pass to the tensor decomposition

domain_padding : None or float, optional

If not None, percentage of padding to use, by default None

domain_padding_mode : {‘symmetric’, ‘one-sided’}, optional

How to perform domain padding, by default ‘one-sided’

fft_norm : str, optional

Normalization mode for the FFT, by default ‘forward’

References

[1] U-NO: U-shaped Neural Operators, Md Ashiqur Rahman, Zachary E Ross, Kamyar Azizzadenesheli, TMLR 2022.
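Examples

A minimal usage sketch. The layer-wise lists reuse the five-layer, 2D examples given in the parameters above; hidden_channels, the batch size, and the 64×64 input resolution are illustrative assumptions, not library defaults.

>>> import torch
>>> from neuralop.models import UNO
>>> model = UNO(
...     in_channels=3,
...     out_channels=1,
...     hidden_channels=64,
...     n_layers=5,
...     uno_out_channels=[32, 64, 64, 64, 32],
...     uno_n_modes=[[5, 5], [5, 5], [5, 5], [5, 5], [5, 5]],
...     uno_scalings=[[1.0, 1.0], [0.5, 0.5], [1, 1], [1, 1], [2, 2]],
...     horizontal_skips_map={4: 0, 3: 1},
... )
>>> x = torch.randn(4, 3, 64, 64)  # (batch, in_channels, height, width)
>>> y = model(x)                   # the scalings multiply to 1, so the output keeps the input resolution

A factorized parametrization can be requested through factorization and rank; the particular choices below (‘tucker’, rank=0.1) are illustrative, not recommendations:

>>> compressed = UNO(
...     in_channels=3, out_channels=1, hidden_channels=64, n_layers=5,
...     uno_out_channels=[32, 64, 64, 64, 32],
...     uno_n_modes=[[5, 5]] * 5,
...     uno_scalings=[[1.0, 1.0], [0.5, 0.5], [1, 1], [1, 1], [2, 2]],
...     factorization='tucker', rank=0.1,
... )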

Methods

forward(x, **kwargs)

Define the computation performed at every call.

forward(x, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
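A short illustration of this note, assuming the model and x from the Examples above:

>>> y = model(x)          # preferred: calling the module runs any registered hooks
>>> y = model.forward(x)  # works, but silently skips registered hooks; avoid calling directly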