neuralop.models.UNO

class neuralop.models.UNO(in_channels, out_channels, hidden_channels, lifting_channels=256, projection_channels=256, positional_embedding='grid', n_layers=4, uno_out_channels=None, uno_n_modes=None, uno_scalings=None, horizontal_skips_map=None, channel_mlp_dropout=0, channel_mlp_expansion=0.5, non_linearity=<built-in function gelu>, norm=None, preactivation=False, fno_skip='linear', horizontal_skip='linear', channel_mlp_skip='soft-gating', separable=False, factorization=None, rank=1.0, fixed_rank_modes=False, integral_operator=<class 'neuralop.layers.spectral_convolution.SpectralConv'>, operator_block=<class 'neuralop.layers.fno_block.FNOBlocks'>, implementation='factorized', decomposition_kwargs={}, domain_padding=None, verbose=False)[source]

U-Shaped Neural Operator

The architecture is described in [1].

Parameters:
in_channels : int

Number of input channels. Determined by the problem.

out_channels : int

Number of output channels. Determined by the problem.

hidden_channels : int

Initial width of the UNO. This significantly affects the number of parameters of the UNO. A good starting point is 64; increase it if more expressivity is needed. Update lifting_channels and projection_channels accordingly, since they are proportional to hidden_channels.

uno_out_channels : list

Number of output channels of each Fourier layer. Example: for a five-layer UNO, uno_out_channels can be [32, 64, 64, 64, 32].

uno_n_modes : list

Number of Fourier modes to use in the integral operation of each Fourier layer (along each dimension). Example: for a five-layer UNO with 2D input, uno_n_modes can be [[5, 5], [5, 5], [5, 5], [5, 5], [5, 5]].

uno_scalings : list

Scaling factors for each Fourier layer. Example: for a five-layer UNO with 2D input, uno_scalings can be [[1.0, 1.0], [0.5, 0.5], [1, 1], [1, 1], [2, 2]].
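
Each layer's scaling factor is applied to the output resolution of the previous layer, so the factors compound across the network. A minimal pure-Python sketch (independent of neuralop; the 64×64 starting resolution is an illustrative assumption) of how the spatial resolution evolves through the five-layer example configuration:

```python
# Sketch: how per-layer uno_scalings compound through a five-layer UNO.
# The starting resolution (64 x 64) is an illustrative assumption.
uno_scalings = [[1.0, 1.0], [0.5, 0.5], [1, 1], [1, 1], [2, 2]]

resolution = [64, 64]  # assumed input discretization
resolutions = []
for scaling in uno_scalings:
    # each layer rescales the resolution produced by the previous layer
    resolution = [int(r * s) for r, s in zip(resolution, scaling)]
    resolutions.append(resolution)

print(resolutions)
# the grid is downsampled in the middle and restored by the final layer,
# giving the architecture its U shape
```

Because the factors compound, the product of the scalings along each dimension determines the output resolution relative to the input (here 1.0 · 0.5 · 1 · 1 · 2 = 1, so input and output resolutions match).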

n_layers : int, optional

Number of Fourier layers. Default: 4

lifting_channels : int, optional

Number of hidden channels of the lifting block of the FNO. Default: 256

projection_channels : int, optional

Number of hidden channels of the projection block of the FNO. Default: 256

positional_embedding : Union[str, GridEmbedding2D, GridEmbeddingND, None], optional

Positional embedding to apply to the last channels of the raw input before it is passed through the UNO. Options:

  • “grid”: Appends a grid positional embedding with default settings to the last channels of the raw input. Assumes the inputs are discretized over a grid with entry [0, 0, …] at the origin and side lengths of 1.

  • GridEmbedding2D: Uses this module directly for 2D cases.

  • GridEmbeddingND: Uses this module directly (see neuralop.embeddings.GridEmbeddingND for details).

  • None: Does nothing.

Default: “grid”
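
The “grid” option amounts to appending normalized coordinate channels to the input. A rough pure-Python sketch of the coordinate convention stated above (entry [0, 0] at the origin, side lengths of 1) for a small 2D grid; the real GridEmbedding2D is implemented in torch, so this only illustrates the values being appended:

```python
def grid_coords(n):
    # n evenly spaced coordinates over [0, 1], first entry at the origin
    return [i / (n - 1) for i in range(n)]

# illustrative 3 x 3 grid; a grid embedding appends one channel per dimension
h, w = 3, 3
x_channel = [[grid_coords(h)[i]] * w for i in range(h)]  # varies along dim 0
y_channel = [grid_coords(w) for _ in range(h)]           # varies along dim 1
```

These two channels give each grid point an explicit position, which the Fourier layers can use alongside the raw input channels.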

horizontal_skips_map : Dict, optional

A dictionary {b: a, …} denoting a horizontal skip connection from the a-th layer to the b-th layer. If None, a default skip-connection map is applied. Example: for a five-layer UNO, the skip connections can be horizontal_skips_map = {4: 0, 3: 1}. Default: None
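
The documented example ({4: 0, 3: 1} for five layers) is consistent with a U-Net-style pairing of each early layer with its mirror in the second half of the network. A hedged sketch of that pairing rule (the exact default map constructed inside neuralop may differ):

```python
def default_skips_map(n_layers: int) -> dict:
    """Pair layer i with its mirror layer n_layers - 1 - i, U-Net style.

    This reproduces the documented example for n_layers = 5; it is an
    assumption about the library's default, not taken from its source.
    """
    return {n_layers - 1 - i: i for i in range(n_layers // 2)}

print(default_skips_map(5))  # {4: 0, 3: 1}
```

Note that the keys are destination layers and the values are source layers, so each destination layer can receive at most one horizontal skip.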

channel_mlp_dropout : float, optional

Dropout parameter for ChannelMLP after each FNO block. Default: 0

channel_mlp_expansion : float, optional

Expansion parameter for ChannelMLP after each FNO block. Default: 0.5

non_linearity : nn.Module, optional

Non-linearity module to use. Default: F.gelu

norm : str, optional

Normalization layer to use. Options: “ada_in”, “group_norm”, “instance_norm”, None. Default: None

preactivation : bool, optional

Whether to use ResNet-style preactivation. Default: False

fno_skip : str, optional

Type of skip connection to use in FNO layers. Options: “linear”, “identity”, “soft-gating”, None. Default: “linear”

horizontal_skip : str, optional

Type of skip connection to use in horizontal connections. Options: “linear”, “identity”, “soft-gating”, None. Default: “linear”

channel_mlp_skip : str, optional

Type of skip connection to use in channel-mixing MLP. Options: “linear”, “identity”, “soft-gating”, None. Default: “soft-gating”

separable : bool, optional

Whether to use a separable spectral convolution. Default: False

factorization : str, optional

Tensor factorization to apply to the parameter weights. Options: “None”, “Tucker”, “CP”, “TT”, or other factorization methods supported by tltorch. Default: None

rank : float, optional

Rank of the tensor factorization of the Fourier weights. Set to a float < 1.0 when using a TFNO (i.e., when factorization is not None); a TFNO with rank 0.1 has roughly 10% of the parameters of a dense FNO. Default: 1.0

fixed_rank_modes : bool, optional

Whether to leave certain modes unfactorized. Default: False

implementation : str, optional

If factorization is not None, forward mode to use. Options: “reconstructed”, “factorized”. Default: “factorized”

decomposition_kwargs : dict, optional

Additional parameters to pass to the tensor decomposition. Default: {}

domain_padding : Union[float, List[float], None], optional

If not None, percentage of padding to use. Default: None

fft_norm : str, optional

FFT normalization mode. Default: “forward”
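
The uno_out_channels, uno_n_modes, and uno_scalings lists each describe one entry per Fourier layer, so their lengths should all equal n_layers. A small self-contained sanity check for a candidate configuration (the parameter names follow this page; the check itself is an illustrative helper, not part of the library):

```python
def check_uno_config(n_layers, uno_out_channels, uno_n_modes, uno_scalings):
    """Validate that the per-layer lists are mutually consistent."""
    for name, lst in [
        ("uno_out_channels", uno_out_channels),
        ("uno_n_modes", uno_n_modes),
        ("uno_scalings", uno_scalings),
    ]:
        if len(lst) != n_layers:
            raise ValueError(f"{name} has {len(lst)} entries, expected {n_layers}")
    return True

# The five-layer example configuration from this page:
assert check_uno_config(
    n_layers=5,
    uno_out_channels=[32, 64, 64, 64, 32],
    uno_n_modes=[[5, 5]] * 5,
    uno_scalings=[[1.0, 1.0], [0.5, 0.5], [1, 1], [1, 1], [2, 2]],
)
```

Running such a check before constructing the model turns a shape mismatch deep inside the network into an immediate, readable error.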

Methods

forward(x, **kwargs)

Define the computation performed at every call.

References

[1] Rahman, M.A., Ross, Z., Azizzadenesheli, K. “U-NO: U-Shaped Neural Operators” (2022). TMLR 2022, https://arxiv.org/pdf/2204.11127.

forward(x, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.