neuralop.models.LocalNO

class neuralop.models.LocalNO(*args, **kwargs)[source]

N-dimensional Local Fourier Neural Operator. The LocalNO shares its forward pass and architecture with the standard FNO; the key difference is that its Fourier convolution layers are replaced with LocalNOBlocks, which place differential kernel layers and local integral layers in parallel with the Fourier layers, as detailed in [1].

Parameters:
n_modes : Tuple[int]

Number of modes to keep in Fourier Layer, along each dimension. The dimensionality of the Local NO is inferred from len(n_modes). No default value (required parameter).

in_channels : int

Number of channels in input function. Determined by the problem.

out_channels : int

Number of channels in output function. Determined by the problem.

hidden_channels : int

Width of the Local NO (i.e. number of channels). This significantly affects the parameter count of the LocalNO. A good starting point is 64, increased if more expressivity is needed. Update lifting_channel_ratio and projection_channel_ratio accordingly, since both are proportional to hidden_channels.

default_in_shape : Tuple[int]

Default input shape on spatiotemporal dimensions for structured DISCO convolutions. No default value (required parameter).

n_layers : int, optional

Number of Local NO block layers. Default: 4

disco_layers : Union[bool, List[bool]], optional

Whether to include a local integral kernel parallel connection at each layer. If a single bool, the setting is shared across all layers; if a list, it must have length n_layers. Default: True
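The bool-or-list convention used by disco_layers (and by diff_layers below) can be sketched as a small normalization helper. The helper name is hypothetical and not part of the neuralop API:

```python
from typing import List, Union

def expand_layer_flag(flag: Union[bool, List[bool]], n_layers: int) -> List[bool]:
    # Hypothetical helper: a single bool is shared across all layers,
    # while a list must supply exactly one flag per layer.
    if isinstance(flag, bool):
        return [flag] * n_layers
    if len(flag) != n_layers:
        raise ValueError(f"expected {n_layers} flags, got {len(flag)}")
    return list(flag)

print(expand_layer_flag(True, 4))                       # [True, True, True, True]
print(expand_layer_flag([True, False, True, False], 4))  # [True, False, True, False]
```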

disco_kernel_shape : Union[int, List[int]], optional

Kernel shape for local integral. Expects either a single integer for isotropic kernels or two integers for anisotropic kernels. Default: [2, 4]

domain_length : List[int], optional

Extent/length of the physical domain. Assumes square domain [-1, 1]^2 by default. Default: [2, 2]

disco_groups : int, optional

Number of groups in the local integral convolution. Default: 1

disco_bias : bool, optional

Whether to use a bias for the integral kernel. Default: True

radius_cutoff : float, optional

Cutoff radius (with respect to domain_length) for the local integral kernel. Default: None

diff_layers : Union[bool, List[bool]], optional

Whether to include a differential kernel parallel connection at each layer. If a single bool, the setting is shared across all layers; if a list, it must have length n_layers. Default: True

conv_padding_mode : str, optional

Padding mode for spatial convolution kernels. Options: “periodic”, “circular”, “replicate”, “reflect”, “zeros”. Default: “periodic”

fin_diff_kernel_size : int, optional

Conv kernel size for finite difference convolution. Default: 3

mix_derivatives : bool, optional

Whether to mix derivatives across channels. Default: True

lifting_channel_ratio : Number, optional

Ratio of lifting channels to hidden_channels. The number of lifting channels in the lifting block of the Local NO is lifting_channel_ratio * hidden_channels (e.g. default 2 * hidden_channels). Default: 2

projection_channel_ratio : Number, optional

Ratio of projection channels to hidden_channels. The number of projection channels in the projection block of the Local NO is projection_channel_ratio * hidden_channels (e.g. default 2 * hidden_channels). Default: 2
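Since both ratios multiply hidden_channels, the resulting channel counts are simple products; a quick sanity check with the defaults:

```python
hidden_channels = 64          # width of the Local NO
lifting_channel_ratio = 2     # default
projection_channel_ratio = 2  # default

# Channel counts in the lifting and projection blocks
lifting_channels = lifting_channel_ratio * hidden_channels
projection_channels = projection_channel_ratio * hidden_channels
print(lifting_channels, projection_channels)  # 128 128
```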

positional_embedding : Union[str, nn.Module], optional

Positional embedding to apply to the last channels of the raw input before it is passed through the Local NO.

Options:

  • “grid”: Appends a grid positional embedding with default settings to the last channels of the raw input. Assumes the inputs are discretized over a grid with entry [0, 0, …] at the origin and side lengths of 1.

  • GridEmbedding2D: Uses this module directly for 2D cases.

  • GridEmbeddingND: Uses this module directly (see neuralop.embeddings.GridEmbeddingND for details).

  • None: Does nothing.

Default: “grid”

non_linearity : nn.Module, optional

Non-linear activation function module to use. Default: F.gelu

norm : str, optional

Normalization layer to use. Options: “ada_in”, “group_norm”, “instance_norm”, None. Default: None

complex_data : bool, optional

Whether data is complex-valued. If True, initializes complex-valued modules. Default: False

use_channel_mlp : bool, optional

Whether to use an MLP layer after each LocalNO block. Default: False

channel_mlp_dropout : float, optional

Dropout parameter for ChannelMLP in LocalNO Block. Default: 0

channel_mlp_expansion : float, optional

Expansion parameter for ChannelMLP in LocalNO Block. Default: 0.5

channel_mlp_skip : str, optional

Type of skip connection to use in channel-mixing MLP. Options: “linear”, “identity”, “soft-gating”, None. Default: “soft-gating”

local_no_skip : str, optional

Type of skip connection to use in LocalNO layers. Options: “linear”, “identity”, “soft-gating”, None. Default: “linear”

resolution_scaling_factor : Union[Number, List[Number]], optional

Layer-wise factor by which to scale the domain resolution of the function.

Options:

  • None: No scaling

  • Single number n: Scales resolution by n at each layer

  • List of numbers [n_0, n_1, …]: Scales layer i’s resolution by n_i

Default: None
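As a sketch of how per-layer scaling composes (the helper below is illustrative, not the neuralop implementation), the factors apply cumulatively to the spatial shape:

```python
def scaled_resolutions(in_shape, factors, n_layers):
    # Illustrative sketch: normalize resolution_scaling_factor (None,
    # single number, or per-layer list) and apply it cumulatively.
    if factors is None:
        factors = [1] * n_layers
    elif not isinstance(factors, (list, tuple)):
        factors = [factors] * n_layers
    shapes = []
    shape = list(in_shape)
    for f in factors:
        shape = [int(round(s * f)) for s in shape]
        shapes.append(tuple(shape))
    return shapes

# Upsample by 2 at each of the first 2 layers, keep resolution after that:
print(scaled_resolutions((16, 16), [2, 2, 1, 1], 4))
# [(32, 32), (64, 64), (64, 64), (64, 64)]
```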

domain_padding : Union[Number, List[Number]], optional

Percentage of padding to apply to the domain. To vary the padding percentage along each input dimension, pass a list of percentages, e.g. [p1, p2, …, pN] such that p1 is the padding percentage along dim 1, etc. Default: None
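The percentage convention can be sketched with simple shape arithmetic. This is an illustration only; the actual padding module may distribute the padding differently (e.g. symmetrically on both sides):

```python
def padded_shape(shape, domain_padding):
    # Illustrative sketch: grow each spatial dimension by its padding
    # percentage (single number shared, or one percentage per dimension).
    if domain_padding is None:
        return tuple(shape)
    if not isinstance(domain_padding, (list, tuple)):
        domain_padding = [domain_padding] * len(shape)
    return tuple(s + int(round(p * s)) for s, p in zip(shape, domain_padding))

print(padded_shape((64, 64), 0.25))            # (80, 80)
print(padded_shape((64, 128), [0.25, 0.125]))  # (80, 144)
```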

local_no_block_precision : str, optional

Precision mode in which to perform spectral convolution. Options: “full”, “half”, “mixed”. Default: “full”

stabilizer : str, optional

Stabilizer to use in the LocalNO block. Options: “tanh”, None. Note: the stabilizer greatly improves performance when local_no_block_precision=“mixed”. Default: None

max_n_modes : Tuple[int], optional

Maximum number of modes to keep in the Fourier domain during training. If None, all n_modes are used. If a tuple of integers, the number of active modes can be incrementally increased during training; this can be updated dynamically. Default: None
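One common pattern this enables is a mode-growth schedule: start training with few Fourier modes and grow toward the maximum. The scheduling function below is a hypothetical sketch, not part of neuralop:

```python
def mode_schedule(max_n_modes, n_epochs):
    # Hypothetical sketch: linearly grow the number of active Fourier
    # modes toward max_n_modes over the course of training.
    schedule = []
    for epoch in range(1, n_epochs + 1):
        frac = epoch / n_epochs
        schedule.append(tuple(max(1, int(m * frac)) for m in max_n_modes))
    return schedule

# Grow from a small mode count to the full (12, 12) over 4 epochs:
print(mode_schedule((12, 12), 4))
# [(3, 3), (6, 6), (9, 9), (12, 12)]
```

At each epoch, the chosen tuple would be assigned to the model's dynamically updatable mode count before the training step.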

factorization : str, optional

Tensor factorization of the Local NO layer weights to use. Options: None, “Tucker”, “CP”, “TT”, or other factorization methods supported by tltorch. Default: None

rank : float, optional

Tensor rank to use in the factorization. Set to a float < 1.0 when factorization is not None (i.e. when using a TFNO); a TFNO with rank 0.1 has roughly 10% of the parameters of a dense FNO. Default: 1.0

fixed_rank_modes : bool, optional

Whether to leave certain modes unfactorized. Default: False

implementation : str, optional

Implementation method for factorized tensors. Options: “factorized”, “reconstructed”. Default: “factorized”

decomposition_kwargs : dict, optional

Extra kwargs for tensor decomposition (see tltorch.FactorizedTensor). Default: {}

separable : bool, optional

Whether to use a separable spectral convolution. Default: False

preactivation : bool, optional

Whether to compute LocalNO forward pass with ResNet-style preactivation. Default: False

conv_module : nn.Module, optional

Module to use for LocalNOBlock’s convolutions. Default: SpectralConv

Attributes:
n_modes

Methods

forward(x[, output_shape])

FNO's forward pass

References

[1]

Liu-Schiaffini M., Berner J., Bonev B., Kurth T., Azizzadenesheli K., Anandkumar A.; “Neural Operators with Localized Integral and Differential Kernels” (2024). ICML 2024, https://arxiv.org/pdf/2402.16845.

Examples

>>> from neuralop.models import LocalNO
>>> model = LocalNO(n_modes=(12,12), in_channels=1, out_channels=1, hidden_channels=64, default_in_shape=(32,32))
>>> model
LocalNO(
(positional_embedding): GridEmbeddingND()
(local_no_blocks): LocalNOBlocks(
    (convs): SpectralConv(
    (weight): ModuleList(
        (0-3): 4 x DenseTensor(shape=torch.Size([64, 64, 12, 7]), rank=None)
    )
    )
        ... torch.nn.Module printout truncated ...
forward(x, output_shape=None, **kwargs)[source]

FNO’s forward pass

  1. Applies optional positional encoding

  2. Sends inputs through a lifting layer to a high-dimensional latent space

  3. Applies optional domain padding to high-dimensional intermediate function representation

  4. Applies n_layers Local NO layers in sequence (Differential + optional DISCO + skip connections, nonlinearity)

  5. If domain padding was applied, domain padding is removed

  6. Projection of intermediate function representation to the output channels

Parameters:
x : tensor

input tensor

output_shape : {tuple, tuple list, None}, default is None

Gives the option of specifying the exact output shape for odd-shaped inputs.

  • If None, don’t specify an output shape

  • If tuple, specifies the output-shape of the last FNO Block

  • If tuple list, specifies the exact output-shape of each FNO Block
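The three accepted forms of output_shape can be summarized as a per-block list; the normalization helper below is an illustrative sketch of that convention, not the neuralop implementation:

```python
def normalize_output_shape(output_shape, n_layers):
    # Illustrative sketch: None leaves every block unconstrained, a single
    # tuple constrains only the last block, and a list of tuples gives the
    # exact output shape of every block.
    if output_shape is None:
        return [None] * n_layers
    if isinstance(output_shape, tuple):
        return [None] * (n_layers - 1) + [output_shape]
    return list(output_shape)

print(normalize_output_shape(None, 4))      # [None, None, None, None]
print(normalize_output_shape((65, 65), 4))  # [None, None, None, (65, 65)]
```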