neuralop.models.FNO

class neuralop.models.FNO(*args, **kwargs)[source]

N-Dimensional Fourier Neural Operator. The FNO learns a mapping between spaces of functions discretized over regular grids using Fourier convolutions, as described in [1].

The key component of an FNO is its SpectralConv layer (see neuralop.layers.spectral_convolution), which is similar to a standard CNN conv layer but operates in the frequency domain.

For a deeper dive into the FNO architecture, refer to Fourier Neural Operators.

Parameters:
n_modes : Tuple[int, …]

Number of modes to keep in the Fourier layer, along each dimension. The dimensionality of the FNO is inferred from len(n_modes). Each entry of n_modes must be large enough to capture the relevant frequencies, but no larger than max_resolution // 2 (the Nyquist frequency).
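Since a grid with resolution r along a dimension only resolves frequencies up to r // 2, a quick sanity check on n_modes can be written as follows (an illustrative helper, not part of neuralop):

```python
# Illustrative helper (not part of neuralop): check that each entry of
# n_modes respects the Nyquist limit of the training grid.
def check_n_modes(n_modes, resolution):
    """Return True if every mode count is at most resolution[d] // 2."""
    return all(m <= r // 2 for m, r in zip(n_modes, resolution))

print(check_n_modes((12, 12), (64, 64)))  # True:  12 <= 32 in both dims
print(check_n_modes((40, 12), (64, 64)))  # False: 40 > 64 // 2
```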

in_channels : int

Number of channels in input function. Determined by the problem.

out_channels : int

Number of channels in output function. Determined by the problem.

hidden_channels : int

Width of the FNO (i.e. number of channels). This significantly affects the number of parameters of the FNO. A good starting point is 64, which can be increased if more expressivity is needed. Update lifting_channel_ratio and projection_channel_ratio accordingly, since the lifting and projection widths scale with hidden_channels.

n_layers : int, optional

Number of Fourier Layers. Default: 4

Attributes:
n_modes

Methods

forward(x[, output_shape])

FNO's forward pass

Other Parameters:
lifting_channel_ratio : Number, optional

Ratio of lifting channels to hidden_channels. The number of lifting channels in the lifting block of the FNO is lifting_channel_ratio * hidden_channels (by default, 2 * hidden_channels).

projection_channel_ratio : Number, optional

Ratio of projection channels to hidden_channels. The number of projection channels in the projection block of the FNO is projection_channel_ratio * hidden_channels (by default, 2 * hidden_channels).
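The two ratios above translate into channel counts by simple multiplication, as in this illustrative sketch (arithmetic only, not library code):

```python
# Illustrative arithmetic only: how the channel ratios determine the widths
# of the lifting and projection blocks.
hidden_channels = 64
lifting_channel_ratio = 2      # default
projection_channel_ratio = 2   # default

lifting_channels = lifting_channel_ratio * hidden_channels
projection_channels = projection_channel_ratio * hidden_channels
print(lifting_channels, projection_channels)  # 128 128
```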

positional_embedding : Union[str, nn.Module], optional

Positional embedding to apply to the last channels of the raw input before it is passed through the FNO. Options:

  • “grid”: Appends a grid positional embedding with default settings to the last channels of the raw input. Assumes the inputs are discretized over a grid with entry [0, 0, …] at the origin and side lengths of 1.

  • GridEmbeddingND: Uses this module directly (see neuralop.embeddings.GridEmbeddingND for details).

  • GridEmbedding2D: Uses this module directly for 2D cases.

  • None: Does nothing.

Default: “grid”

non_linearity : nn.Module, optional

Non-linear activation function module to use. Default: F.gelu

norm : Literal[“ada_in”, “group_norm”, “instance_norm”], optional

Normalization layer to use. Options: “ada_in”, “group_norm”, “instance_norm”, None. Default: None

complex_data : bool, optional

Whether the data is complex-valued. If True, initializes complex-valued modules. Default: False

use_channel_mlp : bool, optional

Whether to use an MLP layer after each FNO block. Default: True

channel_mlp_dropout : float, optional

Dropout parameter for ChannelMLP in FNO Block. Default: 0

channel_mlp_expansion : float, optional

Expansion parameter for ChannelMLP in FNO Block. Default: 0.5

channel_mlp_skip : Literal[“linear”, “identity”, “soft-gating”, None], optional

Type of skip connection to use in channel-mixing mlp. Options: “linear”, “identity”, “soft-gating”, None. Default: “soft-gating”

fno_skip : Literal[“linear”, “identity”, “soft-gating”, None], optional

Type of skip connection to use in FNO layers. Options: “linear”, “identity”, “soft-gating”, None. Default: “linear”

resolution_scaling_factor : Union[Number, List[Number]], optional

Layer-wise factor by which to scale the domain resolution of the function. Options:

  • None: No scaling.

  • A single number n: Scales the resolution by n at each layer.

  • A list of numbers [n_0, n_1, …]: Scales layer i’s resolution by n_i.

Default: None

domain_padding : Union[Number, List[Number]], optional

Percentage of padding to use. Options:

  • None: No padding.

  • A single number: Percentage of padding along all dimensions.

  • A list of numbers [p1, p2, …, pN]: Percentage of padding along each dimension.

Default: None
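As an illustrative sketch of the scale involved (assuming, for simplicity, that the stated fraction of the original size is added along a padded dimension; the exact padding scheme is an internal detail of the library):

```python
# Illustrative sketch only: how a padding fraction scales one spatial
# dimension. This shows the rough scale, not the library's exact scheme.
def padded_size(size, padding_fraction):
    """Size after adding `padding_fraction` of the original size as padding."""
    return round(size * (1 + padding_fraction))

print(padded_size(64, 0.25))  # 80
```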

fno_block_precision : str, optional

Precision mode in which to perform the spectral convolution. Options: “full”, “half”, “mixed”. Default: “full”

stabilizer : str, optional

Whether to use a stabilizer in the FNO block. Options: “tanh”, None. Default: None. A stabilizer greatly improves performance when fno_block_precision=“mixed”.

max_n_modes : Tuple[int, …], optional

Maximum number of modes to use in the Fourier domain during training. If None, all of n_modes are used. If a tuple of integers, n_modes can be incrementally increased during training, up to this maximum, by updating it dynamically.

factorization : str, optional

Tensor factorization of the FNO layer weights to use. Options: None, “Tucker”, “CP”, “TT”, or other factorization methods supported by tltorch. Default: None

rank : float, optional

Tensor rank to use in the factorization. Default: 1.0. Set to a float < 1.0 when using a TFNO (i.e. when factorization is not None). A TFNO with rank 0.1 has roughly 10% of the parameters of a dense FNO.
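The ~10% figure can be sanity-checked with back-of-the-envelope arithmetic, using the dense weight shape shown in the Examples section ([64, 64, 12, 7] for n_modes=(12, 12)). This is an illustrative estimate, not library code:

```python
# Back-of-the-envelope estimate (illustrative only) of how `rank` scales the
# spectral weight size. For n_modes=(12, 12) with an rFFT along the last
# dimension, a dense 2D spectral weight has shape
# (in_channels, out_channels, 12, 12 // 2 + 1) = (64, 64, 12, 7).
in_channels = out_channels = 64
modes = (12, 12 // 2 + 1)
dense_params = in_channels * out_channels * modes[0] * modes[1]
print(dense_params)  # 344064 complex entries per layer

# With a factorization and rank < 1.0, the stored parameter count shrinks to
# roughly rank * dense_params, e.g. rank=0.1 keeps about 10%.
rank = 0.1
print(int(rank * dense_params))  # 34406
```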

fixed_rank_modes : bool, optional

Whether to leave certain modes unfactorized. Default: False

implementation : str, optional

Implementation method for factorized tensors. Options: “factorized”, “reconstructed”. Default: “factorized”

decomposition_kwargs : dict, optional

Extra kwargs for tensor decomposition (see tltorch.FactorizedTensor). Default: {}

separable : bool, optional

Whether to use a separable spectral convolution. Default: False

preactivation : bool, optional

Whether to compute the FNO forward pass with ResNet-style preactivation. Default: False

conv_module : nn.Module, optional

Module to use for FNOBlock’s convolutions. Default: SpectralConv

enforce_hermitian_symmetry : bool, optional

Whether to enforce Hermitian symmetry conditions when performing the inverse FFT for real-valued data. Only used when conv_module is SpectralConv or a subclass; ignored otherwise. When True, explicitly enforces that the 0th and Nyquist frequencies are real-valued before calling irfft. When False, relies on cuFFT’s irfftn to handle symmetry automatically, which may fail on certain GPUs or input sizes, causing line artifacts. Default: True

References

[1] Li, Z. et al. “Fourier Neural Operator for Parametric Partial Differential Equations” (2021). ICLR 2021, https://arxiv.org/pdf/2010.08895.

Examples

>>> from neuralop.models import FNO
>>> model = FNO(n_modes=(12,12), in_channels=1, out_channels=1, hidden_channels=64)
>>> model
FNO(
  (positional_embedding): GridEmbeddingND()
  (fno_blocks): FNOBlocks(
    (convs): SpectralConv(
      (weight): ModuleList(
        (0-3): 4 x DenseTensor(shape=torch.Size([64, 64, 12, 7]), rank=None)
      )
    )
    ... torch.nn.Module printout truncated ...
forward(x, output_shape=None, **kwargs)[source]

FNO’s forward pass

  1. Applies optional positional encoding

  2. Sends inputs through a lifting layer to a high-dimensional latent space

  3. Applies optional domain padding to high-dimensional intermediate function representation

  4. Applies n_layers Fourier/FNO layers in sequence (SpectralConvolution + skip connections, nonlinearity)

  5. If domain padding was applied, domain padding is removed

  6. Projection of intermediate function representation to the output channels

Parameters:
x : tensor

Input tensor.

output_shape : {tuple, list of tuples, None}, default is None

Gives the option of specifying the exact output shape for odd-shaped inputs.

  • If None, no output shape is specified.

  • If a tuple, specifies the output shape of the last FNO block.

  • If a list of tuples, specifies the exact output shape of each FNO block.