neuralop.models.SFNO
- class neuralop.models.SFNO(*args, **kwargs)
N-Dimensional Spherical Fourier Neural Operator. The SFNO learns a mapping between spaces of functions discretized over regular grids using Fourier convolutions, as described in [1].
The key component of an SFNO is its SpectralConv layer (see neuralop.layers.spectral_convolution), which is similar to a standard CNN conv layer but operates in the frequency domain. For a deeper dive into the SFNO architecture, refer to Fourier Neural Operators.
- Parameters:
- n_modes : Tuple[int, …]
Number of modes to keep in the Fourier layer, along each dimension. The dimensionality of the SFNO is inferred from len(n_modes). Each entry of n_modes must be large enough to capture the relevant frequencies, but no larger than max_resolution // 2 (the Nyquist frequency).
- in_channels : int
Number of channels in the input function. Determined by the problem.
- out_channels : int
Number of channels in the output function. Determined by the problem.
- hidden_channels : int
Width of the SFNO (i.e. number of channels). This significantly affects the number of parameters of the SFNO. A good starting point is 64, increased if more expressivity is needed. Update lifting_channel_ratio and projection_channel_ratio accordingly, since they are proportional to hidden_channels.
- n_layers : int, optional
Number of Fourier layers. Default: 4
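As a quick sanity check on the Nyquist constraint on n_modes, the snippet below is purely illustrative (the helper is not part of the neuralop API); it relates a grid resolution to the number of independently retainable Fourier modes:

```python
import numpy as np

def max_keepable_modes(resolution: int) -> int:
    # For a real-valued signal sampled at `resolution` points, there are
    # at most resolution // 2 independent positive frequencies (the
    # Nyquist limit), so n_modes along that dimension must not exceed it.
    return resolution // 2

# A 64x64 grid supports at most 32 modes per dimension:
assert max_keepable_modes(64) == 32

# Consistency check: the rfft of a real length-N signal has N // 2 + 1
# coefficients (frequencies 0 .. N//2), matching the bound above.
n = 64
coeffs = np.fft.rfft(np.random.rand(n))
assert coeffs.shape[0] == n // 2 + 1
```

With a 64x64 training grid, n_modes=(12, 12) (as in the Examples below) sits comfortably under this limit of 32 per dimension.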
Methods
forward(x[, output_shape]): FNO's forward pass
- Other Parameters:
- lifting_channel_ratio : Number, optional
Ratio of lifting channels to hidden_channels. The number of lifting channels in the lifting block of the SFNO is lifting_channel_ratio * hidden_channels (default ratio 2, i.e. 2 * hidden_channels).
- projection_channel_ratio : Number, optional
Ratio of projection channels to hidden_channels. The number of projection channels in the projection block of the SFNO is projection_channel_ratio * hidden_channels (default ratio 2, i.e. 2 * hidden_channels).
- positional_embedding : Union[str, nn.Module], optional
Positional embedding to apply to the last channels of the raw input before it is passed through the SFNO. Options:
- “grid”: Appends a grid positional embedding with default settings to the last channels of the raw input. Assumes the inputs are discretized over a grid with entry [0, 0, …] at the origin and side lengths of 1.
- GridEmbeddingND: Uses this module directly (see neuralop.embeddings.GridEmbeddingND for details).
- GridEmbedding2D: Uses this module directly for 2D cases.
- None: Does nothing.
Default: “grid”
- non_linearity : nn.Module, optional
Non-linear activation function module to use. Default: F.gelu
- norm : Literal[“ada_in”, “group_norm”, “instance_norm”], optional
Normalization layer to use. Options: “ada_in”, “group_norm”, “instance_norm”, None. Default: None
- complex_data : bool, optional
Whether the data is complex-valued. If True, initializes complex-valued modules. Default: False
- use_channel_mlp : bool, optional
Whether to use an MLP layer after each SFNO block. Default: True
- channel_mlp_dropout : float, optional
Dropout parameter for the ChannelMLP in the SFNO block. Default: 0
- channel_mlp_expansion : float, optional
Expansion parameter for the ChannelMLP in the SFNO block. Default: 0.5
- channel_mlp_skip : Literal[“linear”, “identity”, “soft-gating”, None], optional
Type of skip connection to use in the channel-mixing MLP. Options: “linear”, “identity”, “soft-gating”, None. Default: “soft-gating”
- sfno_skip : Literal[“linear”, “identity”, “soft-gating”, None], optional
Type of skip connection to use in the SFNO layers. Options: “linear”, “identity”, “soft-gating”, None. Default: “linear”
- resolution_scaling_factor : Union[Number, List[Number]], optional
Layer-wise factor by which to scale the domain resolution of the function. Options:
- None: No scaling
- Single number n: Scales resolution by n at each layer
- List of numbers [n_0, n_1, …]: Scales layer i’s resolution by n_i
Default: None
- domain_padding : Union[Number, List[Number]], optional
Percentage of padding to use. Options:
- None: No padding
- Single number: Percentage of padding along all dimensions
- List of numbers [p1, p2, …, pN]: Percentage of padding along each dimension
Default: None
- sfno_block_precision : str, optional
Precision mode in which to perform the spectral convolution. Options: “full”, “half”, “mixed”. Default: “full”
- stabilizer : str, optional
Whether to use a stabilizer in the SFNO block. Options: “tanh”, None. Default: None. A stabilizer greatly improves performance when sfno_block_precision=“mixed”.
- max_n_modes : Tuple[int, …], optional
Maximum number of modes to use in the Fourier domain during training. If None, all n_modes are used. If a tuple of integers, the number of modes can be incrementally increased during training; this can be updated dynamically.
- factorization : str, optional
Tensor factorization of the SFNO layer weights to use. Options: “None”, “Tucker”, “CP”, “TT”, or other factorization methods supported by tltorch. Default: None
- rank : float, optional
Tensor rank to use in the factorization. Default: 1.0. Set to a float < 1.0 when using a TSFNO (i.e. when factorization is not None). A TSFNO with rank 0.1 has roughly 10% of the parameters of a dense SFNO.
- fixed_rank_modes : bool, optional
Whether to not factorize certain modes. Default: False
- implementation : str, optional
Implementation method for factorized tensors. Options: “factorized”, “reconstructed”. Default: “factorized”
- decomposition_kwargs : dict, optional
Extra kwargs for tensor decomposition (see tltorch.FactorizedTensor). Default: {}
- separable : bool, optional
Whether to use a separable spectral convolution. Default: False
- preactivation : bool, optional
Whether to compute the SFNO forward pass with ResNet-style preactivation. Default: False
- conv_module : nn.Module, optional
Module to use for the SFNOBlock’s convolutions. Default: SpectralConv
- enforce_hermitian_symmetry : bool, optional
Whether to enforce Hermitian symmetry conditions when performing the inverse FFT for real-valued data. Only used when conv_module is SpectralConv or a subclass; ignored otherwise. When True, explicitly enforces that the 0th frequency and the Nyquist frequency are real-valued before calling irfft. When False, relies on cuFFT’s irfftn to handle symmetry automatically, which may fail on certain GPUs or input sizes, causing line artifacts. Default: True
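As a back-of-envelope illustration of what the rank parameter buys, the helpers below are a simplification and not part of the neuralop API; exact counts depend on the tltorch decomposition chosen, but the docstring's "rank 0.1 has roughly 10% of the parameters" rule of thumb can be sketched directly:

```python
from math import prod

def dense_spectral_params(in_ch: int, out_ch: int, n_modes: tuple) -> int:
    # A dense spectral weight stores one coefficient per
    # (input channel, output channel, retained mode) combination.
    return in_ch * out_ch * prod(n_modes)

def factorized_spectral_params(in_ch: int, out_ch: int, n_modes: tuple,
                               rank: float) -> int:
    # With rank expressed as a fraction of the dense size, a factorized
    # layer keeps roughly that fraction of the parameters (approximation;
    # the real count depends on the Tucker/CP/TT structure).
    return int(rank * dense_spectral_params(in_ch, out_ch, n_modes))

dense = dense_spectral_params(64, 64, (12, 12))       # 589,824 coefficients
reduced = factorized_spectral_params(64, 64, (12, 12), rank=0.1)
assert reduced == dense // 10  # roughly 10% of the dense parameter count
```

This is why rank is typically set well below 1.0 whenever factorization is not None: the spectral weights dominate the SFNO's parameter budget.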
References
[1] Li, Z. et al. “Fourier Neural Operator for Parametric Partial Differential Equations” (2021). ICLR 2021, https://arxiv.org/pdf/2010.08895.
Examples
>>> from neuralop.models import SFNO
>>> model = SFNO(n_modes=(12, 12), in_channels=1, out_channels=1, hidden_channels=64)
>>> model
SFNO(
  (positional_embedding): GridEmbeddingND()
  (sfno_blocks): SFNOBlocks(
    (convs): SpectralConv(
      (weight): ModuleList(
        (0-3): 4 x DenseTensor(shape=torch.Size([64, 64, 12, 7]), rank=None)
      )
    )
... torch.nn.Module printout truncated ...
- forward(x, output_shape=None, **kwargs)
FNO’s forward pass:
1. Applies optional positional encoding
2. Sends inputs through a lifting layer to a high-dimensional latent space
3. Applies optional domain padding to the high-dimensional intermediate function representation
4. Applies n_layers Fourier/FNO layers in sequence (spectral convolution + skip connection, nonlinearity)
5. If domain padding was applied, removes it
6. Projects the intermediate function representation onto the output channels
- Parameters:
- x : tensor
Input tensor
- output_shape : {tuple, tuple list, None}, default is None
Gives the option of specifying the exact output shape for odd-shaped inputs.
- If None, don’t specify an output shape
- If tuple, specifies the output shape of the last FNO block
- If tuple list, specifies the exact output shape of each FNO block
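The interplay between resolution_scaling_factor and output_shape can be sketched as a plain resolution calculation; the helper below is hypothetical (not part of the neuralop API) and only mimics how per-block resolutions would evolve, with a tuple output_shape overriding the last block:

```python
def layer_resolutions(input_shape, n_layers, scaling_factor=None,
                      output_shape=None):
    # Track the spatial resolution after each FNO block. scaling_factor may
    # be None, one number applied at every layer, or one number per layer;
    # output_shape, if given as a tuple, overrides only the last block.
    if scaling_factor is None:
        factors = [1] * n_layers
    elif isinstance(scaling_factor, (int, float)):
        factors = [scaling_factor] * n_layers
    else:
        factors = list(scaling_factor)
    shapes, shape = [], tuple(input_shape)
    for i, f in enumerate(factors):
        shape = tuple(int(round(s * f)) for s in shape)
        if output_shape is not None and i == len(factors) - 1:
            shape = tuple(output_shape)  # explicit override for odd shapes
        shapes.append(shape)
    return shapes

# Upsample by 2 at each of 4 layers, forcing the final block to (100, 100):
assert layer_resolutions((16, 16), 4, 2, output_shape=(100, 100)) == \
    [(32, 32), (64, 64), (128, 128), (100, 100)]
```

Passing a list of tuples instead would pin every block's output resolution explicitly, which is the "tuple list" option described above.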