neuralop.layers.fno_block.FNOBlocks
- class neuralop.layers.fno_block.FNOBlocks(in_channels, out_channels, n_modes, resolution_scaling_factor=None, n_layers=1, max_n_modes=None, fno_block_precision='full', use_channel_mlp=True, channel_mlp_dropout=0, channel_mlp_expansion=0.5, non_linearity=<built-in function gelu>, stabilizer=None, norm=None, ada_in_features=None, preactivation=False, fno_skip='linear', channel_mlp_skip='soft-gating', complex_data=False, separable=False, factorization=None, rank=1.0, conv_module=<class 'neuralop.layers.spectral_convolution.SpectralConv'>, fixed_rank_modes=False, implementation='factorized', decomposition_kwargs={})[source]
FNOBlocks implements a sequence of Fourier layers.
The Fourier layers are first described in [1], and the exact implementation details of the Fourier layer architecture are discussed in [2].
- Parameters:
- in_channels : int
Number of input channels to Fourier layers
- out_channels : int
Number of output channels after Fourier layers
- n_modes : int or List[int]
Number of modes to keep along each dimension in frequency space. Can either be specified as an int (for all dimensions) or an iterable with one number per dimension
- resolution_scaling_factor : Optional[Union[Number, List[Number]]], optional
Factor by which to scale outputs for super-resolution, by default None
- n_layers : int, optional
Number of Fourier layers to apply in sequence, by default 1
- max_n_modes : int or List[int], optional
Maximum number of modes to keep along each dimension, by default None
- fno_block_precision : str, optional
Floating point precision to use for computations. Options: “full”, “half”, “mixed”, by default “full”
- use_channel_mlp : bool, optional
Whether to use an MLP layer after each FNO block, by default True
- channel_mlp_dropout : float, optional
Dropout parameter for self.channel_mlp, by default 0
- channel_mlp_expansion : float, optional
Expansion parameter for self.channel_mlp, by default 0.5
- non_linearity : torch.nn.functional module, optional
Nonlinear activation function to use between layers, by default F.gelu
- stabilizer : Literal[“tanh”], optional
Stabilizing module to use between certain layers. Options: “tanh”, None, by default None
- norm : Literal[“ada_in”, “group_norm”, “instance_norm”, “batch_norm”], optional
Normalization layer to use. Options: “ada_in”, “group_norm”, “instance_norm”, “batch_norm”, None, by default None
- ada_in_features : int, optional
Number of features for adaptive instance norm above, by default None
- preactivation : bool, optional
Whether to apply pre-activation in the forward pass, by default False. If True, the nonlinear activation and norm are applied before the Fourier convolution; if False, they are applied after the Fourier convolution.
- fno_skip : str, optional
Module to use for FNO skip connections. Options: “linear”, “soft-gating”, “identity”, None, by default “linear”. If None, no skip connection is added. See layers.skip_connections for more details.
- channel_mlp_skip : str, optional
Module to use for ChannelMLP skip connections. Options: “linear”, “soft-gating”, “identity”, None, by default “soft-gating”. If None, no skip connection is added. See layers.skip_connections for more details.
- Attributes:
- n_modes
Methods
- forward(x[, index, output_shape]): Define the computation performed at every call.
- get_block(indices): Returns a sub-FNO Block layer from the jointly parametrized main block.
- set_ada_in_embeddings(*embeddings): Sets the embeddings of each Ada-IN norm layer.
- forward_with_postactivation
- forward_with_preactivation
- Other Parameters:
- complex_data : bool, optional
Whether the FNO’s data takes on complex values in space, by default False
- separable : bool, optional
Separable parameter for SpectralConv, by default False
- factorization : str, optional
Factorization parameter for SpectralConv. Options: “tucker”, “cp”, “tt”, None, by default None
- rank : float, optional
Rank parameter for SpectralConv, by default 1.0
- conv_module : BaseConv, optional
Module to use for convolutions in the FNO block, by default SpectralConv
- joint_factorization : bool, optional
Whether to factorize all SpectralConv weights as one tensor, by default False
- fixed_rank_modes : bool, optional
fixed_rank_modes parameter for SpectralConv, by default False
- implementation : str, optional
Implementation parameter for SpectralConv. Options: “factorized”, “reconstructed”, by default “factorized”
- decomposition_kwargs : dict, optional
Kwargs for tensor decomposition in SpectralConv, by default dict()
References
[1] Li, Z. et al. “Fourier Neural Operator for Parametric Partial Differential Equations” (2021). ICLR 2021, https://arxiv.org/pdf/2010.08895.
[2] Kossaifi, J., Kovachki, N., Azizzadenesheli, K., Anandkumar, A. “Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs” (2024). TMLR 2024, https://openreview.net/pdf?id=AWiDlO63bH.
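Examples
A minimal usage sketch (not taken from the library's own examples; the shapes and argument values are illustrative, and it assumes the import path shown in the class name above):

import torch
from neuralop.layers.fno_block import FNOBlocks

# Two stacked 2D Fourier layers, keeping 16 modes per spatial dimension.
blocks = FNOBlocks(
    in_channels=32,
    out_channels=32,
    n_modes=(16, 16),
    n_layers=2,
)

# Channel-first input: (batch, channels, height, width).
x = torch.randn(4, 32, 64, 64)

# Each call applies the single Fourier layer selected by `index`,
# so iterating over the layers applies the full sequence.
for i in range(2):
    x = blocks(x, index=i)

print(x.shape)  # expected: torch.Size([4, 32, 64, 64])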
- set_ada_in_embeddings(*embeddings)[source]
Sets the embeddings of each Ada-IN norm layer
- Parameters:
- embeddings : tensor or list of tensors
If a single embedding is given, it will be used for each norm layer; otherwise, each embedding will be used for the corresponding norm layer.
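A hedged sketch of the intended usage, assuming the block was constructed with norm=“ada_in” and that a single 1-D embedding of size ada_in_features is broadcast to every norm layer:

import torch
from neuralop.layers.fno_block import FNOBlocks

blocks = FNOBlocks(
    in_channels=16,
    out_channels=16,
    n_modes=(12, 12),
    n_layers=2,
    norm="ada_in",
    ada_in_features=64,  # size of the conditioning embedding (illustrative)
)

# One embedding, reused by every Ada-IN norm layer.
embedding = torch.randn(64)
blocks.set_ada_in_embeddings(embedding)

# The embeddings must be set before running the forward pass.
y = blocks(torch.randn(2, 16, 32, 32), index=0)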
- forward(x, index=0, output_shape=None)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
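An illustrative call, assuming output_shape requests a resampled spatial resolution for the selected layer (call the module instance, not .forward, so registered hooks run):

import torch
from neuralop.layers.fno_block import FNOBlocks

blocks = FNOBlocks(in_channels=8, out_channels=8, n_modes=(12, 12), n_layers=1)
x = torch.randn(2, 8, 32, 32)

# Apply layer 0 and ask for a larger output grid (assumed behavior of output_shape).
y = blocks(x, index=0, output_shape=(64, 64))
print(y.shape)  # expected: torch.Size([2, 8, 64, 64])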
- get_block(indices)[source]
Returns a sub-FNO Block layer from the jointly parametrized main block.
The parametrization of an FNOBlock layer is shared with the main one.
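A sketch of extracting one layer from a multi-layer block, assuming the returned sub-block is itself callable on an input tensor:

import torch
from neuralop.layers.fno_block import FNOBlocks

blocks = FNOBlocks(in_channels=8, out_channels=8, n_modes=(12, 12), n_layers=4)

# Pull out the third Fourier layer; its weights remain shared with `blocks`
# (no copy is made), per the note above.
third_layer = blocks.get_block(indices=2)
y = third_layer(torch.randn(2, 8, 32, 32))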