neuralop.layers.local_fno_block.LocalFNOBlocks
- class neuralop.layers.local_fno_block.LocalFNOBlocks(in_channels, out_channels, n_modes, default_in_shape, resolution_scaling_factor=None, n_layers=1, disco_layers=True, disco_kernel_shape=[2, 4], radius_cutoff=None, domain_length=[2, 2], disco_groups=1, disco_bias=True, diff_layers=True, conv_padding_mode='periodic', fin_diff_kernel_size=3, mix_derivatives=True, max_n_modes=None, fno_block_precision='full', use_channel_mlp=False, channel_mlp_dropout=0, channel_mlp_expansion=0.5, non_linearity=<built-in function gelu>, stabilizer=None, norm=None, ada_in_features=None, preactivation=False, fno_skip='linear', channel_mlp_skip='soft-gating', separable=False, factorization=None, rank=1.0, conv_module=<class 'neuralop.layers.spectral_convolution.SpectralConv'>, fixed_rank_modes=False, implementation='factorized', decomposition_kwargs={}, fft_norm='forward', **kwargs)[source]
LocalFNOBlocks implements a sequence of Fourier layers whose operations were first described in [1]. The exact implementation details of the Fourier layer architecture are discussed in [2]. The Fourier layers are placed in parallel with differential kernel layers and local integral layers, as described in [3].
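A minimal usage sketch (illustrative only; the shapes and hyperparameters below are assumptions, not values taken from this reference):

import torch
from neuralop.layers.local_fno_block import LocalFNOBlocks

# Hypothetical 2D setup: 32 channels on a 64 x 64 grid, four layers.
block = LocalFNOBlocks(
    in_channels=32,
    out_channels=32,
    n_modes=(16, 16),           # Fourier modes kept per spatial dimension
    default_in_shape=(64, 64),  # spatial resolution the local layers are built for
    n_layers=4,
)

x = torch.randn(8, 32, 64, 64)  # (batch, channels, height, width)
y = block(x)                    # applies the block at index 0 (see forward below)
print(y.shape)                  # expected: torch.Size([8, 32, 64, 64])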
- Parameters:
- in_channels: int
input channels to the Fourier layers
- out_channels: int
output channels after the Fourier layers
- n_modes: int or List[int]
number of modes to keep along each dimension in frequency space. Can either be specified as a single int (shared across all dimensions) or as an iterable with one number per dimension
- default_in_shape: Tuple[int]
default input shape on the spatiotemporal dimensions
- resolution_scaling_factor: Optional[Union[Number, List[Number]]], optional
factor by which to scale outputs for super-resolution, by default None
- n_layers: int, optional
number of Fourier layers to apply in sequence, by default 1
- disco_layers: bool or List[bool], optional
whether to include a local integral kernel parallel connection at each layer; if a list, it must be the same length as n_layers, and if a single bool, the setting is shared across all layers (see the sketch after this parameter list)
- disco_kernel_shape: Union[int, List[int]]
kernel shape for local integral. Expects either a single integer for isotropic kernels or two integers for anisotropic kernels
- domain_length: torch.Tensor, optional
extent/length of the physical domain. Assumes square domain [-1, 1]^2 by default
- disco_groups: int, optional
number of groups in the local integral convolution, by default 1
- disco_bias: bool, optional
whether to use a bias for the integral kernel, by default True
- radius_cutoff: float, optional
cutoff radius (with respect to domain_length) for the local integral kernel, by default None
- diff_layers: bool or List[bool], optional
whether to include a differential kernel parallel connection at each layer; if a list, it must be the same length as n_layers, and if a single bool, the setting is shared across all layers
- conv_padding_mode: str in ['periodic', 'circular', 'replicate', 'reflect', 'zeros'], optional
padding mode for spatial convolution kernels
- fin_diff_kernel_size: odd int, optional
convolution kernel size for the finite difference convolution
- mix_derivatives: bool, optional
whether to mix derivatives across channels
- max_n_modes: int or List[int], optional
maximum number of modes to keep along each dimension, by default None
- fno_block_precision: str, optional
floating point precision to use for computations, by default "full"
- channel_mlp_dropout: float, optional
dropout parameter for self.channel_mlp, by default 0
- channel_mlp_expansion: float, optional
expansion parameter for self.channel_mlp, by default 0.5
- non_linearity: callable from torch.nn.functional, optional
nonlinear activation function to use between layers, by default F.gelu
- stabilizer: Literal["tanh"], optional
stabilizing module to use between certain layers, by default None; if "tanh", use tanh
- norm: Literal["ada_in", "group_norm", "instance_norm"], optional
normalization layer to use, by default None
- ada_in_features: int, optional
number of features for the adaptive instance norm above, by default None
- preactivation: bool, optional
whether to call the forward pass with pre-activation, by default False; if True, the nonlinear activation and norm are applied before the Fourier convolution, and if False, they are applied after it
- fno_skip: str, optional
module to use for FNO skip connections, by default "linear"; see layers.skip_connections for more details
- channel_mlp_skip: str, optional
module to use for ChannelMLP skip connections, by default "soft-gating"; see layers.skip_connections for more details
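As referenced in the disco_layers and diff_layers entries above, a sketch of the per-layer toggles and the resolution scaling factor (values are illustrative assumptions, not defaults from this reference):

from neuralop.layers.local_fno_block import LocalFNOBlocks

# Hypothetical: local integral kernels only in the first two of four layers,
# differential kernels in every layer, and 2x super-resolution.
block = LocalFNOBlocks(
    in_channels=16,
    out_channels=16,
    n_modes=(12, 12),
    default_in_shape=(32, 32),
    n_layers=4,
    disco_layers=[True, True, False, False],  # one flag per layer
    diff_layers=True,                         # a single bool is shared by all layers
    resolution_scaling_factor=2,              # outputs are upsampled relative to inputs
)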
- Attributes:
- n_modes
Methods
- forward(x[, index, output_shape]): Define the computation performed at every call.
- get_block(indices): Returns a sub-FNO block layer from the jointly parametrized main block.
- set_ada_in_embeddings(*embeddings): Sets the embeddings of each Ada-IN norm layer.
- forward_with_postactivation
- forward_with_preactivation
- Other Parameters:
- complex_data: bool, optional
whether the FNO's data takes on complex values in space, by default False
- separable: bool, optional
separable parameter for SpectralConv, by default False
- factorization: str, optional
factorization parameter for SpectralConv, by default None (see the sketch after this list)
- rank: float, optional
rank parameter for SpectralConv, by default 1.0
- conv_module: BaseConv, optional
module to use for convolutions in the FNO block, by default SpectralConv
- joint_factorization: bool, optional
whether to factorize all SpectralConv weights as one tensor, by default False
- fixed_rank_modes: bool, optional
fixed_rank_modes parameter for SpectralConv, by default False
- implementation: str, optional
implementation parameter for SpectralConv, by default "factorized"
- decomposition_kwargs: dict, optional
keyword arguments for the tensor decomposition in SpectralConv, by default dict()
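As an illustrative sketch of the SpectralConv-related options above (the factorization name below follows the TensorLy-Torch conventions used elsewhere in neuralop and is an assumption here, not a value from this reference):

from neuralop.layers.local_fno_block import LocalFNOBlocks

# Hypothetical: factorized spectral weights at half rank.
block = LocalFNOBlocks(
    in_channels=16,
    out_channels=16,
    n_modes=(12, 12),
    default_in_shape=(32, 32),
    factorization="tucker",       # assumed factorization identifier (TensorLy-Torch style)
    rank=0.5,                     # fraction of the full rank
    implementation="factorized",  # the default, shown here for clarity
)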
References
[1] Li, Z. et al. "Fourier Neural Operator for Parametric Partial Differential Equations" (2021). ICLR 2021, https://arxiv.org/pdf/2010.08895.
[2] Kossaifi, J., Kovachki, N., Azizzadenesheli, K., Anandkumar, A. "Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs" (2024). TMLR 2024, https://openreview.net/pdf?id=AWiDlO63bH.
[3] Liu-Schiaffini, M., Berner, J., Bonev, B., Kurth, T., Azizzadenesheli, K., Anandkumar, A. "Neural Operators with Localized Integral and Differential Kernels" (2024). ICML 2024, https://arxiv.org/pdf/2402.16845.
- set_ada_in_embeddings(*embeddings)[source]
Sets the embeddings of each Ada-IN norm layer
- Parameters:
- embeddings: tensor or list of tensors
if a single embedding is given, it will be used for each norm layer; otherwise, each embedding will be used for the corresponding norm layer
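A brief sketch (illustrative; assumes adaptive instance normalization is enabled via norm="ada_in" and that a single embedding matches ada_in_features):

import torch
from neuralop.layers.local_fno_block import LocalFNOBlocks

block = LocalFNOBlocks(
    in_channels=16,
    out_channels=16,
    n_modes=(12, 12),
    default_in_shape=(32, 32),
    n_layers=2,
    norm="ada_in",
    ada_in_features=64,
)
embedding = torch.randn(64)             # single embedding, shared by every Ada-IN layer
block.set_ada_in_embeddings(embedding)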
- forward(x, index=0, output_shape=None)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
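Per the note above, call the module instance rather than forward directly; for example (illustrative, reusing block from the first sketch above):

x = torch.randn(8, 32, 64, 64)
y = block(x, index=0)      # runs registered hooks, then forward(x, index=0)
# y = block.forward(x)     # works, but silently skips any registered hooks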
- get_block(indices)[source]
Returns a sub-FNO block layer from the jointly parametrized main block.
The parametrization of the returned FNOBlock layer is shared with the main one.
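For example (illustrative, reusing the multi-layer block from the first sketch above):

# Hypothetical: address the second of the jointly parametrized layers;
# its weights are shared with the parent block, not copied.
sub_block = block.get_block(1)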