neuralop.layers.local_no_block.LocalNOBlocks

class neuralop.layers.local_no_block.LocalNOBlocks(in_channels, out_channels, n_modes, default_in_shape, resolution_scaling_factor=None, n_layers=1, disco_layers=True, disco_kernel_shape=[2, 4], radius_cutoff=None, domain_length=[2, 2], disco_groups=1, disco_bias=True, diff_layers=True, conv_padding_mode='periodic', fin_diff_kernel_size=3, mix_derivatives=True, max_n_modes=None, local_no_block_precision='full', use_channel_mlp=False, channel_mlp_dropout=0, channel_mlp_expansion=0.5, non_linearity=<built-in function gelu>, stabilizer=None, norm=None, ada_in_features=None, preactivation=False, local_no_skip='linear', channel_mlp_skip='soft-gating', separable=False, factorization=None, rank=1.0, conv_module=<class 'neuralop.layers.spectral_convolution.SpectralConv'>, fixed_rank_modes=False, implementation='factorized', decomposition_kwargs={}, fft_norm='forward')[source]

Local Neural Operator blocks with localized integral and differential kernels.

It is implemented as described in [3].

This class implements neural operator blocks that combine Fourier neural operators with localized integral and differential kernels to capture both global and local features in PDE solutions [3]. The architecture addresses the over-smoothing limitations of purely global FNOs while maintaining resolution-independence through principled local operations.

The key innovation is the integration of two types of local operations:

  1. Differential kernels: learn finite difference stencils that converge to differential operators under appropriate scaling.

  2. Local integral kernels: use discrete-continuous convolutions with locally supported kernels to capture local interactions.

Parameters:
in_channels : int

Number of input channels to Fourier layers

out_channels : int

Number of output channels after Fourier layers

n_modes : int or List[int]

Number of modes to keep along each dimension in frequency space. Can either be specified as an int (for all dimensions) or an iterable with one number per dimension

default_in_shape : Tuple[int]

Default input shape for spatiotemporal dimensions

resolution_scaling_factor : Optional[Union[Number, List[Number]]], optional

Factor by which to scale outputs for super-resolution, by default None

n_layers : int, optional

Number of neural operator layers to apply in sequence, by default 1

disco_layers : bool or List[bool], optional

Whether to include local integral kernel connections at each layer. If a single bool, applies to all layers. If a list, must match n_layers.

disco_kernel_shape : Union[int, List[int]], optional

Kernel shape for local integral operations. Single int for isotropic kernels, two ints for anisotropic kernels, by default [2,4]

domain_length : torch.Tensor, optional

Physical domain extent/length. Assumes square domain [-1, 1]^2 by default

disco_groups : int, optional

Number of groups in local integral convolution, by default 1

disco_bias : bool, optional

Whether to use bias for integral kernel, by default True

radius_cutoff : float, optional

Cutoff radius (relative to domain_length) for local integral kernel, by default None

diff_layers : bool or List[bool], optional

Whether to include differential kernel connections at each layer. If a single bool, applies to all layers. If a list, must match n_layers.

conv_padding_mode : str, optional

Padding mode for spatial convolution kernels. Options: ‘periodic’, ‘circular’, ‘replicate’, ‘reflect’, ‘zeros’. By default ‘periodic’

fin_diff_kernel_size : int, optional

Kernel size for finite difference convolution (must be odd), by default 3

mix_derivatives : bool, optional

Whether to mix derivatives across channels, by default True

max_n_modes : int or List[int], optional

Maximum number of modes to keep along each dimension, by default None

local_no_block_precision : str, optional

Floating point precision for computations, by default “full”

use_channel_mlp : bool, optional

Whether to use MLP layer after each block, by default False

channel_mlp_dropout : int, optional

Dropout parameter for channel MLP, by default 0

channel_mlp_expansion : float, optional

Expansion factor for channel MLP, by default 0.5

non_linearity : torch.nn.F module, optional

Nonlinear activation function between layers, by default F.gelu

stabilizer : Literal["tanh"], optional

Stabilizing module between layers. Options: “tanh”. By default None

norm : Literal["ada_in", "group_norm", "instance_norm"], optional

Normalization layer to use, by default None

ada_in_features : int, optional

Number of features for adaptive instance normalization, by default None

preactivation : bool, optional

Whether to call the forward pass with pre-activation, by default False. If True, the nonlinear activation and norm are applied before the Fourier convolution; if False, they are applied after it.

local_no_skip : str, optional

Module to use for Local NO skip connections, by default "linear". Options: "linear", "identity", "soft-gating", None. If None, no skip connection is added. See layers.skip_connections for more details.

channel_mlp_skip : str, optional

Module to use for ChannelMLP skip connections, by default "soft-gating". Options: "linear", "identity", "soft-gating", None. If None, no skip connection is added. See layers.skip_connections for more details.

Other Parameters:
complex_data : bool, optional

Whether the data takes complex values in space, by default False

separable : bool, optional

Separable parameter for SpectralConv, by default False

factorization : str, optional

Tensor factorization to use for the SpectralConv weights, by default None (dense weights). Options include "tucker", "cp", "tt".

rank : float, optional

Rank parameter for SpectralConv, by default 1.0

conv_module : BaseConv, optional

Convolution module for Local NO block, by default SpectralConv

joint_factorization : bool, optional

Whether to factorize all SpectralConv weights as one tensor, by default False

fixed_rank_modes : bool, optional

Fixed rank modes parameter for SpectralConv, by default False

implementation : str, optional

Implementation method for SpectralConv, by default "factorized". Options: "factorized", "reconstructed".

decomposition_kwargs : dict, optional

Keyword arguments for tensor decomposition in SpectralConv, by default dict()

Attributes:
n_modes

Methods

forward(x[, index, output_shape])

Define the computation performed at every call.

get_block(indices)

Returns a sub-NO Block layer from the jointly parametrized main block

set_ada_in_embeddings(*embeddings)

Set the embeddings of each Ada-IN norm layer.

forward_with_postactivation

forward_with_preactivation

Notes

  • Differential kernels are only implemented for dimensions ≤ 3

  • Local integral kernels are only implemented for 2D domains

References

[1]

Li, Z. et al. “Fourier Neural Operator for Parametric Partial Differential Equations” (2021). ICLR 2021, https://arxiv.org/pdf/2010.08895.

[2]

Kossaifi, J., Kovachki, N., Azizzadenesheli, K., Anandkumar, A. “Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs” (2024). TMLR 2024, https://openreview.net/pdf?id=AWiDlO63bH.

[3]

Liu-Schiaffini M., Berner J., Bonev B., Kurth T., Azizzadenesheli K., Anandkumar A.; “Neural Operators with Localized Integral and Differential Kernels” (2024). ICML 2024, https://arxiv.org/pdf/2402.16845.

set_ada_in_embeddings(*embeddings)[source]

Set the embeddings of each Ada-IN norm layer.

Parameters:
embeddings : tensor or list of tensors

If a single embedding is given, it will be used for each norm layer; otherwise, each embedding will be used for the corresponding norm layer.

forward(x, index=0, output_shape=None)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_block(indices)[source]

Returns a sub-NO Block layer from the jointly parametrized main block

The parametrization of a LocalNOBlocks layer is shared with the main one.