neuralop.layers.spectral_convolution.SpectralConv

class neuralop.layers.spectral_convolution.SpectralConv(in_channels, out_channels, n_modes, complex_data=False, max_n_modes=None, bias=True, separable=False, resolution_scaling_factor: int | float | List[float | int] | None = None, fno_block_precision='full', rank=0.5, factorization=None, implementation='reconstructed', fixed_rank_modes=False, decomposition_kwargs: dict | None = None, init_std='auto', fft_norm='forward', device=None)[source]

SpectralConv implements the Spectral Convolution component of a Fourier layer described in [1] and [2].
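A minimal usage sketch (the shapes and hyperparameters below are illustrative choices, not defaults): a dense and a Tucker-factorized SpectralConv applied to a random batch.

    import torch
    from neuralop.layers.spectral_convolution import SpectralConv

    # Dense spectral convolution keeping (16, 16) Fourier modes.
    conv = SpectralConv(in_channels=3, out_channels=3, n_modes=(16, 16))

    x = torch.randn(4, 3, 64, 64)   # (batch_size, channels, d1, d2)
    y = conv(x)                     # spatial shape is preserved by default
    print(y.shape)                  # torch.Size([4, 3, 64, 64])

    # Factorized variant: the Fourier weight is stored in Tucker form and the
    # input is contracted directly against the factors.
    conv_f = SpectralConv(
        in_channels=3, out_channels=3, n_modes=(16, 16),
        factorization="tucker", rank=0.1, implementation="factorized",
    )
    print(conv_f(x).shape)          # torch.Size([4, 3, 64, 64])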

Parameters:
in_channels : int

Number of input channels

out_channels : int

Number of output channels

n_modes : int or int tuple

Number of modes to use for the contraction in the Fourier domain during training.

Warning

We take care of the redundancy in the Fourier modes; therefore, for an input of size I_1, …, I_N, provide mode counts M_1, …, M_N with 0 < M_K <= I_K along each dimension K. We will automatically keep the right number of modes: specifically, for the last dimension only, if you specify M_N modes we will use M_N // 2 + 1 modes, as the real FFT is redundant along that last dimension. For more information on mode truncation, refer to Implementation.

Note

Provided modes should be even integers; odd numbers will be rounded to the closest even number.

n_modes can be updated dynamically during training (see the sketch after this parameter list).

max_n_modes : int tuple or None, default is None

  • If not None, maximum number of modes to keep in the Fourier Layer, along each dim. The number of modes (n_modes) cannot be increased beyond that.

  • If None, all of n_modes are used.

separable : bool, default is False

Whether to use a separable implementation of the contraction. If True, the factors of the factorized tensor weight are contracted individually.

init_std : float or ‘auto’, default is ‘auto’

Standard deviation to use for the weight initialization

factorization : str or None, {‘tucker’, ‘cp’, ‘tt’}, default is None

If None, a single dense weight is learned for the FNO. Otherwise, the weight used for the contraction in the Fourier domain is learned in factorized form, and factorization specifies the tensor factorization applied to that weight.

rank : float, optional

Rank of the tensor factorization of the Fourier weights, by default 0.5. Ignored if factorization is None.

fixed_rank_modes : bool, optional

Modes not to factorize, by default False. Ignored if factorization is None.

fft_norm : str, optional

FFT normalization mode, by default ‘forward’

implementation : {‘factorized’, ‘reconstructed’}, optional, default is ‘reconstructed’

If factorization is not None, the forward mode to use:

  • reconstructed : the full weight tensor is reconstructed from the factorization and used for the forward pass

  • factorized : the input is directly contracted with the factors of the decomposition

Ignored if factorization is None

decomposition_kwargs : dict or None, optional, default is None

Optional additional parameters to pass to the tensor decomposition. Ignored if factorization is None

complex_data : bool, optional

Whether the data takes complex values in the spatial domain, by default False. If True, uses different logic for the FFT contraction and a full FFT instead of the real-valued FFT.
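The following sketch makes the warning and note on n_modes concrete, reusing the conv instance from the first example: the real FFT stores only about half of the spectrum along the last dimension, and n_modes can be lowered on the fly.

    import torch

    x = torch.randn(4, 3, 64, 64)

    # The real FFT keeps only 64 // 2 + 1 = 33 non-redundant bins along the
    # last dimension, which is why M_N // 2 + 1 modes are contracted there.
    print(torch.fft.rfftn(x, dim=(-2, -1)).shape)   # torch.Size([4, 3, 64, 33])

    # Lower n_modes on the fly, e.g. for curriculum-style training; it cannot
    # be raised above max_n_modes (which defaults to the n_modes given at
    # construction). The output resolution is unaffected by the mode count.
    conv.n_modes = (8, 8)
    print(conv(x).shape)                            # torch.Size([4, 3, 64, 64])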

Attributes:
n_modes

Methods

forward(x[, output_shape])

Generic forward pass for the Factorized Spectral Conv

transform(x[, output_shape])

Transforms an input x for a skip connection, by default just an identity map

References

[1] Li, Z. et al. “Fourier Neural Operator for Parametric Partial Differential Equations” (2021). ICLR 2021, https://arxiv.org/pdf/2010.08895.

[2] Kossaifi, J., Kovachki, N., Azizzadenesheli, K., Anandkumar, A. “Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs” (2024). TMLR 2024, https://openreview.net/pdf?id=AWiDlO63bH.

transform(x, output_shape=None)[source]

Transforms an input x for a skip connection, by default just an identity map

If your forward pass transforms the input, implement this transform method as well so the skip connection is transformed consistently.

Typical use cases are:

  • You upsample or downsample the input in the spectral conv: the skip connection has to be scaled the same way. This lets you handle the resampling however you want (e.g., to avoid aliasing). A sketch of such an override follows this list.

  • You perform a change of basis in your spectral conv; again, this needs to be applied to the skip connection too.
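A hypothetical sketch of the first point: a subclass that always doubles the spatial resolution, so transform must upsample the skip branch the same way. UpsamplingSpectralConv and its use of F.interpolate are illustrative assumptions for a 2D problem, not library code.

    import torch.nn.functional as F
    from neuralop.layers.spectral_convolution import SpectralConv

    class UpsamplingSpectralConv(SpectralConv):
        # Illustrative only: a spectral conv whose forward pass doubles the
        # spatial resolution via resolution_scaling_factor.
        def __init__(self, *args, **kwargs):
            kwargs["resolution_scaling_factor"] = 2
            super().__init__(*args, **kwargs)

        def transform(self, x, output_shape=None):
            # The forward pass doubles the resolution, so the skip branch must
            # be upsampled the same way for the residual sum to be well-defined.
            size = output_shape if output_shape is not None else [2 * s for s in x.shape[2:]]
            return F.interpolate(x, size=list(size), mode="bilinear", align_corners=False)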

forward(x: Tensor, output_shape: Tuple[int] | None = None)[source]

Generic forward pass for the Factorized Spectral Conv

Parameters:
x : torch.Tensor

input activation of size (batch_size, channels, d1, …, dN)

output_shape : tuple[int] or None, default is None

if given, the spatial shape (d1, …, dN) of the output, used instead of the input’s spatial shape when inverting the FFT

Returns:
tensorized_spectral_conv(x)
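A short sketch of the output_shape argument, reusing conv from the first example: the operator can be queried on a finer grid by resizing in the Fourier domain (assuming the behavior described under transform).

    x = torch.randn(4, 3, 64, 64)
    y = conv(x, output_shape=(128, 128))   # decode on a finer grid
    print(y.shape)                         # torch.Size([4, 3, 128, 128])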