neuralop.models.FNO

class neuralop.models.FNO(*args, **kwargs)

N-Dimensional Fourier Neural Operator

Parameters:
n_modes : int tuple

Number of modes to keep in the Fourier Layer, along each dimension. The dimensionality of the FNO is inferred from len(n_modes)

hidden_channels : int

Width of the FNO (i.e. number of channels)

in_channels : int, optional

Number of input channels, by default 3

out_channels : int, optional

Number of output channels, by default 1

lifting_channels : int, optional

Number of hidden channels of the lifting block of the FNO, by default 256

projection_channels : int, optional

Number of hidden channels of the projection block of the FNO, by default 256

n_layers : int, optional

Number of Fourier Layers, by default 4

max_n_modes : None or int tuple, default is None
  • If not None, this allows incrementally increasing the number of modes in the Fourier domain during training. Each entry must satisfy n <= m for (n, m) in zip(n_modes, max_n_modes).

  • If None, all n_modes are used.

This can be updated dynamically during training, as in the sketch below.
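
A minimal sketch of this pattern, assuming the n_modes attribute listed under Attributes below can be reassigned between training steps; the mode schedule and channel width are illustrative placeholders, not library defaults:

    from neuralop.models import FNO

    # Start with few active modes; max_n_modes bounds how far n_modes can later be raised.
    model = FNO(n_modes=(8, 8), max_n_modes=(32, 32), hidden_channels=64)

    for epoch in range(100):
        if epoch == 50:
            # Grow the active modes mid-training; each entry must stay
            # <= the corresponding entry of max_n_modes.
            model.n_modes = (16, 16)
        # ... regular training step for this epoch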

fno_block_precision : str {'full', 'half', 'mixed'}

If 'full', the FNO Block runs in full precision. If 'half', the FFT, contraction, and inverse FFT run in half precision. If 'mixed', the contraction and inverse FFT run in half precision.

stabilizer : str {'tanh'} or None, optional

By default None, otherwise tanh is used before the FFT in the FNO block

use_mlp : bool, optional

Whether to use an MLP layer after each FNO block, by default False

mlp_dropout : float, optional

Dropout parameter of the MLP layer, by default 0

mlp_expansion : float, optional

Expansion parameter of the MLP layer, by default 0.5

non_linearity : nn.Module, optional

Non-linearity module to use, by default F.gelu

norm : nn.Module, optional

Normalization layer to use, by default None

preactivation : bool, default is False

If True, use ResNet-style preactivation

fno_skip : {'linear', 'identity', 'soft-gating'}, optional

Type of skip connection to use in the FNO blocks, by default 'linear'

mlp_skip : {'linear', 'identity', 'soft-gating'}, optional

Type of skip connection to use in the MLP, by default 'soft-gating'

separable : bool, default is False

If True, use a depthwise separable spectral convolution

factorization : str or None, {'tucker', 'cp', 'tt'}

Tensor factorization to use for the parameter weights, by default None.
  • If None, a dense tensor parametrizes the spectral convolutions
  • Otherwise, the specified tensor factorization is used (see the sketch after the decomposition_kwargs entry below)

joint_factorization : bool, optional

Whether all the Fourier Layers should be parametrized by a single tensor (vs. one per layer), by default False

rank : float or int, optional

Rank of the tensor factorization of the Fourier weights, by default 1.0

fixed_rank_modes : bool, optional

Modes to not factorize, by default False

implementation : {'factorized', 'reconstructed'}, optional, default is 'factorized'

If factorization is not None, the forward mode to use:
  • 'reconstructed' : the full weight tensor is reconstructed from the factorization and used for the forward pass
  • 'factorized' : the input is directly contracted with the factors of the decomposition

decomposition_kwargs : dict, optional, default is {}

Optional additional parameters to pass to the tensor decomposition
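
A minimal sketch combining the factorization-related options above; the mode counts, channel width, and rank value are illustrative, not recommended settings:

    from neuralop.models import FNO

    model = FNO(
        n_modes=(16, 16),
        hidden_channels=64,
        factorization='tucker',       # 'cp' and 'tt' are the other options
        rank=0.05,                    # rank of the factorized Fourier weights
        implementation='factorized',  # contract inputs directly with the factors
    )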

domain_padding : None, float, or List[float], optional

If not None, the percentage of padding to use, by default None. To vary the percentage of padding along each input dimension, pass a list of percentages [p1, p2, …, pN] such that p1 corresponds to the percentage of padding along dim 1, etc.

domain_padding_mode : {'symmetric', 'one-sided'}, optional

How to perform domain padding, by default 'one-sided'

fft_norm : str, optional

Normalization mode for the forward and inverse FFTs, by default 'forward'
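
Putting the main constructor arguments together, a minimal sketch of building a 2D FNO; the sizes are illustrative, and the input is assumed to be a channel-first batch of shape (batch, in_channels, height, width):

    import torch
    from neuralop.models import FNO

    model = FNO(
        n_modes=(16, 16),                 # two entries -> a 2D operator
        hidden_channels=64,
        in_channels=3,
        out_channels=1,
        n_layers=4,
        domain_padding=[0.1, 0.1],        # 10% padding along each spatial dim
        domain_padding_mode='one-sided',
    )

    x = torch.randn(4, 3, 64, 64)
    y = model(x)                          # expected output: (4, 1, 64, 64)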

Attributes:
n_modes

Methods

forward(x[, output_shape])

FNO's forward pass

forward(x, output_shape=None, **kwargs)

FNO's forward pass

Parameters:
x : tensor

Input tensor

output_shape : {tuple, tuple list, None}, default is None

Gives the option of specifying the exact output shape for odd-shaped inputs.
  • If None, don't specify an output shape
  • If tuple, specifies the output shape of the last FNO Block
  • If tuple list, specifies the exact output shape of each FNO Block
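
A hedged usage sketch of the forward pass, again assuming channel-first inputs of shape (batch, in_channels, d1, ..., dN); the specific sizes are placeholders:

    import torch
    from neuralop.models import FNO

    model = FNO(n_modes=(16, 16), hidden_channels=64, in_channels=3, out_channels=1)

    x = torch.randn(2, 3, 63, 63)                # odd-shaped spatial grid
    y = model(x)                                 # output follows the input grid
    y_resized = model(x, output_shape=(64, 64))  # request a 64x64 grid from the last FNO Block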