neuralop.layers.rno_block.RNOBlock

class neuralop.layers.rno_block.RNOBlock(n_modes, hidden_channels, return_sequences=False, resolution_scaling_factor=None, max_n_modes=None, fno_block_precision='full', use_channel_mlp=True, channel_mlp_dropout=0, channel_mlp_expansion=0.5, non_linearity=<built-in function gelu>, stabilizer=None, norm=None, ada_in_features=None, preactivation=False, fno_skip='linear', channel_mlp_skip='soft-gating', complex_data=False, separable=False, factorization=None, rank=1.0, conv_module=<class 'neuralop.layers.spectral_convolution.SpectralConv'>, fixed_rank_modes=False, implementation='factorized', decomposition_kwargs={}, enforce_hermitian_symmetry=True)[source]

N-dimensional Recurrent Neural Operator layer. The RNO layer extends the action of the RNO cell to take a sequence of time steps as input and produce the next output function.

The layer applies the RNO cell recurrently over a sequence of inputs:
For t = 1 to T:

h_t = RNOCell(x_t, h_{t-1})

where the cell implements:

z_t = σ(f1(x_t) + f2(h_{t-1}) + b1)            [update gate]
r_t = σ(f3(x_t) + f4(h_{t-1}) + b2)            [reset gate]
h̃_t = selu(f5(x_t) + f6(r_t ⊙ h_{t-1}) + b3)   [candidate state]
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t          [next state]
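
A minimal, illustrative sketch of this recurrence in PyTorch (not the library's implementation; f1–f6 and b1–b3 below are generic stand-ins for the learned spectral convolutions and bias terms):

    import torch
    import torch.nn.functional as F

    def rno_cell_step(x_t, h_prev, f1, f2, f3, f4, f5, f6, b1, b2, b3):
        # One recurrent step of the RNO cell (sketch only).
        # x_t, h_prev: tensors of shape (batch, hidden_channels, *spatial_dims)
        # f1..f6: stand-ins for the learned (spectral) convolution operators
        # b1..b3: stand-ins for the learned bias terms
        z_t = torch.sigmoid(f1(x_t) + f2(h_prev) + b1)     # update gate
        r_t = torch.sigmoid(f3(x_t) + f4(h_prev) + b2)     # reset gate
        h_tilde = F.selu(f5(x_t) + f6(r_t * h_prev) + b3)  # candidate state
        return (1 - z_t) * h_prev + z_t * h_tilde          # next hidden state

    def rno_layer(x_seq, cell_step, h0=None, return_sequences=False):
        # Apply the cell step over the time axis of x_seq: (batch, T, channels, *spatial).
        h = torch.zeros_like(x_seq[:, 0]) if h0 is None else h0
        states = []
        for t in range(x_seq.shape[1]):
            h = cell_step(x_seq[:, t], h)
            states.append(h)
        return torch.stack(states, dim=1) if return_sequences else h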

Parameters:
n_modes : int tuple

Number of modes to keep in the Fourier layer along each dimension. The dimensionality of the RNO is inferred from len(n_modes).

hidden_channels : int

Number of hidden channels in the RNO.

return_sequences : bool, optional

Whether to return the sequence of hidden states associated with processing the input sequence of functions. Default: False

Methods

forward(x[, h])

Forward pass for RNO layer.

Other Parameters:
resolution_scaling_factor : Union[Number, List[Number]], optional

Factor by which to scale outputs for super-resolution, by default None

max_n_modes : int or List[int], optional

Maximum number of modes to keep along each dimension, by default None

fno_block_precision : str, optional

Floating point precision to use for computations. Options: “full”, “half”, “mixed”, by default “full”

use_channel_mlp : bool, optional

Whether to use an MLP layer after each FNO block, by default True

channel_mlp_dropout : float, optional

Dropout parameter for self.channel_mlp, by default 0

channel_mlp_expansion : float, optional

Expansion parameter for self.channel_mlp, by default 0.5

non_linearity : torch.nn.F module, optional

Nonlinear activation function to use between layers, by default F.gelu

stabilizer : Literal[“tanh”], optional

Stabilizing module to use between certain layers. Options: “tanh”, None, by default None

norm : Literal[“ada_in”, “group_norm”, “instance_norm”, “batch_norm”], optional

Normalization layer to use. Options: “ada_in”, “group_norm”, “instance_norm”, “batch_norm”, None, by default None

ada_in_features : int, optional

Number of features for adaptive instance norm above, by default None

preactivation : bool, optional

Whether to run the forward pass with pre-activation, by default False. If True, the nonlinear activation and norm are applied before the Fourier convolution; if False, they are applied after.

fno_skip : str, optional

Module to use for FNO skip connections. Options: “linear”, “soft-gating”, “identity”, None, by default “linear”. If None, no skip connection is added. See layers.skip_connections for more details.

channel_mlp_skip : str, optional

Module to use for ChannelMLP skip connections. Options: “linear”, “soft-gating”, “identity”, None, by default “soft-gating”. If None, no skip connection is added. See layers.skip_connections for more details.

complex_data : bool, optional

Whether the FNO’s data takes on complex values in space, by default False

separable : bool, optional

Separable parameter for SpectralConv, by default False

factorization : str, optional

Factorization parameter for SpectralConv. Options: “tucker”, “cp”, “tt”, None, by default None

rank : float, optional

Rank parameter for SpectralConv, by default 1.0

conv_module : BaseConv, optional

Module to use for convolutions in FNO block, by default SpectralConv

joint_factorization : bool, optional

Whether to factorize all SpectralConv weights as one tensor, by default False

fixed_rank_modes : bool, optional

Fixed_rank_modes parameter for SpectralConv, by default False

implementation : str, optional

Implementation parameter for SpectralConv. Options: “factorized”, “reconstructed”, by default “factorized”

decomposition_kwargs : dict, optional

Kwargs for tensor decomposition in SpectralConv, by default dict()

enforce_hermitian_symmetry : bool, optional

Whether to enforce Hermitian symmetry conditions when performing inverse FFT for real-valued data. Only used when conv_module is SpectralConv or a subclass; ignored otherwise. When True, explicitly enforces that the 0th frequency and Nyquist frequency are real-valued before calling irfft. When False, relies on cuFFT’s irfftn to handle symmetry automatically, which may fail on certain GPUs or input sizes, causing line artifacts. By default True.
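
As a rough illustration of what enforcing this symmetry means (not the library's code; shapes are arbitrary):

    import torch

    x = torch.randn(4, 8, 64, 64)            # real-valued data: (batch, channels, H, W)
    ft = torch.fft.rfftn(x, dim=(-2, -1))    # half-spectrum along the last dimension

    # Force the 0th-frequency (DC) and Nyquist columns of the halved dimension
    # to be purely real before the inverse transform of real-valued data.
    ft[..., 0] = ft[..., 0].real + 0j
    ft[..., -1] = ft[..., -1].real + 0j

    x_rec = torch.fft.irfftn(ft, s=x.shape[-2:], dim=(-2, -1))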

forward(x, h=None)[source]

Forward pass for RNO layer.

Parameters:
x : torch.Tensor

Input sequence with shape (batch, timesteps, hidden_channels, *spatial_dims)

h : torch.Tensor, optional

Initial hidden state with shape (batch, hidden_channels, *spatial_dims_h). If None, initialized to zeros with added bias. Default: None

Returns:
torch.Tensor
If return_sequences=True: hidden states for all timesteps, with shape (batch, timesteps, hidden_channels, *spatial_dims_h).

If return_sequences=False: final hidden state, with shape (batch, hidden_channels, *spatial_dims_h).
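
A minimal usage sketch based on the signature and shapes documented above (batch size, grid size, and mode counts are illustrative):

    import torch
    from neuralop.layers.rno_block import RNOBlock

    block = RNOBlock(n_modes=(16, 16), hidden_channels=32)   # 2D RNO, return_sequences=False

    # A sequence of 5 input functions on a 64x64 grid, already in hidden_channels.
    x = torch.randn(8, 5, 32, 64, 64)    # (batch, timesteps, hidden_channels, H, W)
    h_last = block(x)                    # (8, 32, 64, 64): final hidden state

    block_seq = RNOBlock(n_modes=(16, 16), hidden_channels=32, return_sequences=True)
    h_all = block_seq(x)                 # (8, 5, 32, 64, 64): hidden states for all timesteps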