neuralop.models.FNOGNO

class neuralop.models.FNOGNO(*args, **kwargs)[source]

FNOGNO: Fourier/Geometry Neural Operator - maps from a regular N-d grid to an arbitrary query point cloud.

Parameters:
in_channels : int

Number of input channels. Determined by the problem.

out_channels : int

Number of output channels. Determined by the problem.

fno_n_modes : tuple, optional

Number of modes to keep along each spectral dimension of the FNO block. Must be large enough to resolve the relevant frequencies, but no larger than max_resolution//2 (the Nyquist frequency). Default: (16, 16, 16)

fno_hidden_channels : int, optional

Number of hidden channels of FNO block. Default: 64

fno_n_layers : int, optional

Number of FNO layers in the block. Default: 4

projection_channel_ratio : int, optional

Ratio of pointwise projection channels in the final ChannelMLP to fno_hidden_channels. The number of projection channels in the final ChannelMLP is projection_channel_ratio * fno_hidden_channels (256 with the defaults). Default: 4
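As a quick sanity check of this ratio arithmetic (together with fno_lifting_channel_ratio below), here is a small sketch; the helper name is hypothetical, not part of the library:

```python
def derived_widths(fno_hidden_channels=64,
                   fno_lifting_channel_ratio=4,
                   projection_channel_ratio=4):
    """Compute the lifting/projection ChannelMLP widths implied by the
    ratio-style hyperparameters (defaults match the FNOGNO docstring)."""
    return {
        "lifting_channels": fno_lifting_channel_ratio * fno_hidden_channels,
        "projection_channels": projection_channel_ratio * fno_hidden_channels,
    }

widths = derived_widths()  # defaults: 4 * 64 = 256 for both MLPs
wide = derived_widths(fno_hidden_channels=128, projection_channel_ratio=2)
# wide: lifting 4 * 128 = 512, projection 2 * 128 = 256
```

Widening fno_hidden_channels therefore widens both MLPs unless the ratios are reduced to compensate.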

gno_coord_dim : int, optional

Dimension of coordinate space where GNO is computed. Determined by the problem. Default: 3

gno_pos_embed_type : Literal[“transformer”, “nerf”], optional

Type of optional sinusoidal positional embedding to use in GNOBlock. Default: “transformer”

gno_radius : float, optional

Radius used to construct the neighborhood graph. A larger radius means more neighbors, and hence more global interactions, at a higher computational cost. Default: 0.033
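To see the neighbor-count trade-off concretely, here is a brute-force pure-Python radius search; it is an illustrative stand-in for the Open3D-backed neighbor search the GNO actually uses:

```python
import math

def radius_neighbors(points, queries, radius):
    """Brute-force radius search: for each query point, return the
    indices of all points within `radius` (Euclidean distance)."""
    out = []
    for q in queries:
        nbrs = [
            i for i, p in enumerate(points)
            if math.dist(p, q) <= radius
        ]
        out.append(nbrs)
    return out

# Points on a 1-d line with spacing 0.02; one query at the origin.
grid = [(0.02 * i, 0.0, 0.0) for i in range(10)]
small = radius_neighbors(grid, [(0.0, 0.0, 0.0)], radius=0.033)
large = radius_neighbors(grid, [(0.0, 0.0, 0.0)], radius=0.09)
# The default radius 0.033 captures 2 grid points here, while 0.09
# captures 5, so the integral transform aggregates over more neighbors.
```

The kernel integral for each query point is then a reduction over these neighbor lists, which is why cost grows with the radius.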

gno_transform_type : str, optional

Type of kernel integral transform to apply in the GNO. The kernel k(x, y) is parameterized as a ChannelMLP and integrated over a neighborhood of x.

Options:

- “linear_kernelonly”: integrand is k(x, y)
- “linear”: integrand is k(x, y) * f(y)
- “nonlinear_kernelonly”: integrand is k(x, y, f(y))
- “nonlinear”: integrand is k(x, y, f(y)) * f(y)

Default: “linear”
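The four integrands can be sketched as a toy discrete sum, with a plain function standing in for the learned ChannelMLP kernel and scalars in place of channel vectors (all names here are illustrative, not the library's API):

```python
def integral_transform(x, neighbors, f, kernel, transform_type):
    """Toy discretization of the GNO integral: average the chosen
    integrand over the neighborhood of x."""
    terms = []
    for y in neighbors:
        if transform_type == "linear_kernelonly":
            terms.append(kernel(x, y))
        elif transform_type == "linear":
            terms.append(kernel(x, y) * f(y))
        elif transform_type == "nonlinear_kernelonly":
            terms.append(kernel(x, y, f(y)))
        elif transform_type == "nonlinear":
            terms.append(kernel(x, y, f(y)) * f(y))
        else:
            raise ValueError(transform_type)
    return sum(terms) / len(terms)

# A toy kernel and input function on scalar coordinates; the "nonlinear"
# variants also feed f(y) into the kernel itself.
k = lambda x, y, fy=0.0: (x - y) + fy
f = lambda y: 2.0 * y
nbrs = [0.0, 1.0, 2.0]
lin = integral_transform(1.0, nbrs, f, k, "linear")       # -> -4/3
nonlin = integral_transform(1.0, nbrs, f, k, "nonlinear")  # -> 16/3
```

The “kernelonly” variants drop the trailing f(y) factor, which is useful when the input function is only available at the query points.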

gno_weighting_function : Literal[“half_cos”, “bump”, “quartic”, “quadr”, “octic”], optional

Choice of weighting function to use in the output GNO for Mollified Graph Neural Operator-based models. See neuralop.layers.gno_weighting_functions for more details. Default: None

gno_weight_function_scale : float, optional

Factor by which to scale weights from GNO weighting function. If gno_weighting_function is None, this is not used. Default: 1

gno_embed_channels : int, optional

Dimension of optional per-channel embedding to use in GNOBlock. Default: 32

gno_embed_max_positions : int, optional

Max positions of optional per-channel embedding to use in GNOBlock. If gno_pos_embed_type != ‘transformer’, value is unused. Default: 10000

gno_channel_mlp_hidden_layers : list, optional

Dimension of hidden ChannelMLP layers of GNO. Default: [512, 256]

gno_channel_mlp_non_linearity : nn.Module, optional

Nonlinear activation function between layers. Default: F.gelu

gno_use_open3d : bool, optional

Whether to use Open3D functionality. If False, uses simple fallback neighbor search. Default: True

gno_use_torch_scatter : bool, optional

Whether to use torch-scatter to perform grouped reductions in the IntegralTransform. If False, uses native Python reduction in neuralop.layers.segment_csr.

Warning

torch-scatter is an optional dependency that conflicts with the newest versions of PyTorch, so you must handle the conflict explicitly in your environment. See Sparse computations with PyTorch-Scatter for more information.

Default: True
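The native fallback can be pictured as a plain segment reduction over CSR-style neighbor offsets; this sketch mimics the semantics in the spirit of neuralop.layers.segment_csr, but is not the library's implementation:

```python
def segment_sum_csr(values, indptr):
    """Sum `values` within segments delimited by CSR offsets `indptr`,
    e.g. reducing the messages each query point receives from its
    neighbors. torch-scatter performs the same grouped reduction,
    but vectorized on the GPU."""
    return [
        sum(values[start:end])
        for start, end in zip(indptr[:-1], indptr[1:])
    ]

# Messages from 5 neighbor edges, grouped into 3 query points:
# point 0 owns edges [0, 2), point 1 owns [2, 5), point 2 owns [5, 5).
msgs = [1.0, 2.0, 3.0, 4.0, 5.0]
out = segment_sum_csr(msgs, indptr=[0, 2, 5, 5])
# -> [3.0, 12.0, 0]; an empty segment (an isolated point) sums to 0
```

Either backend produces the same reduction; the choice only affects speed and the dependency footprint.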

gno_batched : bool, optional

Whether to use the IntegralTransform/GNO layer in “batched” mode, i.e. with a leading batch dimension on its inputs. Default: False

fno_lifting_channel_ratio : int, optional

Ratio of lifting channels to FNO hidden channels. Default: 4

fno_resolution_scaling_factor : float, optional

Factor by which to rescale output predictions in the original domain. Default: None

fno_block_precision : str, optional

Data precision to compute within FNO block. Options: “full”, “half”, “mixed”. Default: “full”

fno_use_channel_mlp : bool, optional

Whether to use a ChannelMLP layer after each FNO block. Default: True

fno_channel_mlp_dropout : float, optional

Dropout parameter of above ChannelMLP. Default: 0

fno_channel_mlp_expansion : float, optional

Expansion parameter of above ChannelMLP. Default: 0.5

fno_non_linearity : nn.Module, optional

Nonlinear activation function between each FNO layer. Default: F.gelu

fno_stabilizer : nn.Module, optional

If not None, a tanh stabilizer is applied before the FFT in the FNO block. Default: None

fno_norm : str, optional

Normalization layer to use in FNO. Options: “ada_in”, “group_norm”, “instance_norm”, None. Default: None

fno_ada_in_features : int, optional

If an adaptive mesh is used, number of channels of its positional embedding. Default: None

fno_ada_in_dim : int, optional

Dimensions of above FNO adaptive mesh. Default: 1

fno_preactivation : bool, optional

Whether to use ResNet-style preactivation. Default: False

fno_skip : str, optional

Type of skip connection to use. Options: “linear”, “identity”, “soft-gating”, None. Default: “linear”

fno_channel_mlp_skip : str, optional

Type of skip connection to use for the ChannelMLP in each FNO block.

Options:

- “linear”: Conv layer
- “soft-gating”: weights the channels of the input
- “identity”: nn.Identity
- None: no skip connection

Default: “soft-gating”

fno_separable : bool, optional

Whether to use a depthwise separable spectral convolution. Default: False

fno_factorization : str, optional

Tensor factorization to apply to the FNO parameter weights. Options: “tucker”, “tt”, “cp”, None. Default: None

fno_rank : float, optional

Rank of the tensor factorization of the Fourier weights. Default: 1.0

fno_fixed_rank_modes : bool, optional

Whether to leave certain modes unfactorized. Default: False

fno_implementation : str, optional

If factorization is not None, forward mode to use.

Options:

- “reconstructed”: the full weight tensor is reconstructed from the factorization and used for the forward pass
- “factorized”: the input is directly contracted with the factors of the decomposition

Default: “factorized”

fno_decomposition_kwargs : dict, optional

Additional parameters to pass to the tensor decomposition. Default: {}

fno_conv_module : nn.Module, optional

Spectral convolution module to use. Default: SpectralConv

Methods

forward(in_p, out_p, f[, ada_in])

Define the computation performed at every call.

integrate_latent(in_p, out_p, latent_embed)

Compute integration region for each output point

latent_embedding(in_p, f[, ada_in])

integrate_latent(in_p, out_p, latent_embed)[source]

Compute integration region for each output point

forward(in_p, out_p, f, ada_in=None, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.