neuralop.models.GINO

class neuralop.models.GINO(*args, **kwargs)[source]

GINO: Geometry-informed Neural Operator. Learns a mapping between functions defined on arbitrary coordinate meshes. The model carries out global integration through spectral convolution layers in an intermediate latent space, as described in [1]. Optionally enables a weighted output GNO for use in a Mollified Graph Neural Operator scheme, as introduced in [2].
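
A minimal construction sketch follows; the channel counts and mode settings are illustrative assumptions rather than recommended values, and the full set of options is documented in the parameter list below.

    from neuralop.models import GINO

    # Hedged sketch: map a scalar input field on a 3D point cloud to a scalar
    # output field, with a 16-mode-per-dimension FNO in the latent space.
    # All values below are illustrative assumptions.
    model = GINO(
        in_channels=1,              # feature dimension of the input function
        out_channels=1,             # feature dimension of the predicted function
        gno_coord_dim=3,            # input/output queries live in 3D
        fno_n_modes=(16, 16, 16),   # Fourier modes kept per latent-grid dimension
        fno_hidden_channels=64,     # width of the latent FNO
    )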

Parameters:
in_channelsint

Feature dimension of input points. Determined by the problem.

out_channelsint

Feature dimension of output points. Determined by the problem.

fno_n_modestuple, optional

Number of modes along each dimension to use in the FNO. Default: (16, 16, 16). Must be large enough for the problem, but smaller than max_resolution // 2 (the Nyquist frequency) on the latent grid; the sketch after this parameter list illustrates this constraint.

fno_hidden_channelsint, optional

Hidden channels for use in FNO. Default: 64

fno_n_layersint, optional

Number of layers in FNO. Default: 4

latent_feature_channelsint, optional

Number of channels in optional latent feature map to concatenate onto latent embeddings before the FNO’s forward pass. Default: None

projection_channel_ratioint, optional

Ratio of pointwise projection channels in the final ChannelMLP to fno_hidden_channels. The number of projection channels in the final ChannelMLP is computed as projection_channel_ratio * fno_hidden_channels (i.e. 256 by default). Default: 4

gno_coord_dimint, optional

Geometric dimension of input/output queries. Determined by the problem. Default: 3

in_gno_radiusfloat, optional

Radius in input space for the GNO neighbor search. Default: 0.033. A larger radius means more neighbors, and therefore more global interactions, at a higher computational cost.

out_gno_radiusfloat, optional

Radius in output space for the GNO neighbor search. Default: 0.033. A larger radius means more neighbors, and therefore more global interactions, at a higher computational cost.

gno_weighting_functionLiteral[“half_cos”, “bump”, “quartic”, “quadr”, “octic”], optional

Choice of weighting function to use in the output GNO for Mollified Graph Neural Operator-based models. See neuralop.layers.gno_weighting_functions for more details. Default: None

gno_weight_function_scalefloat, optional

Factor by which to scale weights from GNO weighting function. If gno_weighting_function is None, this is not used. Default: 1

in_gno_transform_typestr, optional

Transform type parameter for input GNO. Default: “linear”. See neuralop.layers.gno_block for more details.

out_gno_transform_typestr, optional

Transform type parameter for output GNO. Options: “linear”, “nonlinear”, “nonlinear_kernelonly”. Default: “linear”. See neuralop.layers.gno_block for more details.

gno_pos_embed_typestr, optional

Type of optional sinusoidal positional embedding to use in the input and output GNOBlocks. Options: “transformer”, “nerf”. Default: “transformer”

fno_in_channelsint, optional

Number of input channels for FNO. Default: 3

fno_lifting_channel_ratioint, optional

Ratio of lifting channels to fno_hidden_channels. The number of lifting channels in the lifting block of the FNO is fno_lifting_channel_ratio * fno_hidden_channels (i.e. 128 by default). Default: 2

gno_embed_channelsint, optional

Dimension of optional per-channel embedding to use in GNOBlock. Default: 32

gno_embed_max_positionsint, optional

Max positions of optional per-channel embedding to use in GNOBlock. If gno_pos_embed_type != ‘transformer’, this is not used. Default: 10000

in_gno_channel_mlp_hidden_layerslist, optional

Widths of hidden layers in input GNO. Default: [80, 80, 80]

out_gno_channel_mlp_hidden_layerslist, optional

Widths of hidden layers in output GNO. Default: [512, 256]

gno_channel_mlp_non_linearitynn.Module, optional

Nonlinearity to use in GNO ChannelMLP. Default: F.gelu

gno_use_open3dbool, optional

Whether to use Open3D neighbor search. If False, uses pure-PyTorch fallback neighbor search. Default: True

gno_use_torch_scatterbool, optional

Whether to use torch-scatter to perform grouped reductions in the IntegralTransform. If False, uses native Python reduction in neuralop.layers.segment_csr. Default: True

Warning

torch-scatter is an optional dependency that conflicts with the newest versions of PyTorch, so you must handle the conflict explicitly in your environment. See Sparse computations with PyTorch-Scatter for more information.

out_gno_tanhbool, optional

Whether to use tanh to stabilize outputs of the output GNO. Default: False

fno_resolution_scaling_factorfloat, optional

Factor by which to scale output of FNO. Default: None

fno_block_precisionstr, optional

Data precision to compute within FNO block. Options: “full”, “half”, “mixed”. Default: “full”

fno_use_channel_mlpbool, optional

Whether to use a ChannelMLP layer after each FNO block. Default: True

fno_channel_mlp_dropoutfloat, optional

Dropout parameter of above ChannelMLP. Default: 0

fno_channel_mlp_expansionfloat, optional

Expansion parameter of above ChannelMLP. Default: 0.5

fno_non_linearitynn.Module, optional

Nonlinear activation function between each FNO layer. Default: F.gelu

fno_stabilizernn.Module, optional

By default None, otherwise tanh is used before FFT in the FNO block. Default: None

fno_normstr, optional

Normalization layer to use in FNO. Options: “ada_in”, “group_norm”, “instance_norm”, None. Default: None

fno_ada_in_featuresint, optional

If an adaptive mesh is used, number of channels of its positional embedding. If None, adaptive mesh embedding is not used. Default: 4

fno_ada_in_dimint, optional

Dimensions of above FNO adaptive mesh. Default: 1

fno_preactivationbool, optional

Whether to use ResNet-style preactivation. Default: False

fno_skipstr, optional

Type of skip connection to use. Options: “linear”, “identity”, “soft-gating”, None. Default: “linear”

fno_channel_mlp_skipstr, optional

Type of skip connection to use in the FNO. Options: “linear”, “identity”, “soft-gating”, None. Default: “soft-gating”

fno_separablebool, optional

Whether to use a separable spectral convolution. Default: False

fno_factorizationstr, optional

Tensor factorization of the parameters weight to use. Options: “tucker”, “tt”, “cp”, None. Default: None

fno_rankfloat, optional

Rank of the tensor factorization of the Fourier weights. Default: 1.0. Set to a float < 1.0 when using a TFNO (i.e. when fno_factorization is not None); a TFNO with rank 0.1 has roughly 10% of the parameters of a dense FNO (illustrated in the sketch following this parameter list).

fno_fixed_rank_modesbool, optional

Whether to exclude certain modes from factorization. Default: False

fno_implementationstr, optional

If factorization is not None, forward mode to use. Options: “reconstructed”, “factorized”. Default: “factorized”

fno_decomposition_kwargsdict, optional

Additional parameters to pass to the tensor decomposition. Default: {}

fno_conv_modulenn.Module, optional

Spectral convolution module to use. Default: SpectralConv
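
The sketch below is a hedged illustration (the resolution, mode counts, and factorization settings are assumptions, not recommendations) of how the latent query grid relates to fno_n_modes via the Nyquist constraint, and of how the TFNO options fno_factorization and fno_rank are passed.

    import torch
    from neuralop.models import GINO

    # Assumed latent resolution; the number of FNO modes per dimension must stay
    # below latent_resolution // 2 (the Nyquist frequency on the latent grid).
    latent_resolution = 64
    fno_n_modes = (16, 16, 16)
    assert all(m < latent_resolution // 2 for m in fno_n_modes)

    # Regular grid of latent query points on [0, 1]^3,
    # shape (1, latent_resolution, latent_resolution, latent_resolution, 3).
    axes = [torch.linspace(0, 1, latent_resolution) for _ in range(3)]
    latent_queries = torch.stack(
        torch.meshgrid(*axes, indexing="ij"), dim=-1
    ).unsqueeze(0)

    # Tucker-factorized spectral weights at rank 0.1: roughly 10% of the
    # parameters of the equivalent dense FNO (illustrative settings).
    model = GINO(
        in_channels=1,
        out_channels=1,
        fno_n_modes=fno_n_modes,
        fno_factorization="tucker",
        fno_rank=0.1,
    )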

Methods

forward(input_geom, latent_queries, ...[, ...])

Define the computation performed at every call.

latent_embedding(in_p[, ada_in])

References

[1]

Li, Z., Kovachki, N., Choy, C., Li, B., Kossaifi, J., Otta, S., Nabian, M., Stadler, M., Hundt, C., Azizzadenesheli, K., Anandkumar, A. (2023) Geometry-Informed Neural Operator for Large-Scale 3D PDEs. NeurIPS 2023, https://proceedings.neurips.cc/paper_files/paper/2023/hash/70518ea42831f02afc3a2828993935ad-Abstract-Conference.html

[2]

Lin, R. et al. Placeholder reference for Mollified Graph Neural Operators.

forward(input_geom, latent_queries, output_queries, x=None, latent_features=None, ada_in=None, **kwargs)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
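
A hedged end-to-end sketch of a forward call follows. The point counts, batch-of-one tensor layout, radii, and channel sizes are illustrative assumptions based on the shape conventions above; the pure-PyTorch fallbacks are selected to avoid the optional Open3D and torch-scatter dependencies.

    import torch
    from neuralop.models import GINO

    model = GINO(
        in_channels=1,
        out_channels=1,
        gno_coord_dim=3,
        fno_n_modes=(8, 8, 8),          # kept below latent resolution // 2
        in_gno_radius=0.1,              # enlarged radii for this sparse random cloud
        out_gno_radius=0.1,
        gno_use_open3d=False,           # pure-PyTorch neighbor search fallback
        gno_use_torch_scatter=False,    # native reduction fallback
    )

    n_in, n_out, res = 2000, 500, 32    # assumed point counts and latent resolution
    input_geom = torch.rand(1, n_in, 3)        # input point-cloud coordinates in [0, 1]^3
    x = torch.rand(1, n_in, 1)                 # input function sampled at those points
    output_queries = torch.rand(1, n_out, 3)   # points at which to predict the output

    axes = [torch.linspace(0, 1, res) for _ in range(3)]
    latent_queries = torch.stack(
        torch.meshgrid(*axes, indexing="ij"), dim=-1
    ).unsqueeze(0)                             # latent grid, shape (1, res, res, res, 3)

    out = model(
        input_geom=input_geom,
        latent_queries=latent_queries,
        output_queries=output_queries,
        x=x,
    )                                          # expected shape (assumed): (1, n_out, 1)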