neuralop.models.GINO
- class neuralop.models.GINO(*args, **kwargs)
GINO: Geometry-informed Neural Operator. Learns a mapping between functions defined on arbitrary coordinate meshes. The model performs global integration through spectral convolution layers in an intermediate latent space, as described in [1]_. Optionally enables a weighted output GNO for use in a Mollified Graph Neural Operator scheme, as introduced in [2]_. A minimal instantiation sketch follows the parameter list below.
- Parameters:
- in_channels : int
feature dimension of input points
- out_channels : int
feature dimension of output points
- latent_feature_channels : int, optional
number of channels in optional latent feature map to concatenate onto latent embeddings before the FNO’s forward pass, default None
- projection_channel_ratio : int, optional
ratio of pointwise projection channels in the final ChannelMLP to fno_hidden_channels, by default 4. The number of projection channels in the final ChannelMLP is computed as projection_channel_ratio * fno_hidden_channels (i.e. 256 by default).
- gno_coord_dim : int, optional
geometric dimension of input/output queries, by default 3
- in_gno_radius : float, optional
radius in input space for GNO neighbor search, by default 0.033
- out_gno_radius : float, optional
radius in output space for GNO neighbor search, by default 0.033
- gno_weighting_function : Literal{'half_cos', 'bump', 'quartic', 'quadr', 'octic'}, optional
Choice of weighting function to use in the output GNO for Mollified Graph Neural Operator-based models. See neuralop.layers.gno_weighting_functions for more details.
- gno_weight_function_scale : float, optional
Factor by which to scale weights from the GNO weighting function, by default 1. If gno_weighting_function is None, this is not used.
- in_gno_transform_type : str, optional
transform type parameter for input GNO, by default 'linear'. See neuralop.layers.gno_block for more details.
- out_gno_transform_type : str, optional
transform type parameter for output GNO, by default 'linear'. See neuralop.layers.gno_block for more details.
- in_gno_pos_embed_type : literal {'transformer', 'nerf'} | None
type of optional sinusoidal positional embedding to use in input GNOBlock, by default 'transformer'
- out_gno_pos_embed_type : literal {'transformer', 'nerf'} | None
type of optional sinusoidal positional embedding to use in output GNOBlock, by default 'transformer'
- fno_in_channels : int, optional
number of input channels for FNO, by default 3
- fno_n_modes : tuple, optional
number of modes along each dimension to use in FNO, by default (16, 16, 16)
- fno_hidden_channels : int, optional
hidden channels for use in FNO, by default 64
- fno_lifting_channel_ratio : int, optional
ratio of lifting channels to fno_hidden_channels, by default 2. The number of lifting channels in the lifting block of the FNO is fno_lifting_channel_ratio * fno_hidden_channels (i.e. 128 by default).
- fno_n_layers : int, optional
number of layers in FNO, by default 4
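A minimal instantiation sketch, assuming a 3D problem with three input feature channels and one output channel; the keyword values below are illustrative picks of the parameters documented above, not prescribed settings:

```python
from neuralop.models import GINO

# Illustrative configuration: 3D coordinates, one output channel,
# and a small FNO backbone in the latent space.
model = GINO(
    in_channels=3,                # feature dimension of input points
    out_channels=1,               # feature dimension of output points
    gno_coord_dim=3,              # geometric dimension of input/output queries
    in_gno_radius=0.033,          # GNO neighbor-search radius in input space
    out_gno_radius=0.033,         # GNO neighbor-search radius in output space
    fno_n_modes=(16, 16, 16),     # Fourier modes per latent dimension
    fno_hidden_channels=64,       # hidden width of the latent FNO
    projection_channel_ratio=4,   # final ChannelMLP width = 4 * fno_hidden_channels
)
```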
Methods
forward(input_geom, latent_queries, ...[, ...])
The GINO's forward call: Input GNO --> FNOBlocks --> output GNO + projection to output queries.
latent_embedding(in_p[, ada_in])
- Other Parameters:
- gno_embed_channels : int
dimension of optional per-channel embedding to use in GNOBlock, by default 32
- gno_embed_max_positions : int
max positions of optional per-channel embedding to use in GNOBlock, by default 10000. If gno_pos_embed_type != 'transformer', value is unused.
- in_gno_channel_mlp_hidden_layers : list, optional
widths of hidden layers in input GNO, by default [80, 80, 80]
- out_gno_channel_mlp_hidden_layers : list, optional
widths of hidden layers in output GNO, by default [512, 256]
- gno_channel_mlp_non_linearity : nn.Module, optional
nonlinearity to use in gno ChannelMLP, by default F.gelu
- gno_use_open3d : bool, optional
whether to use open3d neighbor search, by default True. If False, uses a pure-PyTorch fallback neighbor search.
- gno_use_torch_scatter : bool, optional
whether to use torch-scatter to perform grouped reductions in the IntegralTransform. If False, uses native Python reduction in neuralop.layers.segment_csr, by default True.
Warning: torch-scatter is an optional dependency that conflicts with the newest versions of PyTorch, so you must handle the conflict explicitly in your environment. See Sparse computations with PyTorch-Scatter for more information.
- out_gno_tanh : bool, optional
whether to use tanh to stabilize outputs of the output GNO, by default False
- fno_resolution_scaling_factor : float | None, optional
factor by which to scale output of FNO, by default None
- fno_incremental_n_modes : list[int] | None, defaults to None
if passed, sets n_modes separately for each FNO layer.
- fno_block_precision : str, defaults to 'full'
data precision to compute within fno block
- fno_use_channel_mlp : bool, defaults to True
Whether to use a ChannelMLP layer after each FNO block.
- fno_channel_mlp_dropout : float, defaults to 0
dropout parameter of above ChannelMLP.
- fno_channel_mlp_expansion : float, defaults to 0.5
expansion parameter of above ChannelMLP.
- fno_non_linearity : nn.Module, defaults to F.gelu
nonlinear activation function between each FNO layer.
- fno_stabilizer : nn.Module | None, defaults to None
By default None, otherwise tanh is used before FFT in the FNO block.
- fno_norm : nn.Module | None, defaults to None
normalization layer to use in FNO.
- fno_ada_in_features : int | None, defaults to 4
if an adaptive mesh is used, number of channels of its positional embedding. If None, adaptive mesh embedding is not used.
- fno_ada_in_dim : int, defaults to 1
dimensions of above FNO adaptive mesh.
- fno_preactivation : bool, defaults to False
whether to use ResNet-style preactivation.
- fno_skip : str, defaults to 'linear'
type of skip connection to use.
- fno_channel_mlp_skip : str, defaults to 'soft-gating'
type of skip connection to use in the FNO: 'linear' uses a conv layer, 'soft-gating' weights the channels of the input, 'identity' uses nn.Identity.
- fno_separable : bool, defaults to False
if True, use a depthwise separable spectral convolution.
- fno_factorization : str {'tucker', 'tt', 'cp'} | None, defaults to None
Tensor factorization to apply to the FNO parameter weights.
- fno_rank : float, defaults to 1.0
Rank of the tensor factorization of the Fourier weights.
- fno_joint_factorization : bool, defaults to False
Whether all the Fourier layers should be parameterized by a single tensor (vs one per layer).
- fno_fixed_rank_modes : bool, defaults to False
Modes to not factorize.
- fno_implementation : str {'factorized', 'reconstructed'} | None, defaults to 'factorized'
If factorization is not None, forward mode to use: 'reconstructed' reconstructs the full weight tensor from the factorization and uses it for the forward pass; 'factorized' contracts the input directly with the factors of the decomposition.
- fno_decomposition_kwargs : dict, defaults to dict()
Optional additional parameters to pass to the tensor decomposition.
- fno_conv_module : nn.Module, defaults to SpectralConv
Spectral convolution module to use.
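As a sketch of how these keyword-only options compose with the main parameters, the snippet below enables a Tucker-factorized spectral convolution and the pure-PyTorch reduction path; the values are illustrative, not recommended defaults:

```python
from neuralop.models import GINO

model = GINO(
    in_channels=3,
    out_channels=1,
    gno_coord_dim=3,
    fno_n_modes=(16, 16, 16),
    fno_hidden_channels=64,
    gno_use_torch_scatter=False,  # use the native segment_csr reduction instead of torch-scatter
    fno_factorization="tucker",   # factorize the Fourier weights
    fno_rank=0.5,                 # rank of the tensor factorization
)
```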
References
- forward(input_geom, latent_queries, output_queries, x=None, latent_features=None, ada_in=None, **kwargs)
The GINO’s forward call: Input GNO –> FNOBlocks –> output GNO + projection to output queries.
Note
GINO currently supports batching only in cases where the geometry of inputs and outputs is shared across the entire batch. Inputs can have a batch dim in x and latent_features, but it must be shared for both.
- Parameters:
- input_geom : torch.Tensor
input domain coordinate mesh, shape (1, n_in, gno_coord_dim)
- latent_queries : torch.Tensor
latent geometry on which to compute FNO latent embeddings: a grid on [0,1] x [0,1] x ..., shape (1, n_gridpts_1, ..., n_gridpts_n, gno_coord_dim)
- output_queries : torch.Tensor | dict[torch.Tensor]
points at which to query the final GNO layer to get output, shape (1, n_out, gno_coord_dim) per tensor. If a tensor, the model will output a tensor. If a dict of tensors, the model will return a dict of outputs, so that output[key] corresponds to the model queried at output_queries[key].
- x : torch.Tensor, optional
input function defined on the input domain input_geom, shape (batch, n_in, in_channels). Default None
- latent_features : torch.Tensor, optional
optional feature map to concatenate onto the latent embedding before it is passed into the latent FNO, default None. If latent_feature_channels is set, this must be passed.
- ada_in : torch.Tensor, optional
adaptive scalar instance parameter, defaults to None
- Returns:
- out : torch.Tensor | dict[torch.Tensor]
Function over the output query coordinates: a tensor if output_queries is a tensor, a dict if output_queries is a dict.
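A sketch of a full forward call under the shape conventions documented above; the model configuration, mesh sizes, and latent grid resolution are arbitrary illustrative choices:

```python
import torch
from neuralop.models import GINO

model = GINO(in_channels=3, out_channels=1, gno_coord_dim=3,
             fno_n_modes=(16, 16, 16), fno_hidden_channels=64)

n_in, n_out, batch = 2000, 500, 4
input_geom = torch.rand(1, n_in, 3)        # input coordinate mesh, shared across the batch
output_queries = torch.rand(1, n_out, 3)   # points at which to query the output GNO

# Regular latent grid on [0, 1]^3, shape (1, 32, 32, 32, 3)
axes = [torch.linspace(0, 1, 32)] * 3
latent_queries = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).unsqueeze(0)

x = torch.rand(batch, n_in, 3)             # input function values, in_channels = 3

out = model(input_geom=input_geom,
            latent_queries=latent_queries,
            output_queries=output_queries,
            x=x)
# out is a tensor of function values at output_queries;
# a dict of tensors would be returned if output_queries were a dict.
```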