neuralop.layers.gno_block.GNOBlock

class neuralop.layers.gno_block.GNOBlock(in_channels: int, out_channels: int, coord_dim: int, radius: float, transform_type='linear', pos_embedding_type: str = 'transformer', pos_embedding_channels: int = 32, pos_embedding_max_positions: int = 10000, channel_mlp_layers: List[int] = [128, 256, 128], channel_mlp_non_linearity=F.gelu, channel_mlp: torch.nn.Module | None = None, use_open3d_neighbor_search: bool = True, use_torch_scatter_reduce: bool = True)[source]

GNOBlock implements a Graph Neural Operator layer as described in [1].

A GNO layer is a resolution-invariant operator that maps a function defined on one coordinate mesh to a function defined on another. It does so with a pointwise kernel integral that aggregates contributions from each query point's 1-hop neighbors in a graph constructed by neighbor search within a specified radius.

Optionally, a positional embedding can be applied to the input and output query coordinates; this is controlled by the pos_embedding_type, pos_embedding_channels, and pos_embedding_max_positions arguments.

The kernel integral, computed in IntegralTransform, takes one of the following forms (an illustrative sketch of form (b) follows the list):

  (a) int_{A(x)} k(x, y) dy

  (b) int_{A(x)} k(x, y) * f(y) dy

  (c) int_{A(x)} k(x, y, f(y)) dy

  (d) int_{A(x)} k(x, y, f(y)) * f(y) dy
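To make the neighborhood integral concrete, here is a minimal, purely illustrative PyTorch sketch of form (b), computed naively with an explicit loop. The callable k stands in for the learned kernel network (a hypothetical function taking concatenated coordinates, not part of the library's API), and the mean over neighbors stands in for the reduction that GNOBlock performs internally.

```
import torch

def naive_form_b(k, x, y, f_y, radius):
    """Illustrative O(m*n) version of form (b): for each query x_i, average
    k(x_i, y_j) * f(y_j) over all y_j with ||y_j - x_i|| <= radius."""
    out = []
    for xi in x:                                        # m query points
        dists = torch.linalg.vector_norm(y - xi, dim=-1)
        nbrs = dists <= radius                          # A(x_i): the radius neighborhood of x_i
        yj = y[nbrs]                                    # neighbor coordinates, shape [p, d1]
        xij = xi.unsqueeze(0).expand_as(yj)             # repeat the query coordinate, shape [p, d1]
        kij = k(torch.cat([xij, yj], dim=-1))           # diagonal kernel values, shape [p, d2]
        out.append((kij * f_y[nbrs]).mean(dim=0))       # elementwise product, then average over A(x_i)
    return torch.stack(out)                             # shape [m, d2]
```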

Parameters:
in_channels : int

Number of channels in the input function. Only used if transform_type is (c) 'nonlinear_kernelonly' or (d) 'nonlinear'.

out_channels : int

Number of channels in the output function.

coord_dim : int

Dimension of the domain on which x and y are defined.

radius : float

Radius within which to search for neighbors.

Methods

forward(y, x[, f_y, reduction])

Compute a GNO neighbor search and kernel integral transform.

Other Parameters:
transform_type : str, optional

Which integral transform to compute. The mapping is: 'linear_kernelonly' -> (a), 'linear' -> (b) [default], 'nonlinear_kernelonly' -> (c), 'nonlinear' -> (d). If the input f is not given, then (a) is computed regardless of this parameter. A construction sketch using some of these options follows this parameter list.

pos_embedding_type : {'transformer', 'nerf'} | None, optional

Type of positional embedding to use during the kernel integral transform. See neuralop.layers.embeddings.SinusoidalEmbedding for more details. By default 'transformer'.

pos_embedding_channels : int, optional

Per-channel dimension of the optional positional embedding, by default 32.

pos_embedding_max_positions : int, optional

max_positions parameter for a SinusoidalEmbedding of type 'transformer'. Not used if pos_embedding_type != 'transformer'. By default 10000.

channel_mlp_layers : List[int], optional

List of layer widths used to dynamically construct a LinearChannelMLP network that parameterizes the kernel k, by default [128, 256, 128].

channel_mlp_non_linearity : torch.nn functional, optional

Activation function for the LinearChannelMLP above, by default F.gelu.

channel_mlp : nn.Module, optional

ChannelMLP parameterizing the kernel k. Its input dimension should be dim x + dim y, or dim x + dim y + dim f. The ChannelMLP should not be pointwise and should only operate across channels, to preserve the discretization-invariance of the kernel integral. If you have more specific needs than the LinearChannelMLP, this argument lets you pass your own Module to parameterize the kernel k. Default None.

use_open3d_neighbor_search : bool, optional

Whether to use the open3d neighbor search or the native-PyTorch one, by default True (use open3d).

use_torch_scatter_reduce : bool, optional

Whether to perform the reduction in the integral computation with the function provided by the optional torch_scatter dependency, or with the slower native-PyTorch implementation, by default True.
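As a rough construction sketch (not part of the reference documentation), the optional arguments above might be combined as follows. The values are arbitrary, a plain nn.Sequential is used as a stand-in for a custom channel-only MLP, and the kernel input width of 384 is an assumption taken from the default configuration shown in the Examples section below; verify both against your installed version.

```
from torch import nn
from neuralop.layers.gno_block import GNOBlock

# Variant using the nonlinear transform (d) and the 'nerf' positional embedding,
# with the optional open3d / torch_scatter backends disabled.
gno_nonlinear = GNOBlock(
    in_channels=2,
    out_channels=12,
    coord_dim=3,
    radius=0.035,
    transform_type="nonlinear",
    pos_embedding_type="nerf",
    use_open3d_neighbor_search=False,   # fall back to the native-PyTorch neighbor search
    use_torch_scatter_reduce=False,     # fall back to the native-PyTorch reduction
)

# Variant passing a custom channel-only MLP to parameterize the kernel k.
# NOTE: the input width (384 here) must match the kernel's input after positional
# embedding; 384 is assumed from the Examples section and may differ per configuration.
custom_kernel = nn.Sequential(
    nn.Linear(384, 128), nn.GELU(),
    nn.Linear(128, 12),
)
gno_custom = GNOBlock(
    in_channels=2, out_channels=12, coord_dim=3, radius=0.035,
    channel_mlp=custom_kernel,
)
```

The library's own LinearChannelMLP may be the safer choice for the custom kernel, since it is what GNOBlock constructs internally from channel_mlp_layers.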

References

[1] Zongyi Li, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Anima Anandkumar (2020). "Neural Operator: Graph Kernel Network for Partial Differential Equations." arXiv, https://arxiv.org/pdf/2003.03485.

Examples

```
>>> gno = GNOBlock(in_channels=2, out_channels=12, coord_dim=3, radius=0.035)
>>> gno
GNOBlock(
    (pos_embedding): SinusoidalEmbedding()
    (neighbor_search): NeighborSearch()
    (channel_mlp): LinearChannelMLP(
        (fcs): ModuleList(
        (0): Linear(in_features=384, out_features=128, bias=True)
        (1): Linear(in_features=128, out_features=256, bias=True)
        (2): Linear(in_features=256, out_features=128, bias=True)
        (3): Linear(in_features=128, out_features=12, bias=True)
        )
    )
    (integral_transform): IntegralTransform(
        (channel_mlp): LinearChannelMLP(
        (fcs): ModuleList(
            (0): Linear(in_features=384, out_features=128, bias=True)
            (1): Linear(in_features=128, out_features=256, bias=True)
            (2): Linear(in_features=256, out_features=128, bias=True)
            (3): Linear(in_features=128, out_features=12, bias=True)
        )
        )
    )
)
```
forward(y, x, f_y=None, reduction='sum')[source]

Compute a GNO neighbor search and kernel integral transform.

Parameters:
y : torch.Tensor of shape [n, d1]

n points of dimension d1 specifying the space to integrate over. If batched, these must remain constant over the whole batch, so no batch dimension is needed.

x : torch.Tensor of shape [m, d1]

m points of dimension d1 over which the output function is defined. Must share a domain with y.

f_y : torch.Tensor of shape [batch, n, d2] or [n, d2], default None

Function to integrate the kernel against, defined on the points y. The kernel is assumed diagonal, hence its output shape must be d3 for the transforms (b) or (d). If None, (a) is computed.
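A minimal forward-pass sketch with shapes matching the parameters above (random data; the radius is deliberately chosen larger than in the Examples section so that every query point is likely to have neighbors, and the expected output shape is an assumption to verify against your installed version):

```
import torch
from neuralop.layers.gno_block import GNOBlock

gno = GNOBlock(in_channels=2, out_channels=12, coord_dim=3, radius=0.2)

y = torch.rand(1000, 3)        # n=1000 input points in [0, 1]^3 (d1 = coord_dim = 3)
x = torch.rand(500, 3)         # m=500 query points in the same domain
f_y = torch.rand(1000, 2)      # input function on y, d2 = in_channels = 2

out = gno(y, x, f_y=f_y)       # neighbor search + kernel integral, form (b) by default
print(out.shape)               # expected: torch.Size([500, 12]) -- m points, out_channels
```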