neuralop.layers.integral_transform.IntegralTransform

class neuralop.layers.integral_transform.IntegralTransform(channel_mlp=None, channel_mlp_layers=None, channel_mlp_non_linearity=<built-in function gelu>, transform_type='linear', use_torch_scatter=True)[source]

Integral Kernel Transform (GNO). Computes one of the following:

  (a) int_{A(x)} k(x, y) dy

  (b) int_{A(x)} k(x, y) * f(y) dy

  (c) int_{A(x)} k(x, y, f(y)) dy

  (d) int_{A(x)} k(x, y, f(y)) * f(y) dy

x : Points for which the output is defined

y : Points for which the input is defined

A(x) : A subset of all points y (depending on each x) over which to integrate

k : A kernel parametrized as an MLP (LinearChannelMLP)

f : Input function to integrate against, given on the points y

If f is not given, a transform of type (a) is computed. Otherwise, transform (b), (c), or (d) is computed, depending on transform_type. The sets A(x) are specified as a graph in CSR (compressed sparse row) format.
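A minimal construction sketch (the layer sizes, point dimensions, and transform type below are illustrative assumptions, not defaults):

    import torch
    from neuralop.layers.integral_transform import IntegralTransform

    # Kernel k(x, y) for transform (b) ('linear'): the MLP input dimension
    # is dim x + dim y = 2 + 2 = 4 and its output dimension is 3, matching
    # the channel count d3 of the function f it will be multiplied against.
    gno = IntegralTransform(
        channel_mlp_layers=[4, 32, 32, 3],  # illustrative sizes
        transform_type="linear",
    )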

Parameters:
channel_mlp : torch.nn.Module, default None

MLP parametrizing the kernel k. Its input dimension should be dim x + dim y, or dim x + dim y + dim f for the nonlinear transforms. The MLP should not act pointwise on individual entries; it should operate only across the channel dimension (never across points) to preserve the discretization invariance of the kernel integral.

channel_mlp_layers : list, default None

List of layer sizes specifying an MLP that parametrizes the kernel k. The MLP will be instantiated by the LinearChannelMLP class.

channel_mlp_non_linearity : callable, default torch.nn.functional.gelu

Non-linearity used by the LinearChannelMLP class. Only used if channel_mlp_layers is given and channel_mlp is None.

transform_type : str, default 'linear'

Which integral transform to compute. The mapping is:

  'linear_kernelonly' -> (a)
  'linear' -> (b)
  'nonlinear_kernelonly' -> (c)
  'nonlinear' -> (d)

If the input f is not given, then (a) is computed regardless of this parameter.

use_torch_scatter : bool, default True

Whether to use torch_scatter's implementation of segment_csr or the native PyTorch version. torch_scatter should be installed by default, but there are known versioning issues on some Linux builds of CPU-only PyTorch. Try setting this to False if you encounter an error from torch_scatter.
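For reference, segment_csr reduces a flat tensor of per-neighbor values into one output per query point using the row-split offsets. A minimal sketch of the mean-reduction semantics in plain PyTorch (illustrative only, not the library's actual fallback code):

    import torch

    def segment_csr_mean(src, row_splits):
        # out[i] = mean of src[row_splits[i] : row_splits[i+1]], i.e. the
        # average over the neighborhood A(x_i); empty rows stay zero.
        splits = row_splits.tolist()
        out = src.new_zeros(len(splits) - 1, *src.shape[1:])
        for i in range(len(splits) - 1):
            start, end = splits[i], splits[i + 1]
            if end > start:
                out[i] = src[start:end].mean(dim=0)
        return out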

Methods

forward(y, neighbors[, x, f_y, weights])

Compute a kernel integral transform

forward(y, neighbors, x=None, f_y=None, weights=None)[source]

Compute a kernel integral transform

Parameters:
y : torch.Tensor of shape [n, d1]

n points of dimension d1 specifying the space to integrate over. If batched, these must remain constant over the whole batch, so no batch dimension is needed.

neighbors : dict

The sets A(x) given in CSR format. The dict must contain the keys "neighbors_index" and "neighbors_row_splits"; for descriptions of the two, see NeighborSearch. If batch > 1, the neighbors must be constant across the entire batch. A hand-built example appears after this parameter list.

x : torch.Tensor of shape [m, d2], default None

m points of dimension d2 over which the output function is defined. If None, x = y.

f_y : torch.Tensor of shape [batch, n, d3] or [n, d3], default None

Function to integrate the kernel against, defined on the points y. The kernel is assumed to be diagonal, hence its output dimension must be d3 for transforms (b) and (d). If None, (a) is computed.

weights : torch.Tensor of shape [n,], default None

Weights for each point y, proportional to the volume around y over which f(y) is integrated. For example, suppose d1 = 1 and let y_1 < y_2 < … < y_{n+1} be some points. Then, for a Riemann sum, the weights are y_{j+1} - y_j. If None, 1/|A(x)| is used.
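Putting the pieces together, an end-to-end sketch on a 1-D grid. The neighborhoods are built by hand in CSR format purely for illustration (a NeighborSearch radius query would produce the same structure), and all sizes are assumptions:

    import torch
    from neuralop.layers.integral_transform import IntegralTransform

    n = 8
    y = torch.linspace(0, 1, n).reshape(n, 1)   # [n, d1] with d1 = 1
    f_y = torch.randn(n, 3)                     # [n, d3] with d3 = 3

    # A(x): each point integrates over itself and its immediate grid
    # neighbors, written directly in CSR form.
    index, splits = [], [0]
    for i in range(n):
        index.extend(j for j in (i - 1, i, i + 1) if 0 <= j < n)
        splits.append(len(index))
    neighbors = {
        "neighbors_index": torch.tensor(index),
        "neighbors_row_splits": torch.tensor(splits),
    }

    # Riemann-style weights: the (uniform) spacing between grid points.
    weights = torch.full((n,), 1.0 / (n - 1))

    # Kernel input dim = d1 + d1 = 2 (x defaults to y); output dim = d3 = 3.
    gno = IntegralTransform(channel_mlp_layers=[2, 32, 3],
                            transform_type="linear")
    out = gno(y, neighbors, f_y=f_y, weights=weights)  # out: [n, 3]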