neuralop.layers.integral_transform.IntegralTransform

class neuralop.layers.integral_transform.IntegralTransform(channel_mlp=None, channel_mlp_layers=None, channel_mlp_non_linearity=<built-in function gelu>, transform_type='linear', weighting_fn=None, reduction='sum', use_torch_scatter=True)[source]

Integral Kernel Transform (GNO). Computes one of the following:

  (a) int_{A(x)} k(x, y) dy

  (b) int_{A(x)} k(x, y) * f(y) dy

  (c) int_{A(x)} k(x, y, f(y)) dy

  (d) int_{A(x)} k(x, y, f(y)) * f(y) dy

x : Points for which the output is defined

y : Points for which the input is defined

A(x) : A subset of all points y (depending on each x) over which to integrate

k : A kernel parametrized as a MLP (LinearChannelMLP)

f : Input function to integrate against given on the points y

If f is not given, a transform of type (a) is computed. Otherwise transforms (b), (c), or (d) are computed. The sets A(x) are specified as a graph in CRS format.
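The CRS (compressed row storage) encoding of the sets A(x) can be illustrated with a small standalone sketch in plain PyTorch, independent of this class: "neighbors_index" concatenates the neighbor indices of every query point, and "neighbors_row_splits" marks where each point's neighbor list begins and ends.

```python
import torch

# CRS description of the sets A(x): the slice
# neighbors_index[row_splits[i]:row_splits[i+1]] is A(x_i).
neighbors = {
    "neighbors_index": torch.tensor([0, 1, 1, 2, 3]),
    "neighbors_row_splits": torch.tensor([0, 2, 4, 5]),
}

def neighbor_lists(nbrs):
    """Expand the CRS dict into one Python list of neighbor indices per point."""
    idx = nbrs["neighbors_index"]
    splits = nbrs["neighbors_row_splits"]
    return [idx[splits[i]:splits[i + 1]].tolist() for i in range(len(splits) - 1)]

print(neighbor_lists(neighbors))  # [[0, 1], [1, 2], [3]]
```

Here three query points have neighborhoods of sizes 2, 2, and 1, so `neighbors_row_splits` has length (number of points) + 1.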

Parameters:
channel_mlp : torch.nn.Module, default None

MLP parametrizing the kernel k. Its input dimension should be dim x + dim y, or dim x + dim y + dim f for the nonlinear transforms. The MLP should operate only across the channel dimension, never across points, in order to preserve the discretization invariance of the kernel integral.

channel_mlp_layers : list, default None

List of layer sizes specifying an MLP that parametrizes the kernel k. The MLP will be instantiated by the LinearChannelMLP class.

channel_mlp_non_linearity : callable, default torch.nn.functional.gelu

Non-linearity used by the LinearChannelMLP class. Only used if channel_mlp_layers is given and channel_mlp is None.

transform_type : str, default ‘linear’

Which integral transform to compute. The mapping is:

  ‘linear_kernelonly’ -> (a)

  ‘linear’ -> (b)

  ‘nonlinear_kernelonly’ -> (c)

  ‘nonlinear’ -> (d)

If the input f is not given, then (a) is computed regardless of this parameter.

use_torch_scatter : bool, default True

Whether to use torch-scatter to perform the grouped reductions in the IntegralTransform. If False, the native Python reduction in neuralop.layers.segment_csr is used instead.
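As a rough sketch of what such a grouped (segment) reduction does — this is an illustration in plain PyTorch, not the library's actual neuralop.layers.segment_csr implementation — rows of a source tensor are reduced within contiguous CRS segments:

```python
import torch

def segment_csr_sum(src, row_splits, reduction="sum"):
    """Reduce rows of `src` within each CSR segment defined by `row_splits`.

    Plain-PyTorch illustration of a grouped reduction; the library's own
    implementation (or torch-scatter's segment_csr) is vectorized.
    """
    out = []
    for i in range(len(row_splits) - 1):
        seg = src[row_splits[i]:row_splits[i + 1]]
        if seg.numel() == 0:
            # Empty neighborhood A(x_i): contribute zeros.
            out.append(torch.zeros(src.shape[1:], dtype=src.dtype))
        elif reduction == "mean":
            out.append(seg.mean(dim=0))
        else:
            out.append(seg.sum(dim=0))
    return torch.stack(out)

src = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
row_splits = torch.tensor([0, 2, 4])  # two segments: rows 0-1 and rows 2-3
print(segment_csr_sum(src, row_splits))          # tensor([[3.], [7.]])
print(segment_csr_sum(src, row_splits, "mean"))  # tensor([[1.5000], [3.5000]])
```

The "mean" reduction corresponds to the Monte-Carlo weighting 1/|A(x)| described in forward below.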

Warning

torch-scatter is an optional dependency that conflicts with the newest versions of PyTorch, so you must handle the conflict explicitly in your environment. See Sparse computations with PyTorch-Scatter for more information.

Methods

forward(y, neighbors[, x, f_y, weights])

Compute a kernel integral transform.

forward(y, neighbors, x=None, f_y=None, weights=None)[source]

Compute a kernel integral transform. Assumes x=y if not specified.

Integral is taken w.r.t. the neighbors.

If no weights are given, a Monte-Carlo approximation is made.

Note

For transforms of type (a) or (c), the number of output channels must be the same as the number of channels of f.

Parameters:
y : torch.Tensor of shape [n, d1]

n points of dimension d1 specifying the space to integrate over. If batched, these must remain constant over the whole batch so no batch dim is needed.

neighbors : dict

The sets A(x) given in CRS format. The dict must contain the keys “neighbors_index” and “neighbors_row_splits”. For descriptions of these, see NeighborSearch. If batch > 1, the neighbors must be constant across the entire batch.

x : torch.Tensor of shape [m, d2], default None

m points of dimension d2 over which the output function is defined. If None, x = y.

f_y : torch.Tensor of shape [batch, n, d3] or [n, d3], default None

Function to integrate the kernel against, defined on the points y. The kernel is assumed diagonal, hence its output shape must be d3 for transforms (b) and (d). If None, (a) is computed.

weights : torch.Tensor of shape [n,], default None

Weights for each point y proportional to the volume around f(y) being integrated. For example, suppose d1 = 1 and let y_1 < y_2 < … < y_{n+1} be some points. Then, for a Riemann sum, the weights are w_j = y_{j+1} - y_j. If None, 1/|A(x)| is used.
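The Riemann-sum weighting above can be checked numerically with a standalone sketch (plain PyTorch, not using this class): with weights w_j = y_{j+1} - y_j, the weighted sum of f(y_j) approximates the integral of f.

```python
import torch

# Riemann-sum weights on a non-uniform 1D grid: w_j = y_{j+1} - y_j.
y = torch.linspace(0.0, 1.0, 1001) ** 2  # non-uniform points covering [0, 1]
weights = y[1:] - y[:-1]                 # volume element around each point

f = y[:-1] ** 2               # integrand f(y) = y^2 at the left endpoints
approx = (weights * f).sum()  # left Riemann sum, approximates 1/3
print(float(approx))          # close to 1/3 = integral of y^2 over [0, 1]
```

Passing such weights to forward replaces the default Monte-Carlo weighting 1/|A(x)| with a quadrature rule adapted to the point distribution.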