API reference

neuralop: Neural Operators in Python

Models

In neuralop.models, we provide neural operator models that you can use directly in your applications.

FNO

We provide a general Fourier Neural Operator (FNO) that supports most use cases.

We have a generic interface that works in any dimension, which is inferred from n_modes (a tuple with the number of modes to keep in the Fourier domain for each dimension).

FNO(*args, **kwargs)

N-Dimensional Fourier Neural Operator.

We also have dimension-specific classes:

FNO1d(*args, **kwargs)

1D Fourier Neural Operator

FNO2d(*args, **kwargs)

2D Fourier Neural Operator

FNO3d(*args, **kwargs)

3D Fourier Neural Operator
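
For example, a 2D operator can be instantiated from the generic FNO class by passing a length-2 n_modes tuple. A minimal sketch (hyperparameter values are illustrative):

    import torch
    from neuralop.models import FNO

    # A 2D FNO: the dimension is inferred from the length of n_modes.
    model = FNO(n_modes=(16, 16), hidden_channels=64,
                in_channels=3, out_channels=1)

    x = torch.randn(4, 3, 64, 64)  # (batch, channels, height, width)
    y = model(x)                   # -> (4, 1, 64, 64)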


Tensorized FNO (TFNO)

N-D version:

TFNO(*args, **kwargs)

N-Dimensional Tensorized Fourier Neural Operator.

Dimension-specific classes:

TFNO1d(*args, **kwargs)

1D Tensorized Fourier Neural Operator

TFNO2d(*args, **kwargs)

2D Tensorized Fourier Neural Operator

TFNO3d(*args, **kwargs)

3D Tensorized Fourier Neural Operator
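
TFNO exposes the same interface as FNO, with additional options controlling the tensor factorization of the spectral weights. A minimal sketch (the factorization and rank values are illustrative):

    from neuralop.models import TFNO

    # A 2D TFNO whose spectral weights are stored in a low-rank
    # Tucker factorization (factorization and rank are illustrative).
    model = TFNO(n_modes=(16, 16), hidden_channels=64,
                 in_channels=3, out_channels=1,
                 factorization='tucker', rank=0.05)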


Spherical Fourier Neural Operators (SFNO)

SFNO(*args, **kwargs)

N-Dimensional Spherical Fourier Neural Operator.


Geometry-Informed Neural Operators (GINO)

GINO(*args, **kwargs)

GINO: Geometry-informed Neural Operator.


Local Neural Operators (LocalNO)

LocalNO(*args, **kwargs)

N-Dimensional Local Fourier Neural Operator.


U-shaped Neural Operators (U-NO)

UNO(in_channels, out_channels, hidden_channels)

U-Shaped Neural Operator, as described in [Ra5b933fe5a53-1].


Layers

In addition to the full architectures, neuralop.layers provides building blocks, in the form of PyTorch layers, that you can use to build your own models:

Fourier Convolutions

General SpectralConv layer:

SpectralConv(in_channels, out_channels, n_modes)

SpectralConv implements the Spectral Convolution component of a Fourier layer described in [R17685265b205-1] and [R17685265b205-2].
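
As with the full models, the dimension of the layer is inferred from the length of n_modes. A minimal sketch of using the layer on its own (shapes are illustrative; the import path assumes the neuralop.layers.spectral_convolution module):

    import torch
    from neuralop.layers.spectral_convolution import SpectralConv

    # A 2D spectral convolution keeping 16 Fourier modes per dimension.
    conv = SpectralConv(in_channels=3, out_channels=3, n_modes=(16, 16))

    x = torch.randn(4, 3, 64, 64)
    y = conv(x)  # -> (4, 3, 64, 64)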


Spherical Convolutions

SphericalConv(in_channels, out_channels, n_modes)

Spherical Convolution, base class for the SFNO [Radd7fd10dc7a-1]


To support geometry-informed (GINO) models, we also offer the ability to integrate kernels in the spatial domain, which we formulate as mappings between arbitrary coordinate meshes.

Graph convolutions and kernel integration

GNOBlock(in_channels, out_channels, ...[, ...])

GNOBlock implements a Graph Neural Operator layer as described in [R61402ca715ea-1].


IntegralTransform([channel_mlp, ...])

Integral Kernel Transform (GNO). Computes one of the following:

(a) int_{A(x)} k(x, y) dy
(b) int_{A(x)} k(x, y) * f(y) dy
(c) int_{A(x)} k(x, y, f(y)) dy
(d) int_{A(x)} k(x, y, f(y)) * f(y) dy


We also provide additional layers that implement standard deep learning architectures as neural operators.

Local Integral/Differential Convolutions

FiniteDifferenceConvolution(in_channels, ...)

Finite Difference Convolution Layer introduced in [R2ffab17b35f8-1], "Neural Operators with Localized Integral and Differential Kernels" (ICML 2024), https://arxiv.org/abs/2402.16845.

Discrete-Continuous (DISCO) Convolutions

DiscreteContinuousConv2d(in_channels, ...[, ...])

Discrete-continuous convolutions (DISCO) on arbitrary 2d grids as implemented in [R263f5710516c-1].

DiscreteContinuousConvTranspose2d(...[, ...])

Transpose variant of discrete-continuous convolutions on arbitrary 2d grids as implemented for [R7aedc2806d9a-1].

EquidistantDiscreteContinuousConv2d(...[, ...])

Discrete-continuous convolutions (DISCO) on equidistant 2d grids as implemented for [R9a510dbeca5b-1].

EquidistantDiscreteContinuousConvTranspose2d(...)

Transpose variant of discrete-continuous convolutions (DISCO) on equidistant 2d grids as implemented for [R8275a1b61e46-1].

Local NO Blocks

LocalNOBlocks(in_channels, out_channels, ...)

LocalNOBlocks implements a sequence of Fourier layers whose operations were first described in [R367398f5802a-1].

Codomain Attention (Transformer) Blocks

CODALayer(n_modes[, n_heads, ...])

Co-domain Attention Blocks (CODALayer) implement the transformer architecture in the operator learning framework, as described in [Re703e87ec801-1].


Embeddings

Apply positional embeddings as additional channels on a function:

GridEmbeddingND(in_channels[, dim, ...])

GridEmbeddingND applies a simple positional embedding as a regular ND grid.

GridEmbedding2D(in_channels[, grid_boundaries])

GridEmbedding2D applies a simple positional embedding as a regular 2D grid.

SinusoidalEmbedding(in_channels[, ...])

SinusoidalEmbedding provides a unified sinusoidal positional embedding in the styles of Transformers [R2f544174e18d-1] and Neural Radiance Fields (NERFs) [R2f544174e18d-2].
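
A minimal sketch of appending a regular 2D positional grid as extra channels (the import path and default grid boundaries are assumptions; check the class documentation):

    import torch
    from neuralop.layers.embeddings import GridEmbedding2D

    # Appends x/y coordinate channels over [0, 1] x [0, 1] to the input.
    embedding = GridEmbedding2D(in_channels=1, grid_boundaries=[[0, 1], [0, 1]])

    x = torch.randn(4, 1, 32, 32)
    y = embedding(x)  # -> (4, 3, 32, 32): input channel + 2 coordinate channels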


Neighbor search

Find neighborhoods on arbitrary coordinate meshes:

NeighborSearch([use_open3d, return_norm])

Neighborhood search between two arbitrary coordinate meshes.

native_neighbor_search(data, queries, radius)

Native PyTorch implementation of a neighborhood search between two arbitrary coordinate meshes.
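
A minimal sketch of a radius query between two point clouds (the CSR-style keys of the returned dictionary are an assumption; check the function documentation):

    import torch
    from neuralop.layers.neighbor_search import native_neighbor_search

    data = torch.rand(1000, 3)    # source point cloud
    queries = torch.rand(200, 3)  # query points

    # For each query point, find all source points within the radius.
    nbrs = native_neighbor_search(data, queries, radius=0.1)

    # Assumed CSR-style output: a flat tensor of neighbor indices plus
    # per-query offsets into it.
    print(nbrs['neighbors_index'].shape, nbrs['neighbors_row_splits'].shape)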


Other resolution-invariant operations


Automatically apply resolution-dependent domain padding:

DomainPadding(domain_padding[, ...])

Applies domain padding scaled automatically to the input's resolution
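
A minimal sketch, assuming the layer exposes pad/unpad methods (padding amounts are given as a fraction of the input resolution):

    import torch
    from neuralop.layers.padding import DomainPadding

    # Pad each spatial dimension by 25% of its resolution.
    padding = DomainPadding(domain_padding=0.25)

    x = torch.randn(4, 3, 64, 64)
    x_padded = padding.pad(x)             # padded to a larger spatial size
    x_restored = padding.unpad(x_padded)  # back to (4, 3, 64, 64)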


SoftGating(in_features[, out_features, ...])

Applies soft-gating by weighting the channels of the given input

skip_connection(in_features, out_features[, ...])

A wrapper for several types of skip connections.


Model Dispatching

We provide a utility function to create model instances from a configuration. It also performs checks on the parameters it receives.

get_model(config)

Returns an instantiated model for the given config

available_models()

List the available neural operators
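
A minimal sketch, assuming a dict-like configuration in which 'arch' names the architecture and a sub-section keyed by that name holds its keyword arguments (the exact expected layout and import path may differ; see the function documentation):

    from neuralop.models import get_model, available_models

    print(available_models())  # list the registered architectures

    # Hypothetical configuration layout (illustrative values).
    config = {
        'arch': 'fno',
        'fno': {
            'n_modes': (16, 16),
            'hidden_channels': 64,
            'in_channels': 3,
            'out_channels': 1,
        },
    }
    model = get_model(config)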


Training

We provide functionality that automates the boilerplate code associated with training a machine learning model to minimize a loss function on a dataset:

Trainer(*, model, n_epochs[, wandb_log, ...])

A general Trainer class to train neural operators on given datasets.

IncrementalFNOTrainer(model, n_epochs[, ...])

IncrementalFNOTrainer subclasses the Trainer to implement specific logic for the Incremental-FNO as described in [Rb82b7576506a-1].


LpLoss([d, p, measure, reduction, eps])

LpLoss provides the L-p norm between two discretized d-dimensional functions.

H1Loss([d, measure, reduction, eps, ...])

H1Loss provides the H1 Sobolev norm between two d-dimensional discretized functions.
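
A minimal end-to-end sketch on the small Darcy flow dataset described in the Data section below (hyperparameters are illustrative, and exact keyword names may vary slightly between versions):

    import torch
    from neuralop import Trainer, LpLoss, H1Loss
    from neuralop.models import FNO
    from neuralop.data.datasets import load_darcy_flow_small

    # Small test dataset shipped with the library (see the Data section).
    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, batch_size=16,
        n_tests=[50, 50], test_resolutions=[16, 32],
        test_batch_sizes=[16, 16],
    )

    model = FNO(n_modes=(16, 16), hidden_channels=32,
                in_channels=1, out_channels=1)

    optimizer = torch.optim.Adam(model.parameters(), lr=8e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

    train_loss = H1Loss(d=2)
    eval_losses = {'l2': LpLoss(d=2, p=2), 'h1': H1Loss(d=2)}

    trainer = Trainer(model=model, n_epochs=20,
                      data_processor=data_processor,
                      wandb_log=False, verbose=True)
    trainer.train(train_loader=train_loader, test_loaders=test_loaders,
                  optimizer=optimizer, scheduler=scheduler,
                  regularizer=False, training_loss=train_loss,
                  eval_losses=eval_losses)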


Data

In neuralop.data, we provide APIs for standardizing PDE datasets (.datasets) and transforming raw data into model inputs (.transforms).

We also ship a small dataset for testing:

load_darcy_flow_small(n_train, n_tests, ...)

Load a small, pre-packaged Darcy flow dataset for quick testing and examples.

We also provide downloadable datasets for Darcy-Flow, Navier-Stokes, and Car-CFD, as well as a general-purpose tensor dataset:

DarcyDataset(root_dir, n_train, n_tests, ...)

DarcyDataset stores data generated according to Darcy's Law.

NavierStokesDataset(root_dir, n_train, ...)

NavierStokesDataset stores data generated according to the 2d incompressible Navier-Stokes equations.

CarCFDDataset(root_dir[, n_train, n_test, ...])

CarCFDDataset is a processed version of the dataset introduced in [Rfaac2f8b9be8-1]. It encodes a triangular mesh over the surface of a 3D model car and provides the air pressure at each centroid and vertex of the mesh when the car is placed in a simulated wind tunnel with a recorded inlet velocity.

TensorDataset(x, y[, transform_x, transform_y])

TensorDataset is a general-purpose dataset that wraps paired input and output tensors.
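
A minimal sketch of wrapping raw tensors (the dict-style sample format and import path are assumptions; check the class documentation):

    import torch
    from neuralop.data.datasets import TensorDataset

    x = torch.randn(100, 1, 32, 32)  # inputs: (n_samples, channels, h, w)
    y = torch.randn(100, 1, 32, 32)  # targets
    dataset = TensorDataset(x, y)

    sample = dataset[0]  # assumed to be a dict with 'x' and 'y' entries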


DataProcessors

Much like PyTorch's torchvision.datasets module, our data module also includes utilities to transform data from its raw form into the form expected by models and loss functions:

DefaultDataProcessor([in_normalizer, ...])

DefaultDataProcessor is a simple processor for pre- and post-processing data when training a model or running inference.

MGPatchingDataProcessor(model, levels, ...)

MGPatchingDataProcessor applies multi-grid patching to the inputs and outputs of a model during training.
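
A minimal sketch of the DefaultDataProcessor round-trip, assuming dict-style samples with 'x' and 'y' keys (import path assumed; with no normalizers attached the processor mostly passes data through):

    import torch
    from neuralop.data.transforms.data_processors import DefaultDataProcessor

    # Normalizers fit on training data would normally be passed here.
    processor = DefaultDataProcessor(in_normalizer=None, out_normalizer=None)

    sample = {'x': torch.randn(4, 1, 32, 32), 'y': torch.randn(4, 1, 32, 32)}
    sample = processor.preprocess(sample)  # would normalize inputs if attached

    output = torch.randn(4, 1, 32, 32)     # stand-in for a model's output
    output, sample = processor.postprocess(output, sample)  # undoes output normalization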