API reference
neuralop: Neural Operators in Python
Models
In neuralop.models, we provide neural operator models that you can use directly in your applications.
FNO
We provide a general Fourier Neural Operator (FNO) that supports most use cases.
Its generic interface works in any dimension: the dimension is inferred from n_modes, a tuple giving the number of Fourier modes to keep along each dimension. A usage sketch follows the class lists below.
- FNO: N-dimensional Fourier Neural Operator
We also have dimension-specific classes:
- FNO1d: 1D Fourier Neural Operator
- FNO2d: 2D Fourier Neural Operator
- FNO3d: 3D Fourier Neural Operator
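A minimal usage sketch (tensor shapes are illustrative, and constructor defaults are assumed rather than taken from a specific release):

    import torch
    from neuralop.models import FNO

    # 2D operator: keep 16 Fourier modes along each spatial dimension
    model = FNO(n_modes=(16, 16), hidden_channels=64,
                in_channels=3, out_channels=1)

    x = torch.randn(4, 3, 64, 64)   # (batch, channels, height, width)
    y = model(x)                    # -> (4, 1, 64, 64)

Because the learned weights act in the Fourier domain, the same model can be evaluated on inputs of a different resolution without retraining.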
Tensorized FNO (TFNO)
N-D version:
- TFNO: N-dimensional Tensorized Fourier Neural Operator
Dimension-specific classes:
- TFNO1d: 1D Tensorized Fourier Neural Operator
- TFNO2d: 2D Tensorized Fourier Neural Operator
- TFNO3d: 3D Tensorized Fourier Neural Operator
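TFNO takes the same arguments as FNO, plus options controlling the tensor factorization of the spectral weights. A hedged sketch (the interpretation of rank is an assumption):

    from neuralop.models import TFNO

    # Tucker-factorized spectral weights; rank=0.05 keeps roughly 5% of the
    # parameters of the dense weight tensor (assumed interpretation)
    model = TFNO(n_modes=(16, 16), hidden_channels=64,
                 in_channels=3, out_channels=1,
                 factorization='tucker', rank=0.05)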
Spherical Fourier Neural Operators (SFNO)
- SFNO: N-dimensional Spherical Fourier Neural Operator
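SFNO replaces the FFT with a spherical harmonic transform, so inputs are expected on an equiangular latitude-longitude grid. A hedged sketch (the grid convention is an assumption):

    import torch
    from neuralop.models import SFNO

    model = SFNO(n_modes=(32, 64), hidden_channels=64,
                 in_channels=3, out_channels=3)

    x = torch.randn(4, 3, 32, 64)   # (batch, channels, n_latitude, n_longitude)
    y = model(x)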
Geometry-Informed Neural Operators (GINO)
- GINO: Geometry-informed Neural Operator
U-shaped Neural Operators (U-NO)
- UNO: U-shaped Neural Operator, as described in Rahman et al. (2022)
Layers
In addition to the full architectures, neuralop.layers provides building blocks, in the form of PyTorch layers, that you can use to build your own models:
Neural operator layers
Spectral convolutions (in Fourier domain):
General SpectralConv layer:
- SpectralConv: implements the spectral convolution component of a Fourier layer, as described in Li et al. (2021) and Kovachki et al. (2023)
Dimension-specific versions:
- SpectralConv1d: 1D spectral convolution
- SpectralConv2d: 2D spectral convolution
- SpectralConv3d: 3D spectral convolution
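A spectral convolution can be used as a drop-in layer; a minimal sketch (shapes illustrative):

    import torch
    from neuralop.layers.spectral_convolution import SpectralConv

    conv = SpectralConv(in_channels=32, out_channels=32, n_modes=(12, 12))
    x = torch.randn(4, 32, 64, 64)
    y = conv(x)   # same spatial shape; high-frequency modes are truncated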
Spherical convolutions (using spherical harmonics):
- SphericalConv: spherical convolution, the base class for the SFNO (Bonev et al., 2023)
To support geometry-informed (GINO) models, we also offer the ability to integrate kernels in the spatial domain, which we formulate as mappings between arbitrary coordinate meshes.
Graph convolutions and kernel integration:
- GNOBlock: implements a Graph Neural Operator layer, as described in Li et al. (2020)
- IntegralTransform: integral kernel transform (GNO); computes one of the following:
  (a) int_{A(x)} k(x, y) dy
  (b) int_{A(x)} k(x, y) * f(y) dy
  (c) int_{A(x)} k(x, y, f(y)) dy
  (d) int_{A(x)} k(x, y, f(y)) * f(y) dy
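A hedged sketch of variant (c): the kernel k is parametrized as an MLP over concatenated (x, y, f(y)) features, and the neighborhoods come from the neighbor search described below. The argument names (channel_mlp_layers, transform_type) are assumptions and may differ across versions:

    import torch
    from neuralop.layers.integral_transform import IntegralTransform
    from neuralop.layers.neighbor_search import NeighborSearch

    nb_search = NeighborSearch(use_open3d=False)   # pure-PyTorch backend
    transform = IntegralTransform(
        channel_mlp_layers=[3 + 3 + 2, 64, 2],  # k(x, y, f(y)) as an MLP
        transform_type="nonlinear",             # variant (c) above
    )

    y = torch.rand(100, 3)     # input coordinate mesh
    x = torch.rand(50, 3)      # output (query) coordinate mesh
    f_y = torch.rand(100, 2)   # function values on the input mesh
    neighbors = nb_search(y, x, radius=0.2)
    out = transform(y=y, neighbors=neighbors, x=x, f_y=f_y)   # -> (50, 2)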
We also provide additional layers that implement standard deep learning architectures as neural operators.
Local Integral/Differential Convolutions
- FiniteDifferenceConvolution: finite difference convolution layer introduced in "Neural Operators with Localized Integral and Differential Kernels" (ICML 2024), https://arxiv.org/abs/2402.16845
Discrete-Continuous (DISCO) Convolutions
- DiscreteContinuousConv2d: discrete-continuous (DISCO) convolutions on arbitrary 2D grids, as introduced in Ocampo et al. (2023)
- DiscreteContinuousConvTranspose2d: transpose variant of DISCO convolutions on arbitrary 2D grids
- EquidistantDiscreteContinuousConv2d: DISCO convolutions on equidistant 2D grids
- EquidistantDiscreteContinuousConvTranspose2d: transpose DISCO convolutions on equidistant 2D grids
Local FNO Blocks
- LocalFNOBlocks: implements a sequence of Fourier layers whose operations are first described in Li et al. (2021)
Codomain Attention (Transformer) Blocks
- CODABlocks: codomain attention blocks, which implement the transformer architecture in the operator learning framework, as described in Rahman et al. (2024)
Embeddings
Apply positional embeddings as additional channels on a function:
- GridEmbeddingND: a positional embedding as a regular N-D grid
- GridEmbedding2D: a simple positional embedding as a regular 2D grid
- SinusoidalEmbedding: a unified sinusoidal positional embedding in the styles of Transformers (Vaswani et al., 2017) and Neural Radiance Fields (NeRFs) (Mildenhall et al., 2020)
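A hedged sketch of a grid embedding, which appends coordinate channels to the input function (the in_channels and grid_boundaries parameter names are assumptions):

    import torch
    from neuralop.layers.embeddings import GridEmbedding2D

    # Appends x- and y-coordinate channels spanning the given boundaries
    embedding = GridEmbedding2D(in_channels=1, grid_boundaries=[[0, 1], [0, 1]])
    x = torch.randn(4, 1, 32, 32)
    x = embedding(x)   # -> (4, 3, 32, 32): original channel + 2 coordinate channels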
Neighbor search
Find neighborhoods on arbitrary coordinate meshes:
- NeighborSearch: neighborhood search between two arbitrary coordinate meshes
- native_neighbor_search: a native PyTorch implementation of neighborhood search between two arbitrary coordinate meshes
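A minimal sketch; the returned dictionary layout is an assumption based on the library's CSR-style convention:

    import torch
    from neuralop.layers.neighbor_search import NeighborSearch

    nb_search = NeighborSearch(use_open3d=False)   # pure-PyTorch backend
    data = torch.rand(1000, 3)      # source coordinates
    queries = torch.rand(200, 3)    # query coordinates

    # Dict with 'neighbors_index' and 'neighbors_row_splits' (CSR-like layout)
    neighbors = nb_search(data, queries, radius=0.1)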
Other resolution-invariant operations
Automatically apply resolution-dependent domain padding:
- DomainPadding: applies domain padding scaled automatically to the input's resolution
- SoftGating: applies soft-gating by weighting the channels of the given input
- skip_connection: a wrapper for several types of skip connections
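A minimal sketch of domain padding (the 25% padding fraction is illustrative):

    import torch
    from neuralop.layers.padding import DomainPadding

    padder = DomainPadding(domain_padding=0.25)   # pad each spatial dim by 25%
    x = torch.randn(4, 32, 64, 64)
    x_padded = padder.pad(x)        # applied before the spectral convolution
    x_restored = padder.unpad(x_padded)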
Model Dispatching
We provide a utility function that creates model instances from a configuration and validates the parameters it receives.
- get_model: returns an instantiated model for the given config
- available_models: lists the available neural operators
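A hedged sketch of dispatching; the exact config schema (an 'arch' key naming the architecture, plus a per-architecture sub-dictionary) is an assumption and may differ across versions:

    from neuralop import get_model

    config = {
        "arch": "fno",
        "fno": {"n_modes": [16, 16], "hidden_channels": 64,
                "in_channels": 3, "out_channels": 1},
    }
    model = get_model(config)   # checks the parameters before instantiating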
Training
We provide functionality that automates the boilerplate code associated with training a machine learning model to minimize a loss function on a dataset:
- Trainer: a general Trainer class to train neural operators on given datasets
- IncrementalFNOTrainer: subclasses Trainer to implement the training logic of the Incremental-FNO, as described in George et al. (2024)
- LpLoss: the L-p norm between two discretized d-dimensional functions
- H1Loss: the H1 Sobolev norm between two discretized d-dimensional functions
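A hedged end-to-end sketch following the library's example scripts (argument names may differ across versions); the small Darcy-Flow loaders are described in the Data section below:

    import torch
    from neuralop import Trainer, LpLoss, H1Loss
    from neuralop.models import FNO
    from neuralop.data.datasets import load_darcy_flow_small

    # Small bundled Darcy-Flow sample (see the Data section below)
    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, batch_size=16,
        test_resolutions=[16, 32], n_tests=[50, 50], test_batch_sizes=[16, 16],
    )

    model = FNO(n_modes=(16, 16), hidden_channels=32,
                in_channels=1, out_channels=1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=8e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

    l2loss = LpLoss(d=2, p=2)   # relative L2 between discretized 2D functions
    h1loss = H1Loss(d=2)        # H1 Sobolev norm

    trainer = Trainer(model=model, n_epochs=20, data_processor=data_processor)
    trainer.train(train_loader=train_loader, test_loaders=test_loaders,
                  optimizer=optimizer, scheduler=scheduler,
                  regularizer=False, training_loss=h1loss,
                  eval_losses={"l2": l2loss, "h1": h1loss})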
Data
In neuralop.data, we provide APIs for standardizing PDE datasets (.datasets) and transforming raw data into model inputs (.transforms).
We also ship a small Darcy-Flow sample for testing (see the loader sketch after the list below), along with downloadable datasets for Darcy-Flow, Navier-Stokes, and Car-CFD:
DarcyDataset stores data generated according to Darcy's Law. |
|
NavierStokesDataset stores data generated according to the 2d incompressible Navier-Stokes equations. |
|
CarCFDDataset is a processed version of the dataset introduced in [Rfaac2f8b9be8-1], which encodes a triangular mesh over the surface of a 3D model car and provides the air pressure at each centroid and vertex of the mesh when the car is placed in a simulated wind tunnel with a recorded inlet velocity. |
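The small test sample can be loaded directly as PyTorch DataLoaders; a minimal sketch (keyword names follow the library's examples, and the batch dict keys are assumptions):

    from neuralop.data.datasets import load_darcy_flow_small

    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, batch_size=16,
        test_resolutions=[16, 32], n_tests=[50, 50], test_batch_sizes=[16, 16],
    )
    batch = next(iter(train_loader))
    print(batch["x"].shape, batch["y"].shape)   # input coefficients, solution field
    print(sorted(test_loaders))                 # test loaders keyed by resolution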
DataProcessors
Much like torchvision's datasets module, our data module also includes utilities to transform data from its raw form into the form expected by models and loss functions:
- DefaultDataProcessor: a simple processor to pre/post-process data before training or running inference with a model
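A minimal sketch; the preprocess/postprocess method names follow the library's training loop convention, and the dict keys are assumptions:

    import torch
    from neuralop.data.transforms.data_processors import DefaultDataProcessor

    processor = DefaultDataProcessor()   # optionally pass in_/out_ normalizers
    sample = {"x": torch.randn(4, 1, 32, 32), "y": torch.randn(4, 1, 32, 32)}
    sample = processor.preprocess(sample)   # applied before the forward pass
    # out, sample = processor.postprocess(out, sample)  # undoes output normalization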