API reference
neuralop: Neural Operators in Python
Models
In neuralop.models, we provide neural operator models that you can use directly in your applications.
FNO
We provide a general Fourier Neural Operator (FNO) that supports most use cases.
We have a generic interface that works in any dimension; the dimension is inferred from n_modes, a tuple giving the number of modes to keep in the Fourier domain for each dimension (see the sketch after the list below).
- FNO: N-dimensional Fourier Neural Operator.
We also have dimension-specific classes:
- FNO1d: 1D Fourier Neural Operator
- FNO2d: 2D Fourier Neural Operator
- FNO3d: 3D Fourier Neural Operator
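For example, a minimal sketch of instantiating the N-dimensional FNO; the hyperparameter values here are illustrative:

    import torch
    from neuralop.models import FNO

    # The operator's dimension is inferred from the length of n_modes:
    # a 2-tuple of modes gives a 2D FNO.
    model = FNO(n_modes=(16, 16), hidden_channels=64,
                in_channels=3, out_channels=1)

    x = torch.randn(4, 3, 64, 64)  # (batch, channels, height, width)
    y = model(x)                   # -> (4, 1, 64, 64)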
Tensorized FNO (TFNO)
N-D version:
- TFNO: N-dimensional Tensorized Fourier Neural Operator.
Dimension-specific classes:
- TFNO1d: 1D Tensorized Fourier Neural Operator
- TFNO2d: 2D Tensorized Fourier Neural Operator
- TFNO3d: 3D Tensorized Fourier Neural Operator
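A TFNO is configured like an FNO, with its spectral weights stored in factorized form; a minimal sketch (the factorization and rank values are illustrative):

    from neuralop.models import TFNO

    # Tucker-factorized spectral weights, keeping roughly 5% of the
    # dense parameter count
    model = TFNO(n_modes=(16, 16), hidden_channels=64,
                 in_channels=3, out_channels=1,
                 factorization='tucker', rank=0.05)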
Spherical Fourier Neural Operators (SFNO)
- SFNO: N-dimensional Spherical Fourier Neural Operator.
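The SFNO follows the same interface; a minimal sketch (note that spherical convolutions rely on the torch-harmonics package):

    from neuralop.models import SFNO

    # Spherical FNO; the grid sizes and channel counts are illustrative
    model = SFNO(n_modes=(32, 32), hidden_channels=64,
                 in_channels=3, out_channels=3)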
Geometry-Informed Neural Operators (GINO)
- GINO: Geometry-Informed Neural Operator.
U-shaped Neural Operators (U-NO)
- UNO: U-shaped Neural Operator [1].
Layers
In addition to the full architectures, neuralop.layers provides building blocks, in the form of PyTorch layers, that you can use to build your own models:
Neural operator layers
Spectral convolutions (in the Fourier domain):
General SpectralConv layer:
- SpectralConv: generic N-dimensional spectral convolution.
Dimension-specific versions:
- SpectralConv1d: 1D spectral convolution
- SpectralConv2d: 2D spectral convolution
- SpectralConv3d: 3D spectral convolution
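As a sketch, a spectral convolution can also be used on its own inside a custom model; the import path below assumes the layer lives in neuralop.layers.spectral_convolution:

    import torch
    from neuralop.layers.spectral_convolution import SpectralConv

    # A single 2D spectral convolution keeping 16 Fourier modes per dimension
    conv = SpectralConv(in_channels=32, out_channels=32, n_modes=(16, 16))

    x = torch.randn(4, 32, 64, 64)
    y = conv(x)  # spatial shape is preserved: (4, 32, 64, 64)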
Spherical convolutions (using spherical harmonics):
- SphericalConv: spherical convolution, the base class for the SFNO [1].
To support geometry-informed (GINO) models, we also offer the ability to integrate kernels in the spatial domain, which we formulate as mappings between arbitrary coordinate meshes.
Graph convolutions and kernel integration:
- GNOBlock: implements a Graph Neural Operator layer as described in [1].
- IntegralTransform: the integral kernel transform (GNO), which computes one of the following:

  (a) \int_{A(x)} k(x, y) \, dy
  (b) \int_{A(x)} k(x, y) \, f(y) \, dy
  (c) \int_{A(x)} k(x, y, f(y)) \, dy
  (d) \int_{A(x)} k(x, y, f(y)) \, f(y) \, dy
Embeddings
Apply positional embeddings as additional channels on a function:
- GridEmbeddingND: a positional embedding as a regular N-D grid.
- GridEmbedding2D: a simple positional embedding as a regular 2D grid.
- SinusoidalEmbedding: a unified sinusoidal positional embedding in the styles of Transformers (George, R., Zhao, J., Kossaifi, J., Li, Z., and Anandkumar, A., 2024) and Neural Radiance Fields (NeRFs) (Mildenhall, B. et al., 2020).
Neighbor search
Find neighborhoods on arbitrary coordinate meshes:
- NeighborSearch: neighborhood search between two arbitrary coordinate meshes.
- native_neighbor_search: a native PyTorch implementation of a neighborhood search between two arbitrary coordinate meshes.
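As a sketch of the function-style interface (the import path and return format are assumptions; check the module documentation):

    import torch
    from neuralop.layers.neighbor_search import native_neighbor_search

    data = torch.rand(100, 2)    # source coordinate mesh
    queries = torch.rand(10, 2)  # query coordinates

    # Indices of all source points within the given radius of each query,
    # returned in CSR-like form (flat indices plus per-query row splits).
    nbrs = native_neighbor_search(data, queries, radius=0.1)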
Other resolution-invariant operations
Positional embedding layers: see the Embeddings section above.
Resolution-dependent domain padding, soft-gating, and skip connections:
- DomainPadding: applies domain padding scaled automatically to the input's resolution.
- SoftGating: applies soft-gating by weighting the channels of the given input.
- skip_connection: a wrapper for several types of skip connections.
Model Dispatching
We provide a utility function to create model instances from a configuration; it also validates the parameters it receives (see the sketch after the list below).
- get_model: returns an instantiated model for the given config.
- available_models: lists the available neural operators.
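A minimal dispatching sketch; the config layout below is illustrative (in practice, configs follow the library's own configuration files, and depending on the version may need to be the library's config object rather than a plain dict):

    from neuralop import get_model

    # An architecture name plus that architecture's keyword arguments
    config = {
        "arch": "fno",
        "fno": {"n_modes": (16, 16), "hidden_channels": 64,
                "in_channels": 3, "out_channels": 1},
    }
    model = get_model(config)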
Training
We provide functionality that automates the boilerplate of training a machine learning model to minimize a loss function on a dataset (see the sketch after the list below):
- Trainer: a general Trainer class to train neural operators on given datasets.
- IncrementalFNOTrainer: subclasses the Trainer to implement the incremental-FNO logic described in [1].
- LpLoss: the Lp norm between two discretized d-dimensional functions.
- H1Loss: the H1 Sobolev norm between two discretized d-dimensional functions.
- MSELoss: the absolute mean-squared error between two tensors.
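A minimal training sketch following the library's standard example flow; the dataset sizes and optimizer settings are illustrative:

    import torch
    from neuralop import Trainer, LpLoss, H1Loss
    from neuralop.models import FNO
    from neuralop.data.datasets import load_darcy_flow_small

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Small Darcy-flow dataset shipped with the library
    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, batch_size=16,
        test_resolutions=[16, 32], n_tests=[50, 50],
        test_batch_sizes=[16, 16],
    )

    model = FNO(n_modes=(16, 16), hidden_channels=32,
                in_channels=1, out_channels=1).to(device)

    optimizer = torch.optim.AdamW(model.parameters(), lr=8e-3, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

    l2loss = LpLoss(d=2, p=2)  # L2 on 2D functions
    h1loss = H1Loss(d=2)       # H1 Sobolev norm on 2D functions

    trainer = Trainer(model=model, n_epochs=20, device=device,
                      data_processor=data_processor, verbose=True)
    trainer.train(train_loader=train_loader, test_loaders=test_loaders,
                  optimizer=optimizer, scheduler=scheduler,
                  regularizer=False, training_loss=h1loss,
                  eval_losses={"l2": l2loss, "h1": h1loss})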
Data
In neuralop.data, we provide APIs for standardizing PDE datasets (.datasets) and transforming raw data into model inputs (.transforms).
We also ship a small dataset for testing:
- load_darcy_flow_small: loads a small Darcy flow dataset standardized for testing.
DataProcessors
Much like torchvision's datasets module in the PyTorch ecosystem, our data module also includes utilities to transform data from its raw form into the form expected by models and loss functions:
- DefaultDataProcessor: a simple processor to pre/post-process data before training or inference with a model.
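As a usage sketch, the small Darcy-flow loader above returns a ready-made processor; preprocess normalizes a sample before it reaches the model, and postprocess undoes the output normalization:

    from neuralop.data.datasets import load_darcy_flow_small

    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, batch_size=16,
        test_resolutions=[16, 32], n_tests=[50, 50],
        test_batch_sizes=[16, 16],
    )

    sample = train_loader.dataset[0]
    sample = data_processor.preprocess(sample, batched=False)
    # run the model on sample["x"], then undo output normalization:
    # out, sample = data_processor.postprocess(out, sample)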