API reference
neuralop: Neural Operators in Python
Models
In neuralop.models, we provide neural operator models that you can use directly in your applications.
FNO
We provide a general Fourier Neural Operator (FNO) that supports most use cases. It works in any dimension; the dimension is inferred from n_modes, a tuple giving the number of modes to keep in the Fourier domain for each dimension.
- FNO: N-Dimensional Fourier Neural Operator.
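A minimal usage sketch, following the library's quickstart (the channel counts here are placeholders; adapt them to your problem):

```python
import torch
from neuralop.models import FNO

# 2D operator: keep 16 Fourier modes per dimension.
model = FNO(n_modes=(16, 16), hidden_channels=64,
            in_channels=3, out_channels=1)

x = torch.randn(4, 3, 64, 64)  # (batch, channels, height, width)
y = model(x)                   # -> (4, 1, 64, 64): spatial resolution is preserved
```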
Tensorized FNO (TFNO)
- TFNO: Tucker Tensorized Fourier Neural Operator.
Spherical Fourier Neural Operators (SFNO)
- SFNO: N-Dimensional Spherical Fourier Neural Operator.
Geometry-Informed Neural Operators (GINO)
- GINO: Geometry-Informed Neural Operator, which learns a mapping between functions presented over arbitrary coordinate meshes.
Local Neural Operators (LocalNO)
- LocalNO: N-Dimensional Local Fourier Neural Operator.
U-shaped Neural Operators (U-NO)
- UNO: U-Shaped Neural Operator.
Uncertainty Quantification Neural Operators (UQNO)
- UQNO: Uncertainty Quantification Neural Operator.
Fourier/Geometry Neural Operators (FNOGNO)
- FNOGNO: Fourier/Geometry Neural Operator, which maps from a regular N-d grid to an arbitrary query point cloud.
Codomain Attention Neural Operators (CODANO)
- CODANO: Codomain Attention Neural Operator (CoDA-NO).
Layers
In addition to the full architectures, neuralop.layers provides building blocks, in the form of PyTorch layers, that you can use to build your own models:
FNO Blocks
- FNOBlocks implements a sequence of Fourier layers.
Fourier Convolutions
- SpectralConv implements the spectral convolution component of a Fourier layer.
Spherical Convolutions
- Spherical convolution for the SFNO.
Graph convolutions and kernel integration
- Graph Neural Operator layer.
- Integral kernel transform (GNO).
Local NO Blocks
- Local Neural Operator blocks with localized integral and differential kernels.
Local Integral/Differential Convolutions
- Finite-difference convolution layer.
Discrete-Continuous (DISCO) Convolutions
- Discrete-continuous (DISCO) convolutions on arbitrary 2D grids.
- Transpose variant of discrete-continuous convolutions on arbitrary 2D grids.
- Discrete-continuous (DISCO) convolutions on equidistant 2D grids.
- Transpose discrete-continuous (DISCO) convolutions on equidistant 2D grids.
Codomain Attention (Transformer) Blocks
- Co-domain attention blocks (CODALayer).
Channel MLP
- Multi-layer perceptron applied channel-wise across spatial dimensions.
Embeddings
Apply positional embeddings as additional channels on a function:
- GridEmbeddingND applies a simple positional embedding as a regular ND grid.
- GridEmbedding2D applies a simple positional embedding as a regular 2D grid.
- Sinusoidal positional embedding for enriching coordinate inputs with spectral information.
Neighbor search
Find neighborhoods on arbitrary coordinate meshes:
- Neighborhood search between two arbitrary coordinate meshes.
- Native PyTorch implementation of a neighborhood search between two arbitrary coordinate meshes.
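A sketch of how a radius-based neighbor search is typically used; the import path and the CSR-style return keys below are assumptions based on the library's GNO code, so check the API reference for the exact signature:

```python
import torch
from neuralop.layers.neighbor_search import native_neighbor_search  # assumed path

data = torch.rand(1000, 3)    # source point cloud
queries = torch.rand(200, 3)  # query points

# Find all source points within the given radius of each query point,
# returned in CSR form (assumed keys: flat indices plus per-query offsets).
nbrs = native_neighbor_search(data=data, queries=queries, radius=0.1)
idx = nbrs["neighbors_index"]          # flat indices into `data`
splits = nbrs["neighbors_row_splits"]  # offsets delimiting each query's neighbors
```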
Domain Padding
- Applies domain padding scaled automatically to the input's resolution.
Skip Connections
- A wrapper for several types of skip connections.
Normalization Layers
- Adaptive Instance Normalization (AdaIN) layer for style transfer in neural operators.
- Dimension-agnostic instance normalization layer for neural operators.
- Dimension-agnostic batch normalization layer for neural operators.
Complex-value Support
Functionality for handling complex-valued spatial data:
- Wrapper class that converts a standard nn.Module that operates on real data into a module that operates on complex-valued spatial data.
Model Dispatching
We provide a utility function to create model instances from a configuration; it also performs sanity checks on the parameters it receives.
- Returns an instantiated model for the given config.
- Lists the available neural operators.
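A hypothetical sketch of dispatching; the exact configuration schema (plain dict vs. the library's config objects, key names) varies by version, so treat the layout below as an assumption:

```python
from neuralop import get_model  # assumed import path

# Assumed layout: a top-level "arch" key naming the model, plus an
# arch-specific sub-config holding its hyperparameters.
config = {
    "arch": "fno",
    "fno": {
        "n_modes": (16, 16),
        "hidden_channels": 64,
        "in_channels": 3,
        "out_channels": 1,
    },
}
model = get_model(config)
```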
Training
We provide functionality that automates the boilerplate code associated with training a machine learning model to minimize a loss function on a dataset:
- A general Trainer class to train neural operators on given datasets.
- Trainer for the Incremental Fourier Neural Operator (iFNO).
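A training sketch following the library's example scripts; keyword names may differ slightly across versions, and `model`, `train_loader`, and `test_loaders` are assumed to exist (e.g. built as in the Datasets section below):

```python
import torch
from neuralop import Trainer
from neuralop.losses import LpLoss, H1Loss

optimizer = torch.optim.AdamW(model.parameters(), lr=8e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

device = "cuda" if torch.cuda.is_available() else "cpu"
trainer = Trainer(model=model, n_epochs=20, device=device, verbose=True)
trainer.train(
    train_loader=train_loader,
    test_loaders=test_loaders,
    optimizer=optimizer,
    scheduler=scheduler,
    training_loss=H1Loss(d=2),             # train on the H1 Sobolev norm
    eval_losses={"l2": LpLoss(d=2, p=2)},  # report relative L2 at eval time
)
```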
Training Utilities
- A convenience function to initialize the device, set up torch settings, and check multi-grid and other values.
Multi-Grid Patching
- MultigridPatching2D wraps a model in multi-grid domain decomposition and patching.
Loss Functions
Data Losses
data_losses.py contains code to compute standard data objective functions for training neural operators. By default, losses expect arguments y_pred (model predictions) and y (ground truth).
- LpLoss provides the Lp norm between two discretized d-dimensional functions.
- H1 Sobolev norm between two d-dimensional discretized functions.
- Hdiv Sobolev norm between two d-dimensional discretized functions.
- PointwiseQuantileLoss computes quantile loss.
- Mean-squared L2 error between two tensors.
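For example, constructing and calling two common losses, following the library's example scripts (constructor arguments as documented there):

```python
import torch
from neuralop.losses import LpLoss, H1Loss

l2loss = LpLoss(d=2, p=2)  # Lp norm over d=2 spatial dimensions
h1loss = H1Loss(d=2)       # H1 Sobolev norm

y_pred = torch.randn(8, 1, 32, 32)  # model predictions
y = torch.randn(8, 1, 32, 32)       # ground truth
print(l2loss(y_pred, y), h1loss(y_pred, y))
```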
Equation Losses
Physics-informed loss functions:
- Computes the loss for Burgers' equation.
- Computes the loss for initial value problems.
- PoissonInteriorLoss computes the loss on the interior points of model outputs according to Poisson's equation in 2D: ∇·((1 + 0.1u^2)∇u(x)) = f(x).
- PoissonEqnLoss computes a weighted sum of an equation loss on the interior points of a model's output and a boundary loss on the boundary points.
Meta Losses
Meta-losses for weighting composite loss functions.
- Computes an average or weighted sum of given losses.
- Relative Loss Balancing with Random Lookback (ReLoBRaLo) algorithm for adaptive loss weighting.
- SoftAdapt algorithm for adaptive loss weighting and aggregation.
Differentiation
Numerical differentiation utilities:
- A unified class for computing Fourier/spectral derivatives in 1D, 2D, or 3D.
- A unified class for computing finite differences in 1D, 2D, or 3D.
- Finite-difference approximation of first-order derivatives on unstructured point clouds.
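To illustrate the idea behind spectral differentiation in plain PyTorch (this is the underlying technique, not the library's class API): a periodic function's derivative is obtained by multiplying its Fourier coefficients by ik.

```python
import torch

# Differentiate f(x) = sin(x) on [0, 2*pi) and compare with cos(x).
n = 64
x = torch.arange(n) * (2 * torch.pi / n)
f = torch.sin(x)

k = 1j * torch.fft.fftfreq(n, d=1.0 / n)  # ik multipliers (integer wavenumbers)
df = torch.fft.ifft(k * torch.fft.fft(f)).real

print(torch.allclose(df, torch.cos(x), atol=1e-4))  # True
```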
Spectral Projection
Spectral projection utilities for enforcing physical constraints:
- Applies a spectral projection layer to make a velocity field divergence-free.
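A sketch of the underlying math (not the library's layer API): in Fourier space, subtracting the component of the velocity along each wavevector (the Leray projection) yields a divergence-free field.

```python
import torch

def divergence_free_2d(u: torch.Tensor) -> torch.Tensor:
    """Project a velocity field u of shape (2, H, W) onto its
    divergence-free part via the Fourier-space Leray projection."""
    uh = torch.fft.fft2(u)                 # transform both velocity components
    h, w = u.shape[-2:]
    kx = torch.fft.fftfreq(h).reshape(h, 1)
    ky = torch.fft.fftfreq(w).reshape(1, w)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                         # avoid 0/0 at the mean (k = 0) mode
    div = kx * uh[0] + ky * uh[1]          # k . u_hat
    uh[0] = uh[0] - kx * div / k2          # subtract the curl-free component
    uh[1] = uh[1] - ky * div / k2
    return torch.fft.ifft2(uh).real
```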
Data
In neuralop.data, we provide APIs for standardizing PDE datasets (.datasets) and transforming raw data into model inputs (.transforms).
Datasets
We ship a small dataset for testing, and we provide downloadable datasets for Darcy-Flow, Navier-Stokes, and Car-CFD:
DarcyDataset stores data generated according to Darcy's Law. |
|
NavierStokesDataset stores data generated according to the 2d incompressible Navier-Stokes equations. |
|
Processed version of the Car-CFD dataset. |
|
Burgers1dTimeDataset wraps data from the viscous Burger's equation in 1 spatial dimension. |
|
Legacy function to load mini Burger's equation dataset |
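For example, loading the small Darcy-Flow dataset as in the library's example scripts (argument names taken from those examples; check the docstring for the full signature):

```python
from neuralop.data.datasets import load_darcy_flow_small

train_loader, test_loaders, data_processor = load_darcy_flow_small(
    n_train=1000, batch_size=32,
    test_resolutions=[16, 32], n_tests=[100, 50],
    test_batch_sizes=[32, 32],
)
```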
Note
Additional datasets are available with optional dependencies:
- The Well Datasets: a large-scale collection of diverse physics simulations (requires the the_well package).
- Spherical Shallow Water Equations: for spherical coordinate systems (requires the torch_harmonics package).
These datasets are conditionally imported and may not be available depending on your installation.
DataProcessors
Much like PyTorch's torchvision.datasets module, our data module also includes utilities to transform data from its raw form into the form expected by models and loss functions:
- DefaultDataProcessor is a simple processor to pre/post-process data before training or running inference with a model.
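A sketch of the processor pattern as used in the library's training loop; the preprocess/postprocess method names follow DefaultDataProcessor, while `model`, `sample`, and `device` are placeholders:

```python
# Move the processor's normalizers to the right device once.
data_processor = data_processor.to(device)

# Before the forward pass: normalize the batch and move it to the device.
sample = data_processor.preprocess(sample)
out = model(sample["x"])

# After the forward pass: un-normalize predictions for evaluation.
out, sample = data_processor.postprocess(out, sample)
```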
Normalizers
Data normalization utilities:
- UnitGaussianNormalizer normalizes data to zero mean and unit standard deviation.
- DictUnitGaussianNormalizer composes DictTransform and UnitGaussianNormalizer to normalize different fields of a model output tensor to Gaussian distributions with mean 0 and unit variance.
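A hedged sketch of the normalizer workflow; the fit/transform/inverse_transform method names and the dim argument are assumptions based on the library's data-transform code:

```python
import torch
from neuralop.data.transforms.normalizers import UnitGaussianNormalizer  # assumed path

x = torch.randn(100, 1, 32, 32) * 3.0 + 5.0   # synthetic, non-normalized data

normalizer = UnitGaussianNormalizer(dim=[0, 2, 3])  # reduce over batch + spatial dims
normalizer.fit(x)

x_n = normalizer.transform(x)               # approximately zero mean, unit std
x_back = normalizer.inverse_transform(x_n)  # recovers the original scale
```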
Utility Functions
- Returns the total number of parameters of a PyTorch model.
- Returns the number of parameters (elements) in a single tensor, optionally along certain dimensions only.
- Computes the spectrum of a 2D signal using the Fast Fourier Transform (FFT).
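For example, counting a model's parameters (import path assumed from the library's example scripts):

```python
from neuralop import count_model_params  # assumed import path
from neuralop.models import FNO

model = FNO(n_modes=(16, 16), hidden_channels=64,
            in_channels=3, out_channels=1)
print(f"The model has {count_model_params(model)} parameters.")
```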