API reference
neuralop: Neural Operators in Python
Models
In neuralop.models, we provide neural operator models that you can use directly in your applications.
FNO
We provide a general Fourier Neural Operator (FNO) that supports most use cases.
We have a generic interface that works for any dimension; the dimension is inferred from n_modes, a tuple giving the number of modes to keep in the Fourier domain for each dimension.
- FNO: N-dimensional Fourier Neural Operator

We also have dimension-specific classes:

- FNO1d: 1D Fourier Neural Operator
- FNO2d: 2D Fourier Neural Operator
- FNO3d: 3D Fourier Neural Operator
Tensorized FNO (TFNO)
N-D version:

- TFNO: N-dimensional Tensorized Fourier Neural Operator

Dimension-specific classes:

- TFNO1d: 1D Tensorized Fourier Neural Operator
- TFNO2d: 2D Tensorized Fourier Neural Operator
- TFNO3d: 3D Tensorized Fourier Neural Operator
Spherical Fourier Neural Operators (SFNO)
- SFNO: N-dimensional Spherical Fourier Neural Operator
U-shaped Neural Operators (U-NO)
- UNO: U-shaped Neural Operator [1]
Layers
In addition to the full architectures, neuralop.layers provides building blocks, in the form of PyTorch layers, that you can use to build your own models:
Spectral convolutions (in the Fourier domain):

- SpectralConv: generic N-dimensional spectral convolution
- SpectralConv1d: 1D spectral convolution
- SpectralConv2d: 2D spectral convolution
- SpectralConv3d: 3D spectral convolution
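To illustrate what these layers do, here is a minimal, self-contained sketch of a 1D spectral convolution in plain PyTorch (not the library's implementation): transform to the Fourier domain, multiply the lowest n_modes coefficients by learned complex weights, and transform back.

```python
import torch

def spectral_conv1d(x, weights, n_modes):
    """Sketch of a 1D spectral convolution.
    x: (batch, in_channels, length) real tensor.
    weights: (in_channels, out_channels, n_modes) complex tensor.
    """
    x_ft = torch.fft.rfft(x)            # to the Fourier domain
    out_ft = torch.zeros_like(x_ft)
    # Keep only the lowest n_modes frequencies and contract
    # over the input-channel dimension.
    out_ft[:, :, :n_modes] = torch.einsum(
        'bim,iom->bom', x_ft[:, :, :n_modes], weights)
    return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

batch, channels, length, n_modes = 2, 3, 64, 8
w = torch.randn(channels, channels, n_modes, dtype=torch.cfloat)
x = torch.randn(batch, channels, length)
y = spectral_conv1d(x, w, n_modes)
```

Because the weights act only on the retained modes, the same layer can be applied to inputs of any resolution — the key property behind the FNO's discretization invariance.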
Spherical convolutions (using spherical harmonics):

- SphericalConv: spherical convolution, the base class for the SFNO [1]
Utility layers:

- DomainPadding: applies domain padding, scaled automatically to the input's resolution
- SoftGating: applies soft-gating by weighting the channels of the given input
- skip_connection: a wrapper for several types of skip connections
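To illustrate the soft-gating idea, here is a minimal sketch (not the library's implementation): each channel is scaled by a learned weight, initialized to one so the layer starts out as the identity.

```python
import torch
from torch import nn

class SoftGating(nn.Module):
    """Sketch of channel soft-gating: scale each channel of the
    input by a learned per-channel weight."""
    def __init__(self, channels):
        super().__init__()
        # Initialized to ones, so the module is initially the identity.
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        return self.weight * x

gate = SoftGating(3)
x = torch.randn(2, 3, 8, 8)
y = gate(x)
```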
Model Dispatching
We provide a utility function to create model instances from a configuration. It has the advantage of doing some checks on the parameters it receives.
- get_model: returns an instantiated model for the given config
- available_models: lists the available neural operators
Training
We provide functionality that automates the boilerplate code associated with training a machine learning model to minimize a loss function on a dataset:
- Trainer: automates the training loop for a given model, dataset, and loss
The general case (assuming no modifications) is covered above. To implement domain-specific logic in your training loop while still using the automation and logging provided by a Trainer, we provide a Callback class and several examples of common domain-specific Callbacks.
Callbacks store all non-essential logic required to run specific training scripts.
The callbacks in this module follow the form and logic of callbacks in PyTorch Lightning (https://lightning.ai/docs/pytorch/stable).
- Callback: base callback class
- BasicLoggerCallback: implements the simple logging expected when passing verbose to a Trainer
Datasets
We ship a small dataset for testing:
- load_darcy_flow_small: loads a small Darcy-Flow dataset
Much like torchvision.datasets, our datasets module also includes utilities, such as data processors and normalizers, that transform data from its raw form into the form expected by models and loss functions.