API reference

neuralop: Neural Operators in Python

Models

In neuralop.models, we provide neural operator models that you can use directly in your applications.

FNO

We provide a general Fourier Neural Operator (FNO) that supports most use cases.

We have a generic interface that works for any dimension; the dimension is inferred from n_modes, a tuple with the number of modes to keep in the Fourier domain for each dimension (a minimal example follows the class entry below).

FNO(*args, **kwargs)

N-Dimensional Fourier Neural Operator
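
For example, a minimal sketch of instantiating and calling the generic class (all hyperparameter values here are illustrative, not defaults):

    import torch
    from neuralop.models import FNO

    # Two entries in n_modes -> a 2D operator
    model = FNO(n_modes=(16, 16), hidden_channels=32,
                in_channels=3, out_channels=1)

    x = torch.randn(4, 3, 64, 64)   # (batch, channels, height, width)
    y = model(x)                    # -> (4, 1, 64, 64)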

We also have dimension-specific classes:

FNO1d(*args, **kwargs)

1D Fourier Neural Operator

FNO2d(*args, **kwargs)

2D Fourier Neural Operator

FNO3d(*args, **kwargs)

3D Fourier Neural Operator

Tensorized FNO (TFNO)

N-D version:

TFNO(*args, **kwargs)

N-Dimensional Tensorized Fourier Neural Operator

Dimension-specific classes:

TFNO1d(*args, **kwargs)

1D Tensorized Fourier Neural Operator

TFNO2d(*args, **kwargs)

2D Tensorized Fourier Neural Operator

TFNO3d(*args, **kwargs)

3D Tensorized Fourier Neural Operator
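
TFNO shares the FNO interface and additionally stores its weights in factorized form. A sketch, where the factorization and rank values are illustrative:

    from neuralop.models import TFNO

    # rank=0.05 requests a Tucker factorization at roughly 5% of the
    # dense parameter count
    model = TFNO(n_modes=(16, 16, 16), hidden_channels=32,
                 in_channels=1, out_channels=1,
                 factorization='tucker', rank=0.05)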

Spherical Fourier Neural Operators (SFNO)

SFNO(*args, **kwargs)

N-Dimensional Spherical Fourier Neural Operator
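
SFNO follows the same calling convention; a sketch assuming inputs sampled on an equiangular (latitude, longitude) grid, with illustrative shapes:

    import torch
    from neuralop.models import SFNO

    model = SFNO(n_modes=(32, 64), hidden_channels=32,
                 in_channels=3, out_channels=3)

    x = torch.randn(4, 3, 32, 64)   # (batch, channels, lat, lon)
    y = model(x)                    # -> (4, 3, 32, 64)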

U-shaped Neural Operators (U-NO)

UNO(in_channels, out_channels, hidden_channels)

U-Shaped Neural Operator [1]

Layers

In addition to the full architectures, neuralop.layers provides building blocks, in the form of PyTorch layers, that you can use to build your own models:

Spectral convolutions (in the Fourier domain):

SpectralConv(in_channels, out_channels, n_modes)

Generic N-Dimensional Spectral Convolution

SpectralConv1d(in_channels, out_channels, ...)

1D Spectral Conv

SpectralConv2d(in_channels, out_channels, ...)

2D Spectral Conv, see neuralop.layers.SpectralConv for the general case

SpectralConv3d(in_channels, out_channels, ...)

3D Spectral Conv, see neuralop.layers.SpectralConv for the general case
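
As a sketch, a spectral convolution can also be used standalone; the import path below is an assumption to check against your installed version:

    import torch
    from neuralop.layers.spectral_convolution import SpectralConv

    # Keep 16 Fourier modes per spatial dimension
    conv = SpectralConv(in_channels=3, out_channels=8, n_modes=(16, 16))

    x = torch.randn(2, 3, 64, 64)
    y = conv(x)   # -> (2, 8, 64, 64)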

Spherical convolutions (using spherical harmonics):

SphericalConv(in_channels, out_channels, n_modes)

Spherical Convolution, base class for the SFNO [1]

Automatically apply resolution-dependent domain padding:

DomainPadding(domain_padding[, ...])

Applies domain padding scaled automatically to the input's resolution
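
A sketch of the intended pad/unpad round trip; the import path and method names are assumptions to verify against the class documentation:

    import torch
    from neuralop.layers.padding import DomainPadding

    # Pad each spatial dimension by 25% of its resolution, then undo it
    padding = DomainPadding(domain_padding=0.25)

    x = torch.randn(2, 3, 64, 64)
    x_padded = padding.pad(x)
    x_restored = padding.unpad(x_padded)   # back to (2, 3, 64, 64)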

Other building blocks:

SoftGating(in_features[, out_features, ...])

Applies soft-gating by weighting the channels of the given input

skip_connection(in_features, out_features[, ...])

A wrapper for several types of skip connections.

Model Dispatching

We provide a utility function to create model instances from a configuration; it also validates the parameters it receives.

get_model(config)

Returns an instantiated model for the given config

available_models()

List the available neural operators
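
A hedged sketch of the dispatching flow; the configuration schema shown (an 'arch' key naming the model, plus a sub-config of keyword arguments) and the top-level imports are assumptions to verify against the get_model documentation:

    from neuralop import available_models, get_model

    print(available_models())   # list the supported architectures

    # Hypothetical configuration; depending on the installed version,
    # get_model may expect an attribute-style config object rather
    # than a plain dict
    config = {
        'arch': 'fno',
        'fno': {'n_modes': (16, 16), 'hidden_channels': 32,
                'in_channels': 3, 'out_channels': 1},
    }
    model = get_model(config)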

Training

We provide a Trainer class that automates the boilerplate code of training a machine learning model to minimize a loss function on a dataset:

Trainer(*, model, n_epochs[, wandb_log, ...])

Automates training a model on a dataset to minimize a given loss function
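
A hedged sketch of a typical run; the return values of load_darcy_flow_small and the keyword arguments of train() are assumptions to verify against your installed version:

    import torch
    from neuralop import Trainer, LpLoss
    from neuralop.models import FNO
    from neuralop.datasets import load_darcy_flow_small

    # Small dataset shipped with the library (see Datasets below)
    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, n_tests=[50, 50], batch_size=4,
        test_batch_sizes=[4, 4], test_resolutions=[16, 32])

    # in_channels must match the (possibly grid-augmented) inputs
    model = FNO(n_modes=(16, 16), hidden_channels=32,
                in_channels=1, out_channels=1)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    l2loss = LpLoss(d=2)

    trainer = Trainer(model=model, n_epochs=20)
    trainer.train(train_loader=train_loader,
                  test_loaders=test_loaders,
                  optimizer=optimizer,
                  scheduler=scheduler,
                  training_loss=l2loss,
                  eval_losses={'l2': l2loss})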

The Trainer above covers the general, unmodified case. To implement domain-specific logic in your training loop while keeping the Trainer's automation and logging, we provide a Callback class and several common domain-specific callbacks.

Callbacks store all non-essential logic required to run specific training scripts.

The callbacks in this module follow the form and logic of callbacks in PyTorch Lightning (https://lightning.ai/docs/pytorch/stable).

Callback()

Base callback class.
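
For example, a custom callback can subclass Callback and override one of its hooks. The import path, hook name, and signature below are assumptions modeled on the PyTorch Lightning convention; check the Callback base class for the exact set of hooks:

    from neuralop.training.callbacks import Callback

    class PrintEpochCallback(Callback):
        """Hypothetical callback that reports progress each epoch."""

        def on_epoch_end(self, epoch, *args, **kwargs):
            print(f"finished epoch {epoch}")

A list of callbacks can then be passed to the Trainer, e.g. Trainer(model=model, n_epochs=20, callbacks=[PrintEpochCallback()]).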

BasicLoggerCallback([wandb_kwargs])

Callback that implements simple logging functionality expected when passing verbose to a Trainer

CheckpointCallback(save_dir[, save_best, ...])

Callback that saves model checkpoints to save_dir during training

Datasets

We ship a small dataset for testing:

load_darcy_flow_small(n_train, n_tests, ...)

Loads a small Darcy-Flow dataset
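
A sketch; the argument and return values are assumptions to verify against the function's docstring (older releases return an output encoder in place of a data processor):

    from neuralop.datasets import load_darcy_flow_small

    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, n_tests=[50, 50], batch_size=4,
        test_batch_sizes=[4, 4], test_resolutions=[16, 32])

    # Samples are assumed to be dicts with 'x' and 'y' entries
    batch = next(iter(train_loader))
    print(batch['x'].shape, batch['y'].shape)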

Much like PyTorch’s torchvision.datasets module, our Datasets module also includes utilities to transform data from its raw form into the form expected by models and loss functions:

DefaultDataProcessor([in_normalizer, ...])

Default processor that transforms raw data, e.g. by normalizing inputs and outputs, into the form expected by the model

MGPatchingDataProcessor(model, levels, ...)

Processor that applies multi-grid patching to the data for the given model
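
A hedged sketch of the intended pattern, assuming the preprocess/postprocess method names (verify against the class documentation):

    from neuralop.datasets import load_darcy_flow_small

    train_loader, test_loaders, data_processor = load_darcy_flow_small(
        n_train=100, n_tests=[50, 50], batch_size=4,
        test_batch_sizes=[4, 4], test_resolutions=[16, 32])

    batch = next(iter(train_loader))

    # preprocess normalizes (and may augment) a sample before the
    # forward pass; postprocess undoes output normalization after it
    batch = data_processor.preprocess(batch)
    # out = model(batch['x'])                       # forward pass here
    # out, batch = data_processor.postprocess(out, batch)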