neuralop.training.IncrementalFNOTrainer

class neuralop.training.IncrementalFNOTrainer(model: Module, n_epochs: int, wandb_log: bool = False, device: str = 'cpu', mixed_precision: bool = False, data_processor: Module = None, eval_interval: int = 1, log_output: bool = False, use_distributed: bool = False, verbose: bool = False, incremental_grad: bool = False, incremental_loss_gap: bool = False, incremental_grad_eps: float = 0.001, incremental_buffer: int = 5, incremental_max_iter: int = 1, incremental_grad_max_iter: int = 10, incremental_loss_eps: float = 0.001)

Trainer for the Incremental Fourier Neural Operator (iFNO)

Implements the iFNO approach from [1], which progressively increases the number of Fourier modes during training. This class supports two algorithms (see the usage sketch after this list):

  1. Loss Gap (incremental_loss_gap=True): Increases modes when loss improvement becomes too small

  2. Gradient-based (incremental_grad=True): Uses explained variance of gradient strengths to determine when more modes are needed
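
A minimal construction sketch, assuming the FNO constructor arguments shown here; verify the exact signatures against your neuralop version:

    from neuralop.models import FNO
    from neuralop.training import IncrementalFNOTrainer

    # Start with few Fourier modes; the trainer can grow them during training.
    model = FNO(n_modes=(4, 4), hidden_channels=32,
                in_channels=1, out_channels=1)

    trainer = IncrementalFNOTrainer(
        model=model,
        n_epochs=100,
        incremental_loss_gap=True,   # enable exactly one of the two algorithms
        incremental_loss_eps=0.001,  # loss-improvement threshold
    )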

Parameters:
model : nn.Module

FNO or TFNO model to train.

n_epochs : int

Total number of training epochs.

incremental_grad : bool, optional

Use gradient-based algorithm, by default False.

incremental_loss_gap : bool, optional

Use loss gap algorithm, by default False.

incremental_grad_eps : float, optional

Explained variance threshold for gradient algorithm, by default 0.001.

incremental_loss_eps : float, optional

Loss improvement threshold for loss gap algorithm, by default 0.001.

incremental_grad_max_iter : int, optional

Number of iterations over which gradients are accumulated before statistics are computed, by default 10.

incremental_buffer : int, optional

Buffer size for gradient accumulation, by default 5.

Methods

grad_explained()

Gradient-based explained variance algorithm for incremental learning.

incremental_update([loss])

Main incremental update function that determines which algorithm to run.

loss_gap(loss)

Loss gap algorithm for incremental learning.

train_one_epoch(epoch, train_loader, ...)

Train the model for one epoch with incremental learning.

Notes

  • Exactly one of the two algorithms must be enabled (not both)

  • The gradient-based algorithm requires multiple iterations to accumulate gradient statistics

  • Both algorithms respect the maximum number of modes defined in the FNO model

References

[1]

George, R., Zhao, J., Kossaifi, J., Li, Z., and Anandkumar, A. (2024) “Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs”. TMLR, https://openreview.net/pdf?id=xI6cPQObp0.

incremental_update(loss=None)

Main incremental update function that determines which algorithm to run.

This method is called after each training epoch to potentially increase the number of Fourier modes in the FNO model based on the selected incremental algorithm.

Parameters:
loss : float or torch.Tensor, optional

Current training loss value. Required for loss gap algorithm. If None and loss gap algorithm is enabled, no update will occur.
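
An illustrative sketch of the dispatch behavior described above; the attribute names are assumptions for illustration, not the library's verbatim source:

    # Runs after each epoch and delegates to whichever algorithm is enabled.
    def run_incremental_update(trainer, loss=None):
        if trainer.incremental_loss_gap:
            if loss is not None:        # the loss-gap rule requires a loss value
                trainer.loss_gap(loss)
        elif trainer.incremental_grad:
            trainer.grad_explained()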

train_one_epoch(epoch, train_loader, training_loss)

Train the model for one epoch with incremental learning.

Extends the base trainer by adding incremental learning updates after each epoch, which may increase the number of Fourier modes based on training progress.

Parameters:
epoch : int

Current epoch number.

train_loader : torch.utils.data.DataLoader

DataLoader containing training data.

training_loss : callable

Loss function to use for training.

Returns:
tuple

(train_err, avg_loss, avg_lasso_loss, epoch_train_time)
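
A hedged illustration of the per-epoch call and its return values, assuming a prepared train_loader and the trainer built earlier; the base trainer's own train() loop normally drives this, and the LpLoss import path should be verified against your neuralop version:

    from neuralop import LpLoss

    l2loss = LpLoss(d=2, p=2)
    for epoch in range(100):
        train_err, avg_loss, avg_lasso_loss, epoch_time = trainer.train_one_epoch(
            epoch, train_loader, training_loss=l2loss
        )
        print(f"epoch {epoch}: avg_loss={avg_loss:.4f} ({epoch_time:.1f}s)")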

loss_gap(loss)

Loss gap algorithm for incremental learning.

Monitors training loss convergence and increases the number of Fourier modes when the loss improvement becomes too small, which helps escape local minima by increasing model capacity.

Algorithm (a standalone sketch follows the parameter list):

  1. Track training losses over epochs

  2. Compute the difference between consecutive losses

  3. If the difference falls below the threshold, increase the number of modes by 1

  4. Update the FNO blocks with the new mode count

Parameters:
loss : float or torch.Tensor

Current epoch’s training loss value.
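
A standalone sketch of the loss-gap rule above; the function and variable names are hypothetical:

    # `loss_history` is the list of per-epoch losses the trainer tracks (step 1).
    def loss_gap_rule(loss_history, n_modes, eps=0.001, max_modes=16):
        if len(loss_history) >= 2:
            gap = abs(loss_history[-1] - loss_history[-2])     # step 2
            if gap < eps:                                      # step 3
                n_modes = min(n_modes + 1, max_modes)          # capped at model max
        return n_modes                                         # step 4 applies this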

grad_explained()

Gradient-based explained variance algorithm for incremental learning.

Analyzes gradient patterns of the FNO weights and computes the explained variance of per-mode gradient strengths to determine when additional Fourier modes are needed.

Algorithm:

  1. Accumulate gradients over multiple iterations

  2. Compute the Frobenius norm of the gradients for each Fourier mode

  3. Compute the explained variance of the gradient strengths

  4. If the explained variance falls below the threshold, increase the number of modes

  5. Reset accumulation and update the model
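
A hedged sketch of steps 2–4; the helper names and the exact explained-variance formula are assumptions for illustration. `grads` stacks the accumulated spectral-weight gradients with the Fourier-mode axis first (step 1 happens elsewhere):

    import torch

    def explained_variance(strengths, k):
        # Share of the variance in per-mode gradient strengths that remains
        # captured when modes beyond the first k are zeroed out.
        truncated = strengths.clone()
        truncated[k:] = 0.0
        return 1.0 - torch.var(strengths - truncated) / torch.var(strengths)

    def should_add_mode(grads, k, eps=0.001):
        # Step 2: Frobenius norm of the accumulated gradient for each mode.
        strengths = torch.linalg.norm(grads.flatten(1), dim=1)
        # Steps 3-4: per the rule above, add a mode when the explained
        # variance of the current k modes falls below the threshold.
        return bool(explained_variance(strengths, k) < eps)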