neuralop.training.IncrementalFNOTrainer

class neuralop.training.IncrementalFNOTrainer(model: Module, n_epochs: int, wandb_log: bool = False, device: str = 'cpu', mixed_precision: bool = False, data_processor: Module | None = None, eval_interval: int = 1, log_output: bool = False, use_distributed: bool = False, verbose: bool = False, incremental_grad: bool = False, incremental_loss_gap: bool = False, incremental_grad_eps: float = 0.001, incremental_buffer: int = 5, incremental_max_iter: int = 1, incremental_grad_max_iter: int = 10, incremental_loss_eps: float = 0.001)[source]

IncrementalFNOTrainer subclasses the Trainer to implement the incremental-FNO logic described in [1]: the model's active Fourier modes are grown over the course of training, either when the training loss stops decreasing sufficiently (incremental_loss_gap) or via a gradient-based criterion (incremental_grad).
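A minimal construction sketch follows. Only the trainer keywords come from the signature above; the FNO keyword names (n_modes, max_n_modes, hidden_channels, in_channels, out_channels) are assumptions based on typical neuralop usage and are not documented on this page:

    from neuralop.models import FNO
    from neuralop.training import IncrementalFNOTrainer

    # Start with few active Fourier modes; the incremental scheme can grow
    # them during training, up to max_n_modes (assumed FNO kwarg).
    model = FNO(n_modes=(2, 2), max_n_modes=(16, 16),
                hidden_channels=32, in_channels=1, out_channels=1)

    trainer = IncrementalFNOTrainer(
        model=model,
        n_epochs=100,
        incremental_loss_gap=True,   # grow modes when the loss plateaus
        incremental_loss_eps=1e-3,   # minimum required per-epoch loss decrease
        incremental_buffer=5,
    )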

Methods

grad_explained()

incremental_update([loss])

loss_gap(loss)
    loss_gap increases the model's incremental modes if the epoch's training loss does not decrease sufficiently.

train_one_epoch(epoch, train_loader, ...)
    train_one_epoch inherits from the base Trainer's method.

References

[1] George, R., Zhao, J., Kossaifi, J., Li, Z., and Anandkumar, A. (2024). "Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs". arXiv preprint, https://arxiv.org/pdf/2211.15188

train_one_epoch(epoch, train_loader, training_loss)[source]

train_one_epoch inherits from the base Trainer's method and adds the computation of the incremental-FNO algorithm before returning the training epoch's metrics.

Parameters:
epoch : int
    epoch of training
train_loader : DataLoader
training_loss : callable
    loss function to train with

Returns:
train_err, avg_loss, avg_lasso_loss, epoch_train_time
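For illustration, a hedged sketch of driving epochs manually; in normal use the base Trainer's training loop calls this method itself, and this sketch assumes the trainer's optimizer and related state have already been set up as that loop would do (train_loader and training_loss here are hypothetical stand-ins):

    # Hypothetical manual loop; normally the base Trainer drives this itself.
    for epoch in range(trainer.n_epochs):
        train_err, avg_loss, avg_lasso_loss, epoch_train_time = \
            trainer.train_one_epoch(epoch, train_loader, training_loss)
        print(f"epoch {epoch}: avg_loss={avg_loss:.4e} in {epoch_train_time:.1f}s")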
loss_gap(loss)[source]

loss_gap increases the model's incremental modes if the epoch's training loss does not decrease sufficiently.

Parameters:
loss : float | scalar torch.Tensor
    scalar value of the epoch's training loss
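To make the criterion concrete, here is an illustrative, standalone sketch of the loss-gap test, not the library's exact bookkeeping (in particular, the role of incremental_buffer is not reproduced): grow the active modes whenever the loss fails to drop by more than incremental_loss_eps relative to the previous epoch.

    # Illustrative sketch only; the library's implementation may differ.
    class LossGapSketch:
        def __init__(self, incremental_loss_eps: float = 1e-3):
            self.eps = incremental_loss_eps
            self.prev_loss = None

        def should_grow_modes(self, loss: float) -> bool:
            # Grow when the improvement over the last epoch is below eps.
            grow = (self.prev_loss is not None
                    and (self.prev_loss - loss) <= self.eps)
            self.prev_loss = loss
            return grow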