neuralop.data.transforms.data_processors.DefaultDataProcessor
- class neuralop.data.transforms.data_processors.DefaultDataProcessor(in_normalizer=None, out_normalizer=None)[source]
DefaultDataProcessor is a simple processor to pre- and post-process data before training or running inference with a model; see the usage sketch after the methods table below.
Methods

forward(**data_dict)
    Forward call that wraps a model to perform preprocessing, forward pass, and postprocessing in one call.
postprocess(output, data_dict)
    Postprocess model outputs and data_dict into the format expected by the training or validation loss.
preprocess(data_dict[, batched])
    Preprocess a batch of data into the format expected by the model's forward call.
to(device)
    Move and/or cast the parameters and buffers.
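A minimal usage sketch, assuming no normalizers are attached (both arguments default to None, in which case pre/postprocessing reduce to device handling and passthrough); the tensor shapes below are illustrative only:

>>> import torch
>>> from neuralop.data.transforms.data_processors import DefaultDataProcessor
>>> processor = DefaultDataProcessor(in_normalizer=None, out_normalizer=None)
>>> # a single batch with the required 'x' (inputs) and 'y' (ground truth) keys
>>> data_dict = {"x": torch.randn(8, 1, 64, 64), "y": torch.randn(8, 1, 64, 64)}
>>> data_dict = processor.preprocess(data_dict)
>>> raw_out = torch.randn(8, 1, 64, 64)  # stand-in for a raw model output
>>> out, data_dict = processor.postprocess(raw_out, data_dict)

In practice, in_normalizer and out_normalizer would be normalizer objects fit to the input and output data; they are omitted here since no specific normalizer class is documented on this page.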
- to(device)[source]
Move and/or cast the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples.
Note
This method modifies the module in-place.
- Args:
- device (torch.device): the desired device of the parameters and buffers in this module
- dtype (torch.dtype): the desired floating point or complex dtype of the parameters and buffers in this module
- tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
- memory_format (torch.memory_format): the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
Module: self
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- preprocess(data_dict, batched=True)[source]
Preprocess a batch of data into the format expected by the model's forward call.
By default, the training loss is computed on normalized out and y, and the eval loss is computed on unnormalized out and y.
- Parameters:
- data_dict : dict
input data dictionary with at least keys 'x' (inputs) and 'y' (ground truth)
- batched : bool, optional
whether data contains a batch dim, by default True
- Returns:
- dict
preprocessed data_dict
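For a single sample without a batch dimension, the batched flag can be turned off; a brief sketch reusing the processor from the usage sketch above (shapes illustrative only):

>>> sample = {"x": torch.randn(1, 64, 64), "y": torch.randn(1, 64, 64)}
>>> sample = processor.preprocess(sample, batched=False)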
- postprocess(output, data_dict)[source]
Postprocess model outputs and data_dict into the format expected by the training or validation loss.
By default, the training loss is computed on normalized out and y, and the eval loss is computed on unnormalized out and y.
- Parameters:
- output : torch.Tensor
raw model outputs
- data_dict : dict
dictionary containing a single batch of data
- Returns:
- out, data_dict
postprocessed outputs and data dict
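The returned pair is ready to be fed to a loss. A minimal sketch, reusing the objects from the usage sketch above and a generic PyTorch loss as a stand-in (any loss consuming the prediction and data_dict["y"] works the same way):

>>> import torch.nn.functional as F
>>> out, data_dict = processor.postprocess(raw_out, data_dict)
>>> loss = F.mse_loss(out, data_dict["y"])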
- forward(**data_dict)[source]
Forward call that wraps a model to perform preprocessing, forward pass, and postprocessing in one call.
- Returns:
- output, data_dict
postprocessed data for use in loss
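For orientation, this bundled call is equivalent to chaining preprocess and postprocess around the model's own forward pass. A sketch of that decomposition, assuming the wrapped model consumes data_dict["x"] (an assumption, since model wiring is not documented on this page) and reusing objects from the usage sketch above:

>>> model = torch.nn.Identity()  # hypothetical stand-in for the wrapped model
>>> data_dict = processor.preprocess(data_dict)
>>> out = model(data_dict["x"])
>>> out, data_dict = processor.postprocess(out, data_dict)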