neuralop.data.transforms.normalizers.UnitGaussianNormalizer
- class neuralop.data.transforms.normalizers.UnitGaussianNormalizer(mean=None, std=None, eps=1e-07, dim=None, mask=None)[source]
UnitGaussianNormalizer normalizes data to be zero mean and unit std.
- Parameters:
- mean : torch.Tensor or None
has to include batch-size as a dim of 1, e.g. for tensors of shape (batch_size, channels, height, width), the mean over height and width should have shape (1, channels, 1, 1)
- std : torch.Tensor or None
- eps : float, default is 1e-7
for safe division by the std
- dim : int list, default is None
if not None, dimensions of the data to reduce over to compute the mean and std.
Important
Has to include the batch-size (typically 0). For instance, to normalize data of shape (batch_size, channels, height, width) along batch-size, height and width, pass dim=[0, 2, 3]
- mask : torch.Tensor or None, default is None
If not None, a tensor with the same size as a sample, with value 0 where the data should be ignored and 1 everywhere else
Methods

cpu(): Move all model parameters and buffers to the CPU.
cuda(): Move all model parameters and buffers to the GPU.
forward(x): Define the computation performed at every call.
from_dataset(dataset[, dim, keys, mask]): Return a dictionary of normalizer instances, fitted on the given dataset.
to(device): Move and/or cast the parameters and buffers.

fit
incremental_update_mean_std
inverse_transform
partial_fit
transform
update_mean_std
Notes
The resulting mean will have the same size as the input MINUS the specified dims. If you do not specify any dims, the mean and std will both be scalars.
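Example
A minimal usage sketch (not taken from the library docs; it assumes fit accepts a data tensor directly, in line with the fit, transform and inverse_transform methods listed above):
>>> import torch
>>> from neuralop.data.transforms.normalizers import UnitGaussianNormalizer
>>> x = torch.randn(16, 3, 64, 64)                      # (batch_size, channels, height, width)
>>> normalizer = UnitGaussianNormalizer(dim=[0, 2, 3])  # per-channel statistics
>>> normalizer.fit(x)                                   # mean/std of shape (1, 3, 1, 1)
>>> x_norm = normalizer.transform(x)                    # approximately zero mean, unit std per channel
>>> x_back = normalizer.inverse_transform(x_norm)       # recovers the original data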
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
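Continuing the sketch above, and assuming forward applies the same normalization as transform, this means calling the fitted instance rather than forward directly:
>>> x_norm = normalizer(x)          # preferred: runs registered hooks
>>> x_norm = normalizer.forward(x)  # works, but silently ignores hooks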
- cuda()[source]
Move all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized.
Note
This method modifies the module in-place.
- Args:
- device (int, optional): if specified, all parameters will be copied to that device
- Returns:
Module: self
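A generic ordering sketch (using a plain nn.Linear for illustration; not specific to this class):
>>> model = torch.nn.Linear(2, 2)
>>> model = model.cuda()                                     # move parameters and buffers first
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # then construct the optimizer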
- cpu()[source]
Move all model parameters and buffers to the CPU.
Note
This method modifies the module in-place.
- Returns:
Module: self
- to(device)[source]
Move and/or cast the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Args:
- device (torch.device): the desired device of the parameters and buffers in this module
- dtype (torch.dtype): the desired floating point or complex dtype of the parameters and buffers in this module
- tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
- memory_format (torch.memory_format): the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
Module: self
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- classmethod from_dataset(dataset, dim=None, keys=None, mask=None)[source]
Return a dictionary of normalizer instances, fitted on the given dataset
- Parameters:
- dataset : pytorch dataset
each element must be a dict {key: sample}, e.g. {'x': input_samples, 'y': target_labels}
- dim : int list, default is None
If None, reduce over all dims (scalar mean and std).
Otherwise, must include the batch dimension and all other dims to reduce over.
- keys : str list or None
if not None, a normalizer is instantiated only for the given keys