neuralop.losses.data_losses.HdivLoss

class neuralop.losses.data_losses.HdivLoss(d=1, measure=1.0, reduction='sum', eps=1e-08, periodic_in_x=True, periodic_in_y=True, periodic_in_z=True)[source]

Hdiv Sobolev norm between two d-dimensional discretized functions.

Note

In function space, the Sobolev norm is an integral over the entire domain. To ensure the norm converges to the integral, we scale the matrix norm by quadrature weights along each spatial dimension.

If no quadrature is passed at a call to HdivLoss, we assume a regular discretization and take 1 / measure as the quadrature weights.
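
For illustration, a minimal sketch of this default (the tensor shapes and values here are assumptions for the example, not requirements stated on this page):

>>> import torch
>>> from neuralop.losses.data_losses import HdivLoss
>>> loss_fn = HdivLoss(d=2)  # 2-dimensional input functions
>>> y_pred = torch.randn(4, 1, 64, 64)  # assumed (batch, channels, height, width) layout
>>> y = torch.randn(4, 1, 64, 64)
>>> loss = loss_fn(y_pred, y)  # no quadrature passed: uniform weights are assumed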

Parameters:
d : int, optional

dimension of input functions, by default 1

measure : float or list, optional

measure of the domain, by default 1.0; either a single scalar applied to every dimension, or one value per dimension

To perform quadrature, HdivLoss scales measure by the size of each spatial dimension of x and multiplies the resulting weights with ||x-y||, so that the final norm is a scaled average over the spatial dimensions of x.

reduction : str, optional

whether to reduce across the batch and channel dimensions by summing (‘sum’) or averaging (‘mean’)

eps : float, optional

small number added to the denominator for numerical stability when using the relative loss

periodic_in_x : bool, optional

whether to use periodic boundary conditions in the x-direction when computing finite differences, by default True:

- True: periodic in x
- False: non-periodic in x, with forward/backward differences at the boundaries

periodic_in_y : bool, optional

whether to use periodic boundary conditions in the y-direction when computing finite differences, by default True:

- True: periodic in y
- False: non-periodic in y, with forward/backward differences at the boundaries

periodic_in_z : bool, optional

whether to use periodic boundary conditions in the z-direction when computing finite differences, by default True:

- True: periodic in z
- False: non-periodic in z, with forward/backward differences at the boundaries
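
As a sketch of how the boundary-condition flags combine (parameter names as documented above):

>>> # periodic in x, one-sided (forward/backward) differences at the y-boundaries
>>> loss_fn = HdivLoss(d=2, periodic_in_x=True, periodic_in_y=False)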

Attributes:
name

Methods

__call__(y_pred, y[, quadrature, take_root])

abs(x, y[, quadrature, take_root])

absolute Hdiv norm

compute_terms(x, y, quadrature)

compute_terms computes the finite-difference derivative terms needed for the Hdiv norm: it returns x and y along with their divergence terms.

reduce_all(x)

reduce x across the batch according to self.reduction

rel(x, y[, quadrature, take_root])

relative Hdiv norm

uniform_quadrature(x)

uniform_quadrature creates quadrature weights scaled by the spatial size of x to ensure that HdivLoss computes the average over spatial dims.

compute_terms(x, y, quadrature)[source]

compute_terms computes the finite-difference derivative terms needed for the Hdiv norm: it returns x and y along with their divergence terms.

Parameters:
x : torch.Tensor

inputs

y : torch.Tensor

targets

quadrature : float or list

quadrature weights
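
A sketch of calling compute_terms directly, reusing loss_fn, y_pred, and y from the example above; since this page does not specify the exact layout of the returned terms, they are left unpacked here:

>>> quadrature = loss_fn.uniform_quadrature(y_pred)
>>> terms = loss_fn.compute_terms(y_pred, y, quadrature)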

uniform_quadrature(x)[source]

uniform_quadrature creates quadrature weights scaled by the spatial size of x to ensure that HdivLoss computes the average over spatial dims.

Parameters:
x : torch.Tensor

input data

Returns:
quadrature : list

list of quadrature weights per-dim
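
Per the Returns description, the list holds one weight per spatial dimension, so a 2D input (shapes assumed as in the earlier example) yields a list of length 2:

>>> quadrature = loss_fn.uniform_quadrature(torch.randn(4, 1, 64, 64))
>>> len(quadrature)
2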

reduce_all(x)[source]

reduce x across the batch according to self.reduction

abs(x, y, quadrature=None, take_root=True)[source]

absolute Hdiv norm

Parameters:
x : torch.Tensor

inputs

y : torch.Tensor

targets

quadrature : float or list, optional

quadrature constant for reduction along each dim, by default None

take_root : bool, optional

whether to take the square root of the norm, by default True
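
A sketch, reusing the tensors from the example above:

>>> loss_fn.abs(y_pred, y)                   # quadrature inferred from the inputs
>>> loss_fn.abs(y_pred, y, take_root=False)  # squared norm, no square root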

rel(x, y, quadrature=None, take_root=True)[source]

relative Hdiv norm

Parameters:
x : torch.Tensor

inputs

y : torch.Tensor

targets

quadrature : float or list, optional

quadrature constant for reduction along each dim, by default None

take_root : bool, optional

whether to take the square root of the norm, by default True
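
A sketch of the relative variant, which normalizes the error by the corresponding norm of the targets (with eps added to the denominator for stability, as described above):

>>> loss_fn.rel(y_pred, y)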