mindspore.ops.gaussian_nll_loss

mindspore.ops.gaussian_nll_loss(x, target, var, full=False, eps=1e-06, reduction='mean')[source]

Gaussian negative log likelihood loss.

The target values are treated as samples from a Gaussian distribution whose expectation and variance are predicted by a neural network. Given targets modeled on a Gaussian distribution, a tensor x of expectations, and a tensor var of positive variances, the loss is calculated as:

\[\text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var}, \ \text{eps}\right)\right) + \frac{\left(\text{x} - \text{target}\right)^2} {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}\]

where \(eps\) is used for the stability of \(\log\). When \(full=True\), the constant term \(0.5\log(2\pi)\) is added to the loss. If the shapes of \(var\) and \(x\) are not the same (due to a homoscedastic assumption), they must allow correct broadcasting.
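
For intuition, the following NumPy sketch reproduces the formula above elementwise and applies the default 'mean' reduction. It is an illustrative reference computation only, mirroring the formula rather than the actual MindSpore kernel; the input arrays are arbitrary placeholders.

>>> import numpy as np
>>> x_np = np.array([[0., 1.], [2., 3.]])
>>> target_np = np.array([[2., 3.], [1., 4.]])
>>> var_np = np.ones((2, 1))
>>> eps = 1e-6
>>> var_c = np.maximum(var_np, eps)  # clamp var with eps for log stability
>>> loss = 0.5 * (np.log(var_c) + (x_np - target_np) ** 2 / var_c)
>>> print(loss.mean())  # 'mean' reduction
1.25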

Parameters
  • x (Tensor) – Tensor of shape \((N, *)\) or \((*)\) where \(*\) means any number of additional dimensions.

  • target (Tensor) – Tensor of shape \((N, *)\) or \((*)\), same shape as x, or same shape as x but with one dimension equal to 1 (to allow broadcasting).

  • var (Tensor) – Tensor of shape \((N, *)\) or \((*)\), same shape as x, or same shape as x but with one dimension equal to 1, or same shape as x but with one fewer dimension (to allow for broadcasting); see the shape sketch after this list.

  • full (bool, optional) – Whether to include the constant term in the loss calculation. When \(full=True\), the constant term is \(const = 0.5 * \log(2\pi)\). Default: False.

  • eps (float, optional) – Value used to improve the stability of the log function; must be greater than 0. Default: 1e-6.

  • reduction (str, optional) –

    Apply the specified reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'.

    • 'none': no reduction will be applied.

    • 'mean': compute and return the mean of elements in the output.

    • 'sum': the output elements will be summed.
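
As noted for var above, its shape only needs to broadcast against x. Below is a minimal sketch of two accepted shape combinations; the zero/one tensor values are placeholders chosen only to illustrate the shapes.

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> import mindspore.common.dtype as mstype
>>> x = Tensor(np.zeros((4, 2)), mstype.float32)
>>> target = Tensor(np.zeros((4, 2)), mstype.float32)
>>> var_same = Tensor(np.ones((4, 2)), mstype.float32)    # same shape as x
>>> var_bcast = Tensor(np.ones((4, 1)), mstype.float32)   # one dimension equal to 1
>>> out_same = ops.gaussian_nll_loss(x, target, var_same)
>>> out_bcast = ops.gaussian_nll_loss(x, target, var_bcast)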

Returns

Tensor or scalar Tensor, the computed loss. If reduction is 'none', the elementwise loss is returned; otherwise, the reduced (mean or summed) scalar is returned.

Raises
  • TypeError – If x, target or var is not a Tensor.

  • TypeError – If full is not a bool.

  • TypeError – If eps is not a float.

  • ValueError – If eps is not within the range (0, inf).

  • ValueError – If reduction is not one of 'none', 'mean', 'sum'.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> import mindspore.common.dtype as mstype
>>> arr1 = np.arange(8).reshape((4, 2))
>>> arr2 = np.array([2, 3, 1, 4, 6, 4, 4, 9]).reshape((4, 2))
>>> x = Tensor(arr1, mstype.float32)
>>> var = Tensor(np.ones((4, 1)), mstype.float32)
>>> target = Tensor(arr2, mstype.float32)
>>> output = ops.gaussian_nll_loss(x, target, var)
>>> print(output)
1.4374993
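
A further illustrative call continuing the example above: reduction='none' returns the elementwise loss (so the output keeps the broadcast shape of x), and full=True adds the constant \(0.5\log(2\pi)\) to every element. Only the output shape is asserted here.

>>> output_none = ops.gaussian_nll_loss(x, target, var, full=True, reduction='none')
>>> print(output_none.shape)
(4, 2)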
Reference:

Nix, D. A. and Weigend, A. S., “Estimating the mean and variance of the target probability distribution”, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi: 10.1109/ICNN.1994.374138.