mindspore.nn.L1Loss

class mindspore.nn.L1Loss(reduction='mean')[source]

L1Loss calculates the mean absolute error (MAE) between the predicted value and the target value.

Assuming that \(x\) and \(y\) are 1-D Tensors of length \(N\), the loss of \(x\) and \(y\) without dimensionality reduction (i.e., with the reduction parameter set to “none”) is computed as follows:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad \text{with } l_n = \left| x_n - y_n \right|,\]

where \(N\) is the batch size. If reduction is not ‘none’, then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
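For instance, substituting \(x = (1, 2, 3)\) and \(y = (1, 2, 2)\) into the formulas above (the same values as Case 1 in the examples below) gives:

\[L = (|1-1|, |2-2|, |3-2|) = (0, 0, 1), \quad \operatorname{mean}(L) = \tfrac{1}{3}, \quad \operatorname{sum}(L) = 1.\]
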
Parameters

reduction (str) – Type of reduction to apply to the loss. The optional values are “mean”, “sum”, and “none”. Default: “mean”. If reduction is “mean” or “sum”, the output is a scalar Tensor; if reduction is “none”, the shape of the output Tensor is the broadcast shape of logits and labels.
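
As a minimal sketch of how reduction changes the output, reusing the values from Case 1 of the examples below (the 'sum' result follows from the worked formula above):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> # 'sum' adds the elementwise absolute errors rather than averaging them
>>> output = nn.L1Loss(reduction='sum')(logits, labels)
>>> print(output)
1.0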

Inputs:
  • logits (Tensor) - Predicted value, Tensor of any dimension.

  • labels (Tensor) - Target value, which usually has the same shape as logits. If the shapes of logits and labels differ, they must be broadcastable to a common shape, as shown in Case 2 of the examples below.

Outputs:

Tensor of float data type.

Raises
  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

  • ValueError – If logits and labels have different shapes and cannot be broadcast to each other.
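
For example, a minimal sketch of the broadcast-failure case described above (shapes (3,) and (2,) share no common broadcast shape):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> loss = nn.L1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2]), mindspore.float32)
>>> loss(logits, labels)  # raises ValueError: shapes cannot be broadcast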

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> # Case 1: logits.shape = labels.shape = (3,)
>>> loss = nn.L1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.33333334
>>> # Case 2: logits.shape = (3,), labels.shape = (2, 3)
>>> loss = nn.L1Loss(reduction='none')
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[[0. 1. 2.]
 [0. 0. 1.]]