mindspore.ops.l1_loss

mindspore.ops.l1_loss(input, target, reduction='mean')

Calculate the mean absolute error between the input and target values.

Assuming that \(x\) and \(y\) are 1-D Tensors of length \(N\), and reduction is set to “none”, the loss of \(x\) and \(y\) is calculated without dimensionality reduction.

The formula is as follows:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad \text{with } l_n = \left| x_n - y_n \right|,\]

where \(N\) is the batch size.

If reduction is “mean” or “sum”, then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
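
For instance, a minimal sketch based on the formulas above (the printed values follow directly from the definition of \(l_n\)):

>>> import mindspore as ms
>>> from mindspore import ops
>>> x = ms.Tensor([1., 2., 3.], ms.float32)
>>> y = ms.Tensor([3., 2., 1.], ms.float32)
>>> # reduction="none" keeps the elementwise losses l_n = |x_n - y_n|
>>> print(ops.l1_loss(x, y, reduction="none"))
[2. 0. 2.]
>>> # reduction="sum" and reduction="mean" reduce L as in the formula above
>>> print(ops.l1_loss(x, y, reduction="sum"))
4.0
>>> print(ops.l1_loss(x, y, reduction="mean"))
1.3333334
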
Parameters
  • input (Tensor) – Predicted value, Tensor of any dimension.

  • target (Tensor) – Target value, usually with the same shape as the input. If input and target have different shapes, make sure they can broadcast to each other.

  • reduction (str, optional) – Type of reduction to apply to the loss. Valid values are “mean”, “sum”, and “none”. Default: 'mean'.

Returns

Tensor or Scalar. If reduction is “none”, a Tensor with the same shape and dtype as input is returned. Otherwise, a scalar value is returned.

Raises
  • TypeError – If input is not a Tensor.

  • TypeError – If target is not a Tensor.

  • ValueError – If reduction is not one of “none”, “mean” or “sum”.
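
For example, an illustrative sketch of the last case (the exact error message may vary across versions, so only the exception type is printed):

>>> import mindspore as ms
>>> from mindspore import ops
>>> x = ms.Tensor([1., 2.], ms.float32)
>>> try:
...     ops.l1_loss(x, x, reduction="avg")  # "avg" is not a supported reduction
... except ValueError as e:
...     print(type(e).__name__)
ValueError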

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore as ms
>>> from mindspore import ops
>>> x = ms.Tensor([[1, 2, 3], [4, 5, 6]], ms.float32)
>>> target = ms.Tensor([[6, 5, 4], [3, 2, 1]], ms.float32)
>>> output = ops.l1_loss(x, target, reduction="mean")
>>> print(output)
3.0
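
A further sketch building on the example above: with reduction="none" the output keeps the input's shape, and with reduction="sum" the elementwise losses are summed (values follow from the formula; no broadcasting is needed here since the shapes already match):

>>> print(ops.l1_loss(x, target, reduction="none"))
[[5. 3. 1.]
 [1. 3. 5.]]
>>> print(ops.l1_loss(x, target, reduction="sum"))
18.0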