mindspore.ops.SmoothL1Loss

class mindspore.ops.SmoothL1Loss(beta=1.0)

Computes smooth L1 loss, a robust L1 loss.

SmoothL1Loss is a loss function similar to MSELoss but less sensitive to outliers, as described in the paper Fast R-CNN by Ross Girshick.

Given two inputs \(x,\ y\) of length \(N\), the unreduced SmoothL1Loss can be described as follows:

\[\begin{split}L_{i} = \begin{cases} \frac{0.5 (x_i - y_i)^{2}}{\text{beta}}, & \text{if } |x_i - y_i| < \text{beta} \\ |x_i - y_i| - 0.5 \text{beta}, & \text{otherwise. } \end{cases}\end{split}\]

Here \(\text{beta}\) controls the point where the loss function changes from quadratic to linear. Its default value is 1.0. \(N\) is the batch size. This function returns an unreduced loss Tensor.
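For reference, the piecewise definition above can be reproduced element-wise in plain NumPy (a minimal sketch, not part of the MindSpore API; the helper name smooth_l1 is illustrative):

>>> import numpy as np
>>> def smooth_l1(x, y, beta=1.0):
...     # Quadratic inside the |x - y| < beta region, linear outside it.
...     diff = np.abs(x - y)
...     return np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
>>> smooth_l1(np.array([1., 2., 3.]), np.array([1., 2., 2.]))
array([0. , 0. , 0.5])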

Warning

This operator does not perform the “reduce” operation on the loss value. Call other reduce operators to reduce the loss if required.
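For instance, a mean-reduced loss can be obtained by passing the output through a reduce operator such as mindspore.ops.ReduceMean (a minimal sketch; any other reduce operator can be substituted):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> loss = ops.SmoothL1Loss()
>>> reduce_mean = ops.ReduceMean()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> print(reduce_mean(loss(logits, labels)))
0.16666667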

Parameters

beta (float) – A parameter used to control the point where the function will change from quadratic to linear. Default: 1.0.
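The effect of beta can be illustrated on a single element with an absolute error of 1.0: a smaller beta places it in the linear branch, a larger beta in the quadratic one (a minimal sketch with illustrative values):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([0.0]), mindspore.float32)
>>> labels = Tensor(np.array([1.0]), mindspore.float32)
>>> print(ops.SmoothL1Loss(beta=0.5)(logits, labels))  # linear branch: 1.0 - 0.5 * 0.5
[0.75]
>>> print(ops.SmoothL1Loss(beta=2.0)(logits, labels))  # quadratic branch: 0.5 * 1.0 / 2.0
[0.25]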

Inputs:
  • logits (Tensor) - Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions. Data type must be float16 or float32.

  • labels (Tensor) - Ground truth data, tensor of shape \((N, *)\), with the same shape and dtype as logits.

Outputs:

Tensor, the unreduced loss, with the same shape and dtype as logits.

Raises
  • TypeError – If beta is not a float.

  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • ValueError – If beta is less than or equal to 0.

  • ValueError – If the shape of logits is not the same as the shape of labels.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> loss = ops.SmoothL1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[0.  0.  0.5]