mindspore.ops.MultilabelMarginLoss

class mindspore.ops.MultilabelMarginLoss(reduction='mean')

Creates a loss criterion that minimizes the hinge loss for multi-class multi-label classification tasks. It takes a 2D mini-batch Tensor \(x\) as input and a 2D Tensor \(y\) of target class indices as the target.

Refer to mindspore.ops.multilabel_margin_loss() for more details.
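
For reference, the standard multilabel margin loss used by comparable frameworks (assumed here to be the definition this operator implements; it is consistent with the example output below) is, for each sample,

\[\text{loss}(x, y) = \sum_{ij} \frac{\max(0,\ 1 - (x[y[j]] - x[i]))}{C}\]

where \(C\) is the number of classes, \(j\) ranges over the valid target indices of \(y\) (the entries before the first -1), and \(i\) ranges over every class index that is not a target. The per-sample losses are then combined according to reduction.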

Parameters

reduction (str, optional) –

Applies a specific reduction method to the output: 'none', 'mean' or 'sum'. Default: 'mean'. A short sketch of the three modes follows this list.

  • 'none': no reduction will be applied.

  • 'mean': the sum of the output will be divided by the number of elements in the output.

  • 'sum': the output will be summed.

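A minimal sketch of how each mode changes the shape of the returned loss (reusing the tensors from the Examples section below):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[0.1, 0.2, 0.4, 0.8], [0.2, 0.3, 0.5, 0.7]]), mindspore.float32)
>>> target = Tensor(np.array([[1, 2, 0, 3], [2, 3, -1, 1]]), mindspore.int32)
>>> y_none, _ = ops.MultilabelMarginLoss(reduction='none')(x, target)
>>> y_none.shape  # one loss value per batch element
(2,)
>>> y_mean, _ = ops.MultilabelMarginLoss(reduction='mean')(x, target)
>>> y_mean.shape  # reduced to a scalar
()
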
Inputs:
  • x (Tensor) - Predicted data. Tensor of shape \((C)\) or \((N, C)\), where \(N\) is the batch size and \(C\) is the number of classes. Data type must be float16 or float32.

  • target (Tensor) - Ground truth data, with the same shape as x. Data type must be int32, and label targets are padded with -1; in each row, entries after the first -1 are ignored.

Outputs:
  • y (Union[Tensor, Scalar]) - The loss of MultilabelMarginLoss. If reduction is 'none', its shape is \((N)\). Otherwise, a scalar value will be returned.

  • is_target (Tensor) - Output tensor for backward input, with the same shape as target and data type int32. Each element is 1 if the corresponding class is a valid target for that sample and 0 otherwise.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> loss = ops.MultilabelMarginLoss()
>>> x = Tensor(np.array([[0.1, 0.2, 0.4, 0.8], [0.2, 0.3, 0.5, 0.7]]), mindspore.float32)
>>> target = Tensor(np.array([[1, 2, 0, 3], [2, 3, -1, 1]]), mindspore.int32)
>>> output = loss(x, target)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 0.325), Tensor(shape=[2, 4], dtype=Int32, value=
[[1, 1, 1, 1], [0, 0, 1, 1]]))
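
To check the printed values by hand, here is a minimal NumPy re-implementation sketch of the loss (an illustration of the assumed formula above, not MindSpore's actual kernel). It reproduces both the 0.325 mean loss and the is_target mask:

>>> import numpy as np
>>> def multilabel_margin_loss_ref(x, target, reduction='mean'):
...     # x: (N, C) float scores; target: (N, C) int class indices padded with -1
...     n, c = x.shape
...     losses = np.zeros(n)
...     is_target = np.zeros((n, c), dtype=np.int32)
...     for k in range(n):
...         ti = target[k]
...         # valid targets are the entries before the first -1
...         stop = int(np.argmax(ti == -1)) if (ti == -1).any() else c
...         tgt = ti[:stop]
...         is_target[k, tgt] = 1
...         non_tgt = np.where(is_target[k] == 0)[0]
...         # hinge terms between every (target, non-target) score pair
...         s = sum(max(0.0, 1.0 - float(x[k, j] - x[k, i]))
...                 for j in tgt for i in non_tgt)
...         losses[k] = s / c
...     if reduction == 'none':
...         return losses, is_target
...     return (losses.mean() if reduction == 'mean' else losses.sum()), is_target
>>> x_np = np.array([[0.1, 0.2, 0.4, 0.8], [0.2, 0.3, 0.5, 0.7]])
>>> t_np = np.array([[1, 2, 0, 3], [2, 3, -1, 1]])
>>> loss_val, it = multilabel_margin_loss_ref(x_np, t_np)
>>> print(round(float(loss_val), 3))
0.325
>>> print(it)
[[1 1 1 1]
 [0 0 1 1]]

In the second sample only classes 2 and 3 are targets (the 1 after the -1 is ignored), so the hinge terms pair scores 0.5 and 0.7 against 0.2 and 0.3, giving (0.7 + 0.8 + 0.5 + 0.6) / 4 = 0.65; the first sample marks every class as a target and contributes 0, and the batch mean is 0.325.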