mindspore.nn.MultiClassDiceLoss

class mindspore.nn.MultiClassDiceLoss(weights=None, ignore_indiex=None, activation='softmax')[source]

For multi-class classification, the label is converted into multiple binary labels via one-hot encoding. Each channel can then be treated as a binary classification problem, so the loss is obtained by computing the binary mindspore.nn.DiceLoss for each class and averaging these per-class binary losses.
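The averaging described above can be sketched in plain NumPy. This is a minimal reference implementation, not the MindSpore source: it assumes the binary Dice loss uses the form \(1 - (2|A \cap B| + smooth)/(|A|^2 + |B|^2 + smooth)\) with a small smoothing term, applies softmax over the class axis, and averages the per-class losses.

```python
import numpy as np

def binary_dice_loss(pred, label, smooth=1e-5):
    # Binary Dice loss for one channel: 1 - 2*|A∩B| / (|A| + |B|),
    # with a small smoothing term for numerical stability
    intersection = np.sum(pred * label)
    union = np.sum(pred * pred) + np.sum(label * label)
    return 1.0 - (2.0 * intersection + smooth) / (union + smooth)

def multiclass_dice_loss(logits, labels):
    # Numerically stable softmax over the class axis
    # (axis=1, matching the (N, C) shape of the example below)
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    # Average the binary Dice losses computed per class channel
    per_class = [binary_dice_loss(probs[:, c], labels[:, c])
                 for c in range(labels.shape[1])]
    return float(np.mean(per_class))

logits = np.array([[0.2, 0.5, 0.7], [0.3, 0.1, 0.5], [0.9, 0.6, 0.3]])
labels = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(multiclass_dice_loss(logits, labels))  # → approximately 0.5496
```

On the inputs from the Examples section this sketch yields about 0.5496, in line with the documented output, which suggests the assumed binary Dice form is close to what the class computes.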

Warning

This interface is deprecated and will be removed after version 2.9.0.

Parameters
  • weights (Union[Tensor, None]) – Tensor of shape \((num\_classes, dim)\). weights.shape[0] must be equal to labels.shape[1]. Default: None .

  • ignore_indiex (Union[int, None]) – Class index to ignore. Default: None .

  • activation (Union[str, Cell]) – Activation function applied to the output of the fully connected layer, e.g. 'ReLU'. Default: 'softmax' . Choose from: [ 'softmax' , 'logsoftmax' , 'relu' , 'relu6' , 'tanh' , 'Sigmoid' ]

Inputs:
  • logits (Tensor) - Tensor of shape \((N, C, *)\), where \(*\) means any number of additional dimensions. The logits dimension should be greater than 1. The data type must be float16 or float32.

  • labels (Tensor) - Tensor of shape \((N, C, *)\), same shape as the logits. The labels dimension should be greater than 1. The data type must be float16 or float32.
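Since labels must be one-hot encoded floats of the same shape as logits, integer class indices need to be converted first. A minimal sketch using NumPy (the variable names here are illustrative, not part of the API):

```python
import numpy as np

# Class indices for 3 samples; MultiClassDiceLoss expects
# one-hot float labels of shape (N, C), not integer indices
class_ids = np.array([1, 0, 2])
num_classes = 3

# Indexing the identity matrix one-hot encodes each index
labels = np.eye(num_classes, dtype=np.float32)[class_ids]
print(labels)
# [[0. 1. 0.]
#  [1. 0. 0.]
#  [0. 0. 1.]]
```

This produces exactly the labels tensor used in the Examples section below.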

Outputs:

Tensor, a scalar tensor containing the multi-class Dice loss, i.e. the average of the per-class binary Dice losses.

Raises
  • ValueError – If the shape of logits is different from labels.

  • TypeError – If the type of logits or labels is not a tensor.

  • ValueError – If the dimension of logits or labels is less than 2.

  • ValueError – If the weights.shape[0] is not equal to labels.shape[1].

  • ValueError – If weights is a tensor, but its dimension is not 2.

Supported Platforms:

Deprecated

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> loss = nn.MultiClassDiceLoss(weights=None, ignore_indiex=None, activation="softmax")
>>> logits = Tensor(np.array([[0.2, 0.5, 0.7], [0.3, 0.1, 0.5], [0.9, 0.6, 0.3]]), mindspore.float32)
>>> labels = Tensor(np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.54958105