# mindspore.ops.SparseSoftmaxCrossEntropyWithLogits

`class mindspore.ops.SparseSoftmaxCrossEntropyWithLogits(is_grad=False)` [source]

Computes the softmax cross-entropy value between logits and sparse-encoded labels.

Denote the input logits as $X$ and the labels as $y$, with batch size $N$ and number of classes $C$. Then,

$\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X)_{ij} = \frac{\exp(X_{ij})}{\sum_{k = 0}^{C-1}\exp(X_{ik})} \\ loss_{i} = -\ln(p_{i,y_i}) \\ loss = \frac{1}{N}\sum_{i = 0}^{N-1} loss_{i} \end{array}\end{split}$

Note that the scalar loss is the cross-entropy averaged over the batch.
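The computation can be checked numerically without MindSpore. The sketch below is illustrative pure Python, not the MindSpore implementation; the helper name `sparse_softmax_ce` is hypothetical, and it assumes mean reduction over the batch, which is what the example output in this page reflects.

```python
import math

def sparse_softmax_ce(logits, labels):
    """Mean-reduced sparse softmax cross-entropy over a batch of logits."""
    total = 0.0
    for row, y in zip(logits, labels):
        m = max(row)  # subtract the row max for numerical stability
        log_sum_exp = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_sum_exp - row[y]  # equals -log softmax(row)[y]
    return total / len(logits)

loss = sparse_softmax_ce([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], [0, 1])
print(loss)  # close to the float32 example output 3.4878292 shown below
```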
Parameters:

is_grad (bool) – If True, this operation returns the computed gradient instead of the loss. Default: False.

Inputs:
• logits (Tensor) - Input logits, with shape $(N, C)$. Data type must be float16 or float32.

• labels (Tensor) - Ground truth labels, with shape $(N,)$. Data type must be int32 or int64.

Outputs:

Tensor. If is_grad is False, the output is the loss value, a scalar tensor; if is_grad is True, the output is the gradient with respect to the logits, with the same shape as logits.
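For reference, the gradient of the mean-reduced loss with respect to the logits is $(softmax(X) - onehot(y)) / N$. A minimal illustrative sketch in pure Python (the helper name `sparse_softmax_ce_grad` is hypothetical, not a MindSpore API):

```python
import math

def sparse_softmax_ce_grad(logits, labels):
    """Gradient of the mean-reduced sparse softmax cross-entropy w.r.t.
    logits: (softmax(logits) - one_hot(labels)) / N, same shape as logits."""
    n = len(logits)
    grads = []
    for row, y in zip(logits, labels):
        m = max(row)  # stability shift before exponentiating
        exps = [math.exp(x - m) for x in row]
        s = sum(exps)
        g = [e / s for e in exps]  # softmax probabilities
        g[y] -= 1.0                # subtract the one-hot target
        grads.append([v / n for v in g])
    return grads

grad = sparse_softmax_ce_grad([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], [0, 1])
```

Each gradient row sums to zero, since the softmax probabilities sum to one and the one-hot target subtracts exactly one unit.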

Raises:
• TypeError – If is_grad is not a bool.

• TypeError – If dtype of logits is neither float16 nor float32.

• TypeError – If dtype of labels is neither int32 nor int64.

• ValueError – If logits.shape[0] != labels.shape[0].

Supported Platforms:

GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor([[2, 3, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([0, 1], mindspore.int32)
>>> sparse_softmax_cross = ops.SparseSoftmaxCrossEntropyWithLogits()
>>> loss = sparse_softmax_cross(logits, labels)
>>> print(loss)
3.4878292
>>> sparse_softmax_cross_grad = ops.SparseSoftmaxCrossEntropyWithLogits(is_grad=True)
>>> loss_grad = sparse_softmax_cross_grad(logits, labels)
>>> print(loss_grad)
[[-0.48415753  0.04306427  0.00582811  0.11706084  0.3182043 ]
[ 0.04007946 -0.4852556   0.04007946  0.2961494   0.10894729]]