mindspore.nn.Perplexity

class mindspore.nn.Perplexity(ignore_label=None)[source]

Computes perplexity. Perplexity is a measure of how well a probability distribution or a model predicts a sample. A low perplexity indicates that the model predicts the sample well. The metric is defined as follows:

\[b^{-\frac{1}{N} \sum_{i=1}^{N} \log_b q(x_i)} = \exp \Big(-\frac{1}{N} \sum_{i=1}^{N} \log q(x_i)\Big)\]
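For instance, taking \(b = e\) and the probabilities that the example below assigns to the true labels (0.5, 0.3 and 0.6), the definition gives

\[\exp \Big(-\tfrac{1}{3}\big(\log 0.5 + \log 0.3 + \log 0.6\big)\Big) \approx 2.2314,\]

which matches the value printed in the example.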
Parameters

ignore_label (int) – Index of an invalid label to be ignored when counting. If set to None, all entries are counted. Default: None.
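For illustration only (not part of the reference example below), passing a valid class index as ignore_label excludes every entry whose label equals that index, so it can be used to skip padding labels. A minimal sketch, assuming the ignored entries drop out of both the sum and the count:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import Perplexity
>>> metric = Perplexity(ignore_label=0)
>>> metric.clear()
>>> metric.update(Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]])),
...               Tensor(np.array([1, 0, 1])))
>>> perplexity = metric.eval()  # assumed to average only over the two entries labeled 1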

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.nn import Perplexity
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = Perplexity(ignore_label=None)
>>> metric.clear()
>>> metric.update(x, y)
>>> perplexity = metric.eval()
>>> print(perplexity)
2.231443166940565
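
The printed value can be cross-checked against the definition with plain NumPy. The snippet below is only an illustrative check: it picks the probability each row of x assigns to its true label in y and applies the formula with b = e.

>>> probs = np.array([0.5, 0.3, 0.6])  # x[i][y[i]] for i = 0, 1, 2
>>> print(np.exp(-np.mean(np.log(probs))).round(6))
2.231443
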
clear()[source]

Clears the internal evaluation result.

eval()[source]

Returns the current evaluation result.

Returns

float, the computed result.

Raises

RuntimeError – If the sample size is 0.

update(*inputs)[source]

Updates the internal evaluation result with preds and labels.

Parameters

inputs – Input preds and labels. preds and labels are each a Tensor, list or numpy.ndarray. preds holds the predicted values and labels holds the labels of the data. The shapes of preds and labels are both \((N, C)\).

Raises

ValueError – If the number of inputs is not 2.