mindspore.train.Perplexity
- class mindspore.train.Perplexity(ignore_label=None)[source]
Computes perplexity. Perplexity measures how well a probability distribution or model predicts a sample. A low perplexity indicates that the model predicts the sample well. The function is shown as follows:
\[PP(W)=P(w_{1}w_{2}...w_{N})^{-\frac{1}{N}}=\sqrt[N]{\frac{1}{P(w_{1}w_{2}...w_{N})}}\]

where \(w_{1} \ldots w_{N}\) are the words in a corpus of size \(N\). The expression under the root is the reciprocal of the probability of the sentence: the more probable the sentence, the lower the perplexity.
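The formula above can be evaluated directly. A minimal NumPy sketch, with assumed per-word probabilities for illustration, showing that the root form and the equivalent average-negative-log-probability form agree:

```python
import numpy as np

# Hypothetical sentence of N = 3 words with assumed per-word
# probabilities (words treated as independent for simplicity).
word_probs = np.array([0.5, 0.1, 0.6])
N = len(word_probs)

# PP(W) = P(w1...wN)^(-1/N): the N-th root of the reciprocal
# of the joint sentence probability.
joint = np.prod(word_probs)
perplexity = joint ** (-1.0 / N)

# Equivalent form: exp of the average negative log-probability.
perplexity_log = np.exp(-np.mean(np.log(word_probs)))

assert np.isclose(perplexity, perplexity_log)
print(perplexity)
```

Lower per-word probabilities drive the joint probability down and the perplexity up, matching the intuition that a worse model is "more perplexed" by the sample.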
- Parameters
ignore_label (Union[int, None], optional) – Index of an invalid label to be ignored when counting. If set to None, all entries are included. Default: None.
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Perplexity
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = Perplexity(ignore_label=None)
>>> metric.clear()
>>> metric.update(x, y)
>>> perplexity = metric.eval()
>>> print(perplexity)
2.231443166940565
- eval()[source]
Returns the current evaluation result.
- Returns
numpy.float64. The computed result.
- Raises
RuntimeError – If the sample size is 0.
- update(*inputs)[source]
Updates the internal evaluation result with preds and labels.
- Parameters
inputs – Input preds and labels. preds and labels support Tensor, list or numpy.ndarray. preds holds the predicted probabilities with shape \((N, C)\); labels holds the class indices of the data with shape \((N,)\), as in the example above.
- Raises
ValueError – If the number of the inputs is not 2.
RuntimeError – If preds and labels have different lengths.
RuntimeError – If labels shape is not equal to preds shape.
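The computation behind update/eval can be reproduced by hand. A minimal NumPy sketch (not the library's implementation) that averages the negative log-probability of each true class and exponentiates, assuming ignore_label simply masks out entries whose label matches it:

```python
import numpy as np

def perplexity_sketch(preds, labels, ignore_label=None):
    """Hedged sketch of what Perplexity.update/eval compute.

    preds: (N, C) predicted class probabilities.
    labels: (N,) true class indices.
    Entries whose label equals ignore_label are assumed to be
    excluded from both the log-probability sum and the count.
    """
    preds = np.asarray(preds, dtype=np.float64)
    labels = np.asarray(labels)
    mask = np.ones(labels.shape, dtype=bool)
    if ignore_label is not None:
        mask = labels != ignore_label
    # Probability each row assigns to its true class.
    true_probs = preds[np.arange(len(labels)), labels][mask]
    if true_probs.size == 0:
        raise RuntimeError("sample size is 0")
    # Perplexity: exp of the average negative log-likelihood.
    return np.exp(-np.mean(np.log(true_probs)))

# Same inputs as the docstring example above.
x = np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]])
y = np.array([1, 0, 1])
print(perplexity_sketch(x, y))  # → ~2.2314, matching the example
```

The RuntimeError branch mirrors the documented failure when the (unmasked) sample size is 0; with ignore_label set, only the unmasked rows contribute to the average.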