mindflow.loss.RelativeRMSELoss

class mindflow.loss.RelativeRMSELoss(reduction='sum')

Relative Root Mean Square Error (RRMSE) is the root mean square error normalized by the root-mean-square value of the true values, so that each residual is scaled against the magnitude of the labels. RelativeRMSELoss creates a criterion to measure the relative root-mean-square error between \(x\) and \(y\), where \(x\) is the prediction and \(y\) is the labels.

For simplicity, let \(x\) and \(y\) be 1-dimensional Tensors of length \(N\). The loss of \(x\) and \(y\) is given as:

\[loss = \sqrt{\frac{\sum_{i=1}^{N}{(x_i-y_i)^2}}{\sum_{i=1}^{N}{(y_i)^2}}}\]
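
For intuition, the 1-D formula can be reproduced with plain NumPy (a minimal sketch; relative_rmse is an illustrative helper, not part of MindFlow):

>>> import numpy as np
>>> def relative_rmse(x, y):
...     # sqrt( sum((x - y)^2) / sum(y^2) ), matching the formula above
...     return np.sqrt(np.sum((x - y) ** 2) / np.sum(y ** 2))
...
>>> print(relative_rmse(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 2.0])))
0.3333333333333333
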
Parameters

reduction (str) – Type of reduction to apply to the loss. Valid values are "mean", "sum", and "none". Default: "sum".
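
How the reduction modes relate can be sketched in NumPy (a hedged illustration, assuming the relative error is first computed per sample along the first axis and the reduction is then applied to those per-sample values, as the example below suggests):

>>> import numpy as np
>>> per_sample = np.array([1 / 3, 0.0, 0.0])  # illustrative per-sample relative errors
>>> print(per_sample.sum())   # reduction='sum' (the default)
0.3333333333333333
>>> print(per_sample.mean())  # reduction='mean'
0.1111111111111111
>>> # reduction='none' returns the per-sample values unreduced.
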

Inputs:
  • prediction (Tensor) - The prediction value of the network. Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions.

  • labels (Tensor) - True value of the samples. Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions; in common cases it has the same shape as prediction. However, labels may have a different shape from prediction, as long as the two can be broadcast to each other.

Outputs:

Tensor, the loss after the specified reduction is applied.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindflow import RelativeRMSELoss
>>> # Case: prediction.shape = labels.shape = (3, 3)
>>> prediction = Tensor(np.array([[1, 2, 3],[1, 2, 3],[1, 2, 3]]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 2, 2],[1, 2, 3],[1, 2, 3]]), mindspore.float32)
>>> loss_fn = RelativeRMSELoss()
>>> loss = loss_fn(prediction, labels)
>>> print(loss)
0.33333334
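
A follow-up sketch with a different reduction (hedged: assuming the per-sample relative errors for these inputs are [1/3, 0, 0], 'mean' averages them instead of summing):

>>> loss_fn_mean = RelativeRMSELoss(reduction='mean')
>>> loss_mean = loss_fn_mean(prediction, labels)
>>> # expected: approximately 0.11111111, i.e. the mean of the per-sample errors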