Functional Differences Compared with tf.math.reduce_std

tf.math.reduce_std

tf.math.reduce_std(input_tensor, axis=None, keepdims=False, name=None)

For more information, see tf.math.reduce_std.

mindspore.Tensor.std

mindspore.Tensor.std(self, axis=None, ddof=0, keepdims=False)

For more information, see mindspore.Tensor.std.

Usage

The two APIs serve essentially the same purpose: both compute the standard deviation of a Tensor along a given dimension, following the formula std = sqrt(mean(x)), where x = abs(a - a.mean())**2.

The difference is that mindspore.Tensor.std takes an extra parameter, ddof. Normally the mean of x is x.sum() / N, where N = len(x); when ddof is set, the denominator changes from N to N - ddof (so ddof=1 gives the sample standard deviation).
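The effect of ddof on the denominator can be reproduced with plain NumPy (a sketch for illustration only; NumPy's std exposes the same ddof semantics):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]], dtype=np.float32)
squared_dev = np.abs(a - a.mean()) ** 2           # [[2.25, 0.25], [0.25, 2.25]]

n = a.size                                        # N = 4
std_ddof0 = np.sqrt(squared_dev.sum() / n)        # sqrt(5/4) ≈ 1.118034
std_ddof1 = np.sqrt(squared_dev.sum() / (n - 1))  # sqrt(5/3) ≈ 1.2909944

# Matches NumPy's built-in ddof handling:
print(std_ddof0, np.std(a))          # same value
print(std_ddof1, np.std(a, ddof=1))  # same value
```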

Code Example

import mindspore as ms
import numpy as np

a = ms.Tensor(np.array([[1, 2], [3, 4]]), ms.float32)
print(a.std()) # 1.118034
print(a.std(axis=0)) # [1. 1.]
print(a.std(axis=1)) # [0.5 0.5]
print(a.std(ddof=1)) # 1.2909944

import tensorflow as tf
tf.enable_eager_execution()  # TensorFlow 1.x only; eager execution is enabled by default in TensorFlow 2.x

x = tf.constant([[1., 2.], [3., 4.]])
print(tf.math.reduce_std(x).numpy())  # 1.118034
print(tf.math.reduce_std(x, 0).numpy())  # [1. 1.]
print(tf.math.reduce_std(x, 1).numpy())  # [0.5 0.5]