mindspore.nn.BatchNorm1d

class mindspore.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.9, affine=True, gamma_init='ones', beta_init='zeros', moving_mean_init='zeros', moving_var_init='ones', use_batch_statistics=None, data_format='NCHW', dtype=mstype.float32)[source]

This layer applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D or 2D inputs) to reduce internal covariate shift. Batch Normalization is widely used in convolutional networks. For details, refer to Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters, as described in the following formula.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
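
Here \(\mathrm{E}[x]\) and \(\mathrm{Var}[x]\) are the per-channel mean and variance computed over the mini-batch, and \(\gamma\) and \(\beta\) are the learned scale and offset. As a quick sanity check, the formula can be reproduced with plain NumPy; the sketch below assumes the default gamma_init='ones' and beta_init='zeros' and calls set_train(True) so that batch statistics are used:

>>> import numpy as np
>>> import mindspore as ms
>>> x = np.array([[0.7, 0.5, 0.5, 0.6],
...               [0.5, 0.4, 0.6, 0.9]]).astype(np.float32)
>>> # Manual per-channel normalization with gamma=1 and beta=0.
>>> manual = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)
>>> net = ms.nn.BatchNorm1d(num_features=4)
>>> _ = net.set_train(True)  # training mode: normalize with batch statistics
>>> print(np.allclose(manual, net(ms.Tensor(x)).asnumpy(), atol=1e-3))
True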

Note

The implementation of BatchNorm differs between graph mode and PyNative mode; therefore, changing the mode after the network has been initialized is not recommended.

Parameters
  • num_features (int) – Number of features or channels C of the input x.

  • eps (float) – \(\epsilon\) added to the denominator for numerical stability. Default: 1e-5.

  • momentum (float) – A floating-point hyperparameter controlling the momentum for the running_mean and running_var computation. Default: 0.9.

  • affine (bool) – A bool value. When set to True, \(\gamma\) and \(\beta\) can be learned. Default: True.

  • gamma_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\gamma\) weight. String values refer to the initializers in mindspore.common.initializer, including 'zeros', 'ones', etc. Default: 'ones'.

  • beta_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the \(\beta\) weight. String values refer to the initializers in mindspore.common.initializer, including 'zeros', 'ones', etc. Default: 'zeros'.

  • moving_mean_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving mean. String values refer to the initializers in mindspore.common.initializer, including 'zeros', 'ones', etc. Default: 'zeros'.

  • moving_var_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the moving variance. String values refer to the initializers in mindspore.common.initializer, including 'zeros', 'ones', etc. Default: 'ones'.

  • use_batch_statistics (bool) – If True, use the mean and variance of the current batch. If False, use the tracked moving mean and moving variance instead. If None, the training process uses the mean and variance of the current batch while tracking the moving mean and moving variance, and the evaluation process uses the tracked moving mean and moving variance; see the sketch after this parameter list. Default: None.

  • data_format (str) – The format of the input data, either 'NHWC' or 'NCHW'. Default: 'NCHW'.

  • dtype (mindspore.dtype) – Dtype of the Parameters. Default: mstype.float32.
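
A minimal sketch of the use_batch_statistics=None behavior referenced above: in training mode the layer normalizes with batch statistics and updates its moving statistics, while in evaluation mode it uses the tracked values and leaves them unchanged. The attribute names moving_mean and moving_variance are assumed to mirror the initializer arguments above, and the update rule in the comment (moving = momentum * moving + (1 - momentum) * batch statistic, for momentum=0.9) is our reading of the default convention; verify both against your MindSpore version.

>>> import numpy as np
>>> import mindspore as ms
>>> net = ms.nn.BatchNorm1d(num_features=4, momentum=0.9)
>>> x = ms.Tensor(np.random.randn(32, 4).astype(np.float32))
>>> _ = net.set_train(True)   # training: use batch statistics and update the
>>> _ = net(x)                # moving stats, moving = 0.9*moving + 0.1*batch
>>> print(net.moving_mean.asnumpy().shape)
(4,)
>>> _ = net.set_train(False)  # evaluation: use the tracked moving statistics;
>>> _ = net(x)                # moving_mean / moving_variance stay unchanged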

Inputs:
  • x (Tensor) - Tensor of shape \((N, C)\) or \((N, C, L)\), where N is the batch size, C is the number of features or channels, and L is the sequence length. Supported types: float16, float32.

Outputs:

Tensor, the normalized, scaled, offset tensor, of shape \((N, C)\) or \((N, C, L)\).
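
For the 3D case, normalization remains per channel C, with statistics taken over the remaining dimensions (standard batch-normalization behavior); a shape-level sketch:

>>> import numpy as np
>>> import mindspore as ms
>>> net = ms.nn.BatchNorm1d(num_features=3)
>>> x = ms.Tensor(np.random.randn(2, 3, 5).astype(np.float32))
>>> print(net(x).shape)  # shape is preserved: (N, C, L)
(2, 3, 5)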

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> net = ms.nn.BatchNorm1d(num_features=4)
>>> x = ms.Tensor(np.array([[0.7, 0.5, 0.5, 0.6],
...                         [0.5, 0.4, 0.6, 0.9]]).astype(np.float32))
>>> output = net(x)
>>> print(output)
[[ 0.6999965  0.4999975  0.4999975  0.59999704]
 [ 0.4999975  0.399998   0.59999704 0.89999545]]
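
Note that the printed output is nearly identical to the input: since moving_mean_init and moving_var_init default to 'zeros' and 'ones' and the moving statistics are in use here, the formula reduces to \(y \approx x / \sqrt{1 + \epsilon}\).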