mindspore.nn.Dropout1d

class mindspore.nn.Dropout1d(p=0.5)

During training, randomly zeroes entire channels of the input tensor with probability p, using samples from a Bernoulli distribution. For a 3-dimensional tensor of shape \((N, C, L)\), a channel is the 1-dimensional feature map of shape \((L,)\).

For example, the \(j\)-th channel of the \(i\)-th sample in the batched input is the 1D tensor input[i, j]. On every forward call, each channel is zeroed out independently with probability p.

The paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting describes this technique and shows that it effectively reduces overfitting and prevents the co-adaptation of neurons. For more details, refer to Improving neural networks by preventing co-adaptation of feature detectors.

Dropout1d can improve the independence between channel feature maps.
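The channel-wise behavior can be sketched in plain NumPy. The `dropout1d_sketch` helper below is hypothetical and not part of MindSpore; it assumes the common dropout convention of scaling kept channels by \(1/(1-p)\) so the expected value of the output matches the input during training.

```python
import numpy as np

def dropout1d_sketch(x, p=0.5, rng=None):
    # Hypothetical sketch of channel-wise dropout semantics.
    # x has shape (N, C, L); one Bernoulli sample is drawn per (i, j)
    # channel, so each 1D feature map x[i, j] is kept or zeroed as a whole.
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(x.shape[:-1]) >= p          # shape (N, C), True = keep
    # Broadcast the per-channel mask over L; scale survivors by 1/(1 - p)
    # (assumed inverted-dropout scaling, as in most frameworks).
    return x * keep[..., None] / (1.0 - p)

x = np.ones((2, 4, 5))
y = dropout1d_sketch(x, p=0.5)
# each channel y[i, j] is either all zeros or uniformly scaled to 2.0
```

Because the mask is drawn per channel rather than per element, neighboring positions inside one feature map are never dropped independently, which is what encourages independence between channel feature maps.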

Parameters

p (float, optional) – The probability of dropping a channel, between 0 and 1. For example, p = 0.8 means each channel has an 80% chance of being zeroed. Default: 0.5 .

Inputs:
  • x (Tensor) - A tensor with shape \((N, C, L)\) or \((C, L)\), where N is the batch size, C is the number of channels, L is the feature length. The data type must be int8, int16, int32, int64, float16, float32 or float64.

Outputs:

Tensor, has the same shape and data type as x.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> op = ms.nn.Dropout1d(p=0.6)
>>> op.training = True
>>> a = ms.Tensor(np.ones((3, 3)), ms.float32)
>>> output = op(a)
>>> print(output.shape)
(3, 3)