mindspore.mint.nn.functional.dropout2d

mindspore.mint.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False)[source]

During training, randomly zeroes entire channels of the input tensor with probability p, using samples from a Bernoulli distribution. (For a 4-dimensional tensor with shape \((N, C, H, W)\), a channel is a 2-dimensional feature map with shape \((H, W)\).)

For example, the \(j\)-th channel of the \(i\)-th sample in the batched input is the 2D tensor input[i, j]. On every forward call, each channel is zeroed out independently, based on samples from a Bernoulli distribution with probability p. The paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting describes this technique and shows that it effectively reduces overfitting and prevents the co-adaptation of neurons. For more details, refer to Improving neural networks by preventing co-adaptation of feature detectors.

dropout2d can improve the independence between channel feature maps.
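
As an illustrative aside, the channel-wise behavior can be sketched in plain NumPy. This is not the library implementation; the function name dropout2d_sketch is hypothetical, and the rescaling of kept channels by \(1/(1-p)\) assumes the usual inverted-dropout convention:

>>> import numpy as np
>>> def dropout2d_sketch(x, p=0.5, training=True, seed=0):
...     # x has shape (N, C, H, W); one Bernoulli sample per (sample, channel)
...     # pair decides whether that whole (H, W) feature map is zeroed.
...     if not training or p == 0.0:
...         return x
...     rng = np.random.default_rng(seed)
...     keep = rng.random((x.shape[0], x.shape[1], 1, 1)) >= p
...     # Kept channels are rescaled by 1 / (1 - p) (assumes p < 1) so the
...     # expected activation matches evaluation mode.
...     return x * keep / (1.0 - p)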

Warning

This is an experimental API that is subject to change or deletion.

Parameters
  • input (Tensor) – A 4D tensor with shape \((N, C, H, W)\), where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.

  • p (float, optional) – The probability of zeroing out a channel, between 0 and 1. For example, p = 0.8 means each channel has an 80% chance of being zeroed. Default: 0.5.

  • training (bool, optional) – If True, dropout is applied; otherwise the input is returned unchanged (see the sketch after this list). Default: True.

  • inplace (bool, optional) – If set to True, the operation is performed in-place. Default: False.
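
The training flag is what distinguishes train-time from evaluation behavior; a hedged usage sketch (the training=False path is deterministic, so the printed result is stable):

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> x = Tensor(np.ones([2, 3, 2, 2]), mindspore.float32)
>>> y = mint.nn.functional.dropout2d(x, p=0.5, training=False)
>>> print((y.asnumpy() == x.asnumpy()).all())
True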

Returns

Tensor, with the same shape and data type as input.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> input = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)
>>> output = mint.nn.functional.dropout2d(input, 0.5)
>>> print(output.shape)
(2, 1, 2, 3)
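
Because input is all ones and p = 0.5, each (sample, channel) slice of output is either entirely zeroed or entirely kept; under the inverted-dropout scaling assumed in the sketch above, kept entries become 1 / (1 - 0.5) = 2.0. Which channels are dropped varies per call, but the all-or-nothing property itself can be checked:

>>> arr = output.asnumpy().reshape(2 * 1, -1)  # one row per (sample, channel)
>>> print(all((row == 0).all() or (row == 2.0).all() for row in arr))
True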