mindspore.nn.Dropout
- class mindspore.nn.Dropout(keep_prob=0.5, dtype=mstype.float32)
- Dropout layer for the input.
- Randomly sets some elements of the input tensor to zero with probability \(1 - keep\_prob\) during training, using samples from a Bernoulli distribution. The outputs are scaled by a factor of \(\frac{1}{keep\_prob}\) during training so that the output remains at a similar scale; a minimal sketch of this behavior follows the parameter list below. During inference, this layer returns the same tensor as x.
- This technique was proposed in the paper Dropout: A Simple Way to Prevent Neural Networks from Overfitting and has proven effective at reducing over-fitting and preventing co-adaptation of neurons. See more details in Improving neural networks by preventing co-adaptation of feature detectors.
- Note
- Each channel will be zeroed out independently on every construct call.
- Parameters
- keep_prob (float) – The keep rate, greater than 0 and less than or equal to 1. E.g. keep_prob=0.9 drops out 10% of input units. Default: 0.5.
- dtype (mindspore.dtype) – Data type of x. Default: mindspore.float32.
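
To make the scaling concrete, the following minimal sketch (an illustration, not part of the API reference; it assumes the usual MindSpore imports, and the exact zeroed positions vary from run to run) contrasts training and inference behavior: in training mode roughly \(1 - keep\_prob\) of the elements become zero and the survivors are scaled by \(\frac{1}{keep\_prob}\), while in inference mode the input passes through unchanged.

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> x = Tensor(np.ones([4, 8]), mindspore.float32)
>>> net = nn.Dropout(keep_prob=0.8)
>>> _ = net.set_train(True)   # training mode: elements zeroed with probability 0.2, survivors become 1/0.8 = 1.25
>>> train_out = net(x)
>>> _ = net.set_train(False)  # inference mode: identity, output equals x
>>> infer_out = net(x)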
 
 - Inputs:
- x (Tensor) - The input of Dropout with data type of float16 or float32. The shape is \((N, *)\), where \(*\) means any number of additional dimensions. 
 
- Outputs:
- Tensor, output tensor with the same shape as x. 
 - Raises
- TypeError – If keep_prob is not a float. 
- TypeError – If dtype of x is neither float16 nor float32. 
- ValueError – If keep_prob is not in range (0, 1]. 
- ValueError – If the length of the shape of x is less than 1. 
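
As a quick sanity check (a sketch, assuming the range check runs at construction time), an out-of-range keep_prob fails immediately:

>>> import mindspore.nn as nn
>>> try:
...     nn.Dropout(keep_prob=0.0)  # keep_prob must lie in (0, 1]
... except ValueError as e:
...     print(type(e).__name__)
ValueError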
 
 - Supported Platforms:
- Ascend GPU CPU
- Examples

  >>> import numpy as np
  >>> import mindspore
  >>> import mindspore.nn as nn
  >>> from mindspore import Tensor
  >>> x = Tensor(np.ones([2, 2, 3]), mindspore.float32)
  >>> net = nn.Dropout(keep_prob=0.8)
  >>> net.set_train()
  Dropout<keep_prob=0.8>
  >>> output = net(x)
  >>> print(output.shape)
  (2, 2, 3)
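
For intuition about the math the layer applies during training, here is a plain NumPy sketch (a simplified model of the behavior described above, not the MindSpore implementation; mask positions are random):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> keep_prob = 0.8
>>> x = np.ones([2, 2, 3], dtype=np.float32)
>>> mask = rng.binomial(1, keep_prob, size=x.shape)  # Bernoulli(keep_prob) keep mask
>>> output = x * mask / keep_prob                    # kept elements scaled by 1/keep_prob
>>> output.shape
(2, 2, 3)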