mindspore.nn.probability.bijector.Softplus
- class mindspore.nn.probability.bijector.Softplus(sharpness=1.0, name='Softplus')
Softplus Bijector. This Bijector performs the operation:
\[Y = \frac{\log(1 + e^{kX})}{k}\]
where k is the sharpness factor.
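To make the transformation concrete, here is a minimal NumPy sketch (illustrative only, independent of the bijector class itself) that evaluates the forward map and confirms that the analytical inverse, X = log(e^{kY} - 1)/k, recovers the input:
>>> import numpy as np
>>> k = 2.0  # sharpness factor
>>> x = np.array([1.0, 2.0, 3.0])
>>> # Forward: Y = log(1 + exp(k * X)) / k
>>> y = np.log1p(np.exp(k * x)) / k
>>> # Inverse: X = log(exp(k * Y) - 1) / k
>>> x_rec = np.log(np.expm1(k * y)) / k
>>> print(np.allclose(x, x_rec))
True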
- Parameters
sharpness (float, list, numpy.ndarray, Tensor) – The sharpness factor k in the transformation. Default: 1.0.
name (str) – The name of the Bijector. Default: 'Softplus'.
- Inputs and Outputs of APIs:
The accessible APIs of the Softplus bijector are defined in the base class, including:
forward
inverse
forward_log_jacobian
inverse_log_jacobian
Note that the input to each of these APIs should always be a tensor, with a shape that can be broadcast to that of sharpness. For details of all APIs, including their inputs and outputs, please refer to mindspore.nn.probability.bijector.Bijector and the examples below.
- Supported Platforms:
Ascend GPU
Note
The dtype of sharpness must be float.
- Raises
TypeError – When the dtype of the sharpness is not float.
Examples
>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.nn.probability.bijector as msb
>>> from mindspore import Tensor
>>>
>>> # To initialize a Softplus bijector of sharpness 2.0.
>>> softplus = msb.Softplus(2.0)
>>> # To use the Softplus bijector in a network.
>>> value = Tensor([1, 2, 3], dtype=mindspore.float32)
>>> ans1 = softplus.forward(value)
>>> print(ans1.shape)
(3,)
>>> ans2 = softplus.inverse(value)
>>> print(ans2.shape)
(3,)
>>> ans3 = softplus.forward_log_jacobian(value)
>>> print(ans3.shape)
(3,)
>>> ans4 = softplus.inverse_log_jacobian(value)
>>> print(ans4.shape)
(3,)
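As a hedged aside (not part of the official API description), the forward log-Jacobian of this bijector has the closed form log(dY/dX) = log(sigmoid(kX)) = -log(1 + e^{-kX}), since dY/dX = e^{kX} / (1 + e^{kX}). The NumPy sketch below verifies this identity against a central finite difference:
>>> import numpy as np
>>> k = 2.0
>>> x = np.array([1.0, 2.0, 3.0])
>>> forward = lambda t: np.log1p(np.exp(k * t)) / k
>>> # Analytical log-Jacobian: -log(1 + exp(-k * X))
>>> analytic = -np.log1p(np.exp(-k * x))
>>> # Central finite difference as a numerical check
>>> eps = 1e-6
>>> numeric = np.log((forward(x + eps) - forward(x - eps)) / (2 * eps))
>>> print(np.allclose(analytic, numeric, atol=1e-5))
True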
- property sharpness
Return the sharpness parameter of the bijector.
- Output:
Tensor, the sharpness parameter of the bijector.
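For example, reusing the softplus instance constructed in the examples above (the exact printed representation of the returned Tensor may vary across MindSpore versions):
>>> s = softplus.sharpness  # the sharpness parameter, returned as a Tensor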