mindspore.nn.probability.distribution.LogNormal

class mindspore.nn.probability.distribution.LogNormal(loc=None, scale=None, seed=0, dtype=mindspore.float32, name='LogNormal')[source]

LogNormal distribution. A log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. It is constructed as the exponential transformation of a Normal distribution.

Parameters
  • loc (int, float, list, numpy.ndarray, Tensor) – The mean of the underlying Normal distribution. Default: None.

  • scale (int, float, list, numpy.ndarray, Tensor) – The standard deviation of the underlying Normal distribution. Default: None.

  • seed (int) – The seed used in sampling. The global seed is used if it is None. Default: 0.

  • dtype (mindspore.dtype) – The type of the distribution. Default: mindspore.float32.

  • name (str) – The name of the distribution. Default: 'LogNormal'.

Supported Platforms:

Ascend GPU

Note

scale must be greater than zero. dist_spec_args are loc and scale. dtype must be a float type because LogNormal distributions are continuous.
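
For reference in the examples below, the standard log-normal identities can be used to check the values returned by `mean` and `var`: if Y is normally distributed with mean `loc` and standard deviation `scale`, and X = exp(Y), then

    E[X]   = exp(loc + scale^2 / 2)
    Var[X] = (exp(scale^2) - 1) * exp(2 * loc + scale^2)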

Examples

>>> import mindspore
>>> import mindspore.context as context
>>> import mindspore.nn as nn
>>> import mindspore.nn.probability.distribution as msd
>>> from mindspore import Tensor
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # To initialize a LogNormal distribution with `loc` 3.0 and `scale` 4.0.
>>> n1 = msd.LogNormal(3.0, 4.0, dtype=mindspore.float32)
>>> # A LogNormal distribution can be initialized without arguments.
>>> # In this case, `loc` and `scale` must be passed in during function calls.
>>> n2 = msd.LogNormal(dtype=mindspore.float32)
>>>
>>> # Here are some tensors used below for testing
>>> value = Tensor([1.0, 2.0, 3.0], dtype=mindspore.float32)
>>> loc_a = Tensor([2.0], dtype=mindspore.float32)
>>> scale_a = Tensor([2.0, 2.0, 2.0], dtype=mindspore.float32)
>>> loc_b = Tensor([1.0], dtype=mindspore.float32)
>>> scale_b = Tensor([1.0, 1.5, 2.0], dtype=mindspore.float32)
>>>
>>> # The probability functions `prob`, `log_prob`, `cdf`, `log_cdf`,
>>> # `survival_function`, and `log_survival` all take the same
>>> # arguments, as follows.
>>> # Args:
>>> #     value (Tensor): the value to be evaluated.
>>> #     loc (Tensor): the loc of the distribution. Default: None. If `loc` is passed in as None,
>>> #         the mean of the underlying Normal distribution will be used.
>>> #     scale (Tensor): the scale of the distribution. Default: None. If `scale` is passed in as None,
>>> #         the standard deviation of the underlying Normal distribution will be used.
>>> # Examples of `prob`.
>>> # Similar calls can be made to other probability functions
>>> # by replacing 'prob' with the name of the function.
>>> ans = n1.prob(value)
>>> print(ans.shape)
(3,)
>>> # Evaluate with respect to distribution b.
>>> ans = n1.prob(value, loc_b, scale_b)
>>> print(ans.shape)
(3,)
>>> # `loc` and `scale` must be passed in during function calls since they were not given at construction.
>>> ans = n2.prob(value, loc_a, scale_a)
>>> print(ans.shape)
(3,)
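>>> # As noted above, the other probability functions take the same arguments;
>>> # a minimal sketch using `cdf`:
>>> ans = n1.cdf(value)
>>> print(ans.shape)
(3,)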
>>> # Functions `mean`, `sd`, `var`, and `entropy` have the same arguments.
>>> # Args:
>>> #     loc (Tensor): the loc of the distribution. Default: None. If `loc` is passed in as None,
>>> #         the mean of the underlying Normal distribution will be used.
>>> #     scale (Tensor): the scale of the distribution. Default: None. If `scale` is passed in as None,
>>> #         the standard deviation of the underlying Normal distribution will be used.
>>> # Example of `mean`. `sd`, `var`, and `entropy` are similar.
>>> ans = n1.mean()
>>> print(ans.shape)
()
>>> ans = n1.mean(loc_b, scale_b)
>>> print(ans.shape)
(3,)
>>> # `loc` and `scale` must be passed in during function calls since they were not given at construction.
>>> ans = n2.mean(loc_a, scale_a)
>>> print(ans.shape)
(3,)
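>>> # A minimal sketch of `sd`, which takes the same arguments as `mean`:
>>> ans = n1.sd(loc_b, scale_b)
>>> print(ans.shape)
(3,)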
>>> # The interfaces of `kl_loss` and `cross_entropy` are the same:
>>> # Args:
>>> #     dist (str): the type of the distribution. Only "LogNormal" is supported.
>>> #     loc_b (Tensor): the loc of distribution b.
>>> #     scale_b (Tensor): the scale of distribution b.
>>> #     loc_a (Tensor): the loc of distribution a. Default: None. If `loc_a` is passed in as None,
>>> #         the mean of the underlying Normal distribution will be used.
>>> #     scale_a (Tensor): the scale of distribution a. Default: None. If `scale_a` is passed in as None,
>>> #         the standard deviation of the underlying Normal distribution will be used.
>>> # Examples of `kl_loss`. `cross_entropy` is similar.
>>> ans = n1.kl_loss('LogNormal', loc_b, scale_b)
>>> print(ans.shape)
(3,)
>>> ans = n1.kl_loss('LogNormal', loc_b, scale_b, loc_a, scale_a)
>>> print(ans.shape)
(3,)
>>> # Additional `loc_a` and `scale_a` must be passed in since `loc` and `scale` were not given at construction.
>>> ans = n2.kl_loss('LogNormal', loc_b, scale_b, loc_a, scale_a)
>>> print(ans.shape)
(3,)
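>>> # A minimal sketch of `cross_entropy`, which takes the same arguments as `kl_loss`:
>>> ans = n1.cross_entropy('LogNormal', loc_b, scale_b)
>>> print(ans.shape)
(3,)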
>>> # Examples of `sample`.
>>> # Args:
>>> #     shape (tuple): the shape of the sample. Default: ().
>>> #     loc (Tensor): the loc of the distribution. Default: None. If `loc` is passed in as None,
>>> #         the mean of the underlying Normal distribution will be used.
>>> #     scale (Tensor): the scale of the distribution. Default: None. If `scale` is passed in as None,
>>> #         the standard deviation of the underlying Normal distribution will be used.
>>> ans = n1.sample()
>>> print(ans.shape)
()
>>> ans = n1.sample((2,3))
>>> print(ans.shape)
(2, 3)
>>> ans = n1.sample((2,3), loc_b, scale_b)
>>> print(ans.shape)
(2, 3, 3)
>>> ans = n2.sample((2,3), loc_a, scale_a)
>>> print(ans.shape)
(2, 3, 3)
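>>> # Because LogNormal is the exponential transform of a Normal distribution,
>>> # the logarithm of its samples follows the underlying Normal(loc, scale).
>>> # A minimal sketch; only shapes are printed since sampled values depend on the seed.
>>> import mindspore.ops as ops
>>> log_op = ops.Log()
>>> samples = n1.sample((10000,))
>>> log_samples = log_op(samples)
>>> print(log_samples.shape)
(10000,)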
property loc

Distribution parameter for the mean of the underlying (pre-transformed) Normal distribution, after casting to dtype.

property scale

Distribution parameter for the standard deviation of the underlying (pre-transformed) Normal distribution, after casting to dtype.
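
A minimal sketch of reading these parameters back, using the `n1` instance from the examples above (the scalar `loc` and `scale` given at construction are expected to come back as zero-dimensional tensors):

>>> print(n1.loc.shape)
()
>>> print(n1.scale.shape)
()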