mindspore.nn.probability.distribution.Uniform

class mindspore.nn.probability.distribution.Uniform(low=None, high=None, seed=None, dtype=mstype.float32, name='Uniform')[source]

Uniform Distribution. A Uniform distribution is a continuous distribution with the range \([a, b]\) and the probability density function:

\[f(x, a, b) = \frac{1}{b - a}, \quad a \le x \le b,\]

where \(a\) and \(b\) are the lower and upper bounds, respectively.
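As a quick sanity check, the density above can be evaluated in a few lines of plain Python (no MindSpore required; the helper name `uniform_pdf` is illustrative, not part of the API):

```python
def uniform_pdf(x, a, b):
    """Density of Uniform(a, b): 1 / (b - a) inside [a, b], 0 elsewhere."""
    if b <= a:
        raise ValueError("`a` must be strictly less than `b`.")
    return 1.0 / (b - a) if a <= x <= b else 0.0

print(uniform_pdf(0.5, 0.0, 1.0))  # 1.0, matching u1.prob(value) below
print(uniform_pdf(3.0, 0.0, 2.0))  # 0.0, outside the support
```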

Parameters
  • low (int, float, list, numpy.ndarray, Tensor) – The lower bound of the distribution. Default: None.

  • high (int, float, list, numpy.ndarray, Tensor) – The upper bound of the distribution. Default: None.

  • seed (int) – The random seed used in sampling. If None, the global seed is used. Default: None.

  • dtype (mindspore.dtype) – The type of the event samples. Default: mstype.float32.

  • name (str) – The name of the distribution. Default: ‘Uniform’.

Inputs and Outputs of APIs:

The accessible APIs of the Uniform distribution are defined in the base class, including:

  • prob, log_prob, cdf, log_cdf, survival_function, and log_survival

  • mean, sd, var, and entropy

  • kl_loss and cross_entropy

  • sample

For more details of all APIs, including their inputs and outputs, please refer to mindspore.nn.probability.distribution.Distribution and the examples below.

Supported Platforms:

Ascend GPU

Note

low must be strictly less than high. The dist_spec_args are high and low. dtype must be a float type because Uniform distributions are continuous.

Raises
  • ValueError – When high <= low.

  • TypeError – When the input dtype is not a subclass of float.
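The two error conditions above can be mirrored in plain Python; the helper below is a hypothetical sketch of the documented checks, not MindSpore's actual validation code:

```python
def check_uniform_args(low, high, dtype="float32"):
    """Illustrative sketch of the documented argument checks for Uniform."""
    if not dtype.startswith("float"):
        # Uniform is continuous, so a float dtype is required.
        raise TypeError("`dtype` must be a float type.")
    if high <= low:
        raise ValueError("`low` must be strictly less than `high`.")

check_uniform_args(0.0, 1.0)  # valid bounds: passes silently
try:
    check_uniform_args(2.0, 1.0)
except ValueError as e:
    print(e)  # `low` must be strictly less than `high`.
```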

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import mindspore.nn.probability.distribution as msd
>>> from mindspore import Tensor
>>> # To initialize a Uniform distribution with the lower bound 0.0 and the upper bound 1.0.
>>> u1 = msd.Uniform(0.0, 1.0, dtype=mindspore.float32)
>>> # A Uniform distribution can be initialized without arguments.
>>> # In this case, `high` and `low` must be passed in through arguments during function calls.
>>> u2 = msd.Uniform(dtype=mindspore.float32)
>>>
>>> # Here are some tensors used below for testing
>>> value = Tensor([0.5, 0.8], dtype=mindspore.float32)
>>> low_a = Tensor([0., 0.], dtype=mindspore.float32)
>>> high_a = Tensor([2.0, 4.0], dtype=mindspore.float32)
>>> low_b = Tensor([-1.5], dtype=mindspore.float32)
>>> high_b = Tensor([2.5, 5.], dtype=mindspore.float32)
>>> # Private interfaces of probability functions corresponding to public interfaces, including
>>> # `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`, have the same arguments.
>>> # Args:
>>> #     value (Tensor): the value to be evaluated.
>>> #     low (Tensor): the lower bound of the distribution. Default: self.low.
>>> #     high (Tensor): the upper bound of the distribution. Default: self.high.
>>> # Examples of `prob`.
>>> # Similar calls can be made to other probability functions
>>> # by replacing 'prob' by the name of the function.
>>> ans = u1.prob(value)
>>> print(ans.shape)
(2,)
>>> # Evaluate with respect to distribution b.
>>> ans = u1.prob(value, low_b, high_b)
>>> print(ans.shape)
(2,)
>>> # `high` and `low` must be passed in during function calls.
>>> ans = u2.prob(value, low_a, high_a)
>>> print(ans.shape)
(2,)
>>> # Functions `mean`, `sd`, `var`, and `entropy` have the same arguments.
>>> # Args:
>>> #     low (Tensor): the lower bound of the distribution. Default: self.low.
>>> #     high (Tensor): the upper bound of the distribution. Default: self.high.
>>> # Examples of `mean`. `sd`, `var`, and `entropy` are similar.
>>> ans = u1.mean() # return 0.5
>>> print(ans.shape)
()
>>> ans = u1.mean(low_b, high_b) # return (low_b + high_b) / 2
>>> print(ans.shape)
(2,)
>>> # `high` and `low` must be passed in during function calls.
>>> ans = u2.mean(low_a, high_a)
>>> print(ans.shape)
(2,)
>>> # Interfaces of 'kl_loss' and 'cross_entropy' are the same.
>>> # Args:
>>> #     dist (str): the type of the distributions. Should be "Uniform" in this case.
>>> #     low_b (Tensor): the lower bound of distribution b.
>>> #     high_b (Tensor): the upper bound of distribution b.
>>> #     low_a (Tensor): the lower bound of distribution a. Default: self.low.
>>> #     high_a (Tensor): the upper bound of distribution a. Default: self.high.
>>> # Examples of `kl_loss`. `cross_entropy` is similar.
>>> ans = u1.kl_loss('Uniform', low_b, high_b)
>>> print(ans.shape)
(2,)
>>> ans = u1.kl_loss('Uniform', low_b, high_b, low_a, high_a)
>>> print(ans.shape)
(2,)
>>> # Additional `high` and `low` must be passed in.
>>> ans = u2.kl_loss('Uniform', low_b, high_b, low_a, high_a)
>>> print(ans.shape)
(2,)
>>> # Examples of `sample`.
>>> # Args:
>>> #     shape (tuple): the shape of the sample. Default: ()
>>> #     low (Tensor): the lower bound of the distribution. Default: self.low.
>>> #     high (Tensor): the upper bound of the distribution. Default: self.high.
>>> ans = u1.sample()
>>> print(ans.shape)
()
>>> ans = u1.sample((2,3))
>>> print(ans.shape)
(2, 3)
>>> ans = u1.sample((2,3), low_b, high_b)
>>> print(ans.shape)
(2, 3, 2)
>>> ans = u2.sample((2,3), low_a, high_a)
>>> print(ans.shape)
(2, 3, 2)
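The scalar results quoted in the comments above (e.g. u1.mean() returning 0.5) follow from the standard closed-form expressions for Uniform(a, b). A minimal pure-Python check, independent of MindSpore (the helper names are illustrative):

```python
import math

def uniform_stats(a, b):
    """Closed-form mean, sd, var, and entropy of Uniform(a, b)."""
    mean = (a + b) / 2.0
    var = (b - a) ** 2 / 12.0
    sd = math.sqrt(var)
    entropy = math.log(b - a)
    return mean, sd, var, entropy

def uniform_kl(a_low, a_high, b_low, b_high):
    """KL(Uniform(a) || Uniform(b)), valid when [a_low, a_high] lies inside [b_low, b_high]."""
    return math.log((b_high - b_low) / (a_high - a_low))

mean, sd, var, entropy = uniform_stats(0.0, 1.0)
print(mean)     # 0.5, matching u1.mean() above
print(var)      # 0.0833... (i.e. 1/12)
print(entropy)  # 0.0, since log(1) = 0
```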
extend_repr()[source]

Return the string representation of the instance.

property high

Return the upper bound of the distribution after casting to dtype.

Output:

Tensor, the upper bound of the distribution.

property low

Return the lower bound of the distribution after casting to dtype.

Output:

Tensor, the lower bound of the distribution.