# mindspore.common.initializer

Initializer for cell parameters.

class mindspore.common.initializer.Constant(value)[source]

Generates an array with a constant value in order to initialize a tensor.

Parameters

value (Union[int, numpy.ndarray]) – The value to initialize.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer
>>> tensor1 = initializer(0, [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer(5, [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.Dirac(groups=1)[source]

Initializes the input tensor with the Dirac delta function. It tries to preserve the identity of the input in convolution layers. For group convolution, each group of channels is preserved separately.

Parameters

groups (int) – The number of groups in the convolution layer. Default: 1.

Raises
• ValueError – If the dimension of the initialized tensor is not in [3, 4, 5].

• ValueError – If the first dimension of the initialized tensor is not divisible by groups.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Dirac
>>> tensor1 = initializer(Dirac(groups=2), [6, 4, 3, 3], mindspore.float32)
>>> tensor2 = initializer("dirac", [6, 4, 3, 3], mindspore.float32)
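
The effect of a Dirac init can be sketched in plain NumPy (an illustration of the idea, not MindSpore's implementation): within each group, input channel k is routed to output channel k through a 1 at the kernel center.

```python
import numpy as np

def dirac_init(shape, groups=1):
    """Sketch of a Dirac delta init for a conv weight of shape
    (out_channels, in_channels, kH, kW). Within each group, input
    channel k maps to output channel k through the kernel center."""
    out_c, in_c, kh, kw = shape
    assert out_c % groups == 0, "out_channels must be divisible by groups"
    w = np.zeros(shape, dtype=np.float32)
    out_per_group = out_c // groups
    for g in range(groups):
        for k in range(min(out_per_group, in_c)):
            # place a single 1 at the spatial center of the kernel
            w[g * out_per_group + k, k, kh // 2, kw // 2] = 1.0
    return w

w = dirac_init((6, 4, 3, 3), groups=2)
```

With groups=2, each group of 3 output channels carries an identity mapping from the first 3 input channels, so convolving with this weight reproduces those channels.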

class mindspore.common.initializer.HeNormal(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Generates an array with values sampled from the He (Kaiming) normal distribution $${N}(0, \text{sigma}^2)$$ in order to initialize a tensor, where

$sigma = \frac{gain}{\sqrt{N}}$

$$gain$$ is an optional scaling factor, and $$N$$ is the number of input units of the weight tensor if mode is ‘fan_in’, or the number of output units if mode is ‘fan_out’.

For details of the HeNormal algorithm, please check https://arxiv.org/abs/1502.01852.

Parameters
• negative_slope (int, float, bool) – The negative slope of the rectifier used after this layer (only used when nonlinearity is ‘leaky_relu’). Default: 0.

• mode (str) – Either ‘fan_in’ or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backward pass. Default: ‘fan_in’.

• nonlinearity (str) – The non-linear function; recommended for use only with ‘relu’ or ‘leaky_relu’. Default: ‘leaky_relu’.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeNormal
>>> tensor1 = initializer(HeNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_normal', [1, 2, 3], mindspore.float32)
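
The sigma formula above can be checked numerically with a NumPy sketch (the gain for leaky_relu, $$\sqrt{2/(1+a^2)}$$, and the fan_in value used here are illustrative assumptions, not MindSpore output):

```python
import numpy as np

def he_normal_sigma(fan, negative_slope=0.0):
    """sigma = gain / sqrt(N), using the leaky_relu gain
    gain = sqrt(2 / (1 + negative_slope**2))."""
    gain = np.sqrt(2.0 / (1.0 + negative_slope ** 2))
    return gain / np.sqrt(fan)

# suppose fan_in = 6 (e.g. 2 input channels times a receptive field of 3)
sigma = he_normal_sigma(6)

# a large sample drawn with this sigma has matching empirical std
rng = np.random.default_rng(0)
sample = rng.normal(0.0, sigma, size=100_000)
```

With negative_slope=0 the gain is $$\sqrt{2}$$, so sigma is $$\sqrt{2/6} \approx 0.577$$.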

class mindspore.common.initializer.HeUniform(negative_slope=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Generates an array with values sampled from the He (Kaiming) uniform distribution $${U}(-\text{boundary}, \text{boundary})$$ in order to initialize a tensor, where

$boundary = \sqrt{\frac{6}{(1 + a^2) \times \text{fan_in}}}$

which is the bound of the HeUniform distribution; here $$a$$ is the negative_slope of the rectifier and $$\text{fan_in}$$ is the number of input units of the weight tensor.

For details of HeUniform algorithm, please check https://arxiv.org/abs/1502.01852.

Parameters
• negative_slope (int, float, bool) – The negative slope of the rectifier used after this layer (only used when nonlinearity is ‘leaky_relu’). Default: 0.

• mode (str) – Either ‘fan_in’ or ‘fan_out’. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backward pass. Default: ‘fan_in’.

• nonlinearity (str) – The non-linear function; recommended for use only with ‘relu’ or ‘leaky_relu’. Default: ‘leaky_relu’.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, HeUniform
>>> tensor1 = initializer(HeUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('he_uniform', [1, 2, 3], mindspore.float32)
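
The boundary formula can likewise be sketched in NumPy (the fan_in value is an illustrative assumption):

```python
import numpy as np

def he_uniform_bound(fan_in, negative_slope=0.0):
    """boundary = sqrt(6 / ((1 + a^2) * fan_in))"""
    return np.sqrt(6.0 / ((1.0 + negative_slope ** 2) * fan_in))

# with fan_in = 6 and negative_slope = 0 the bound is exactly 1.0
bound = he_uniform_bound(6)

# samples from U(-bound, bound) stay inside the bound, with std bound/sqrt(3)
rng = np.random.default_rng(0)
sample = rng.uniform(-bound, bound, size=100_000)
```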

class mindspore.common.initializer.Identity(**kwargs)[source]

Initialize a 2-dimensional identity matrix to fill the input tensor.

Raises

ValueError – If the input tensor is not 2-dimensional.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Identity
>>> tensor1 = initializer(Identity(), [2, 3], mindspore.float32)
>>> tensor2 = initializer('identity', [2, 3], mindspore.float32)

class mindspore.common.initializer.Initializer(**kwargs)[source]

The abstract base class of the initializer.

Parameters

kwargs (dict) – Keyword arguments for Initializer.
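
The pattern this base class encodes, where concrete subclasses fill a NumPy buffer in place through an internal hook, can be sketched standalone. This mimic and its `_initialize` hook name are illustrative assumptions about the pattern, not MindSpore's exact internals:

```python
import numpy as np

class Initializer:
    """Standalone mimic of the base-class pattern: subclasses
    implement _initialize to fill a NumPy buffer in place."""
    def __init__(self, **kwargs):
        self._kwargs = kwargs

    def __call__(self, arr):
        self._initialize(arr)
        return arr

    def _initialize(self, arr):
        raise NotImplementedError

class RangeInit(Initializer):
    """Hypothetical subclass: fills the buffer with 0, 1, 2, ..."""
    def _initialize(self, arr):
        arr[...] = np.arange(arr.size).reshape(arr.shape)

buf = np.zeros((2, 3), dtype=np.float32)
RangeInit()(buf)
```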

class mindspore.common.initializer.Normal(sigma=0.01, mean=0.0)[source]

Generates an array with values sampled from the normal distribution $${N}(\text{mean}, \text{sigma}^2)$$ in order to initialize a tensor. The probability density function is

$f(x) = \frac{1}{\sqrt{2\pi}\,\text{sigma}} \exp\left(-\frac{(x - \text{mean})^2}{2\,\text{sigma}^2}\right)$

Parameters
• sigma (float) – The standard deviation of Normal distribution. Default: 0.01.

• mean (float) – The mean of Normal distribution. Default: 0.0.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Normal
>>> tensor1 = initializer(Normal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('normal', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.One(**kwargs)[source]

Generates an array with constant value of one in order to initialize a tensor.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('ones', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.Orthogonal(gain=1.0)[source]

Initialize a (semi-)orthogonal matrix to fill the input tensor. The input tensor must have at least 2 dimensions; if it has more than 2, the trailing dimensions are flattened.

Parameters

gain (float) – An optional scaling factor. Default: 1.

Raises

ValueError – If the dimension of input tensor is less than 2.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Orthogonal
>>> tensor1 = initializer(Orthogonal(gain=2.), [2, 3, 4], mindspore.float32)
>>> tensor2 = initializer('orthogonal', [2, 3, 4], mindspore.float32)
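
A common way to build such a matrix, sketched here in NumPy under the assumption that a QR decomposition of a Gaussian matrix is used (MindSpore's exact procedure may differ): the trailing dimensions are flattened, the rows of the result are mutually orthogonal, and gain scales their norm.

```python
import numpy as np

def orthogonal_init(shape, gain=1.0, seed=0):
    """Sketch of a (semi-)orthogonal init: flatten trailing dims,
    QR-decompose a Gaussian matrix, and reshape back."""
    rows, cols = shape[0], int(np.prod(shape[1:]))
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)
    # fix the sign ambiguity of QR so q is well distributed
    q *= np.sign(np.diag(r))
    if rows < cols:
        q = q.T
    return gain * q.reshape(shape)

w = orthogonal_init((2, 3, 4), gain=2.0)
```

Flattened to 2 x 12, the rows of w satisfy $$W W^T = gain^2 I$$.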

class mindspore.common.initializer.Sparse(sparsity, sigma=0.01)[source]

Initialize a 2-dimensional sparse matrix to fill the input tensor. The non-zero positions are filled with values sampled from the normal distribution $${N}(0, \text{sigma}^2)$$.

Parameters
• sparsity (float) – The fraction of elements being set to zero in each column.

• sigma (float) – The standard deviation of the normal distribution. Default: 0.01.

Raises

ValueError – If the input tensor is not 2-dimensional.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Sparse
>>> tensor1 = initializer(Sparse(sparsity=0.1, sigma=0.01), [5, 8], mindspore.float32)
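
A NumPy sketch of the idea (the rounding of the per-column zero count is an assumption of this illustration): in each column, a sparsity fraction of the entries is zeroed and the rest are normal draws.

```python
import numpy as np

def sparse_init(shape, sparsity, sigma=0.01, seed=0):
    """Sketch of a sparse init: in each column, a `sparsity`
    fraction of entries is zeroed; the rest come from N(0, sigma^2)."""
    rows, cols = shape
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, size=shape)
    n_zero = int(np.ceil(sparsity * rows))
    for j in range(cols):
        # pick distinct rows to zero out in this column
        zero_rows = rng.choice(rows, size=n_zero, replace=False)
        w[zero_rows, j] = 0.0
    return w

w = sparse_init((5, 8), sparsity=0.2, sigma=0.01)
```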

class mindspore.common.initializer.TruncatedNormal(sigma=0.01)[source]

Generates an array with values sampled from Truncated Normal distribution in order to initialize a tensor.

Parameters

sigma (float) – The standard deviation of Truncated Normal distribution. Default: 0.01.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, TruncatedNormal
>>> tensor1 = initializer(TruncatedNormal(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('truncatedNormal', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.Uniform(scale=0.07)[source]

Generates an array with values sampled from Uniform distribution $${U}(-\text{scale}, \text{scale})$$ in order to initialize a tensor.

Parameters

scale (float) – The bound of the Uniform distribution. Default: 0.07.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Uniform
>>> tensor1 = initializer(Uniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('uniform', [1, 2, 3], mindspore.float32)

class mindspore.common.initializer.VarianceScaling(scale=1.0, mode='fan_in', distribution='truncated_normal')[source]

Randomly initializes an array with scaling to fill the input tensor. When distribution is ‘truncated_normal’ or ‘untruncated_normal’, the values are sampled from a truncated or untruncated normal distribution with a mean of 0 and a scaled standard deviation

$stddev = \sqrt{\frac{scale}{n}}$

where $$n$$ is the number of input units if mode is ‘fan_in’, the number of output units if mode is ‘fan_out’, and the average of fan_in and fan_out if mode is ‘fan_avg’. When distribution is ‘uniform’, the values are sampled from a uniform distribution within the limits $$[-\sqrt{3 \times scale/n}, \sqrt{3 \times scale/n}]$$.

Parameters
• scale (float) – The scaling factor. Default: 1.0.

• mode (str) – Should be ‘fan_in’, ‘fan_out’ or ‘fan_avg’. Default: ‘fan_in’.

• distribution (str) – The type of distribution chosen to sample values. Default: ‘truncated_normal’.

Raises
• ValueError – If scale is not greater than 0.

• ValueError – If mode is not fan_in, fan_out or fan_avg.

• ValueError – If distribution is not uniform, truncated_normal or untruncated_normal.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, VarianceScaling
>>> tensor1 = initializer(VarianceScaling(scale=1.0, mode='fan_out',
...                                       distribution='untruncated_normal'), [2, 3], mindspore.float32)
>>> tensor2 = initializer('varianceScaling', [2, 3], mindspore.float32)
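
The stddev formula for the three modes can be sketched numerically (the convention that a 2-D weight of shape (out, in) has fan_in = in and fan_out = out is an assumption of this illustration):

```python
import numpy as np

def variance_scaling_std(shape, scale=1.0, mode='fan_in'):
    """stddev = sqrt(scale / n), where n depends on mode."""
    fan_out, fan_in = shape[0], shape[1]  # assuming an (out, in) weight
    n = {'fan_in': fan_in,
         'fan_out': fan_out,
         'fan_avg': (fan_in + fan_out) / 2}[mode]
    return np.sqrt(scale / n)

std_in = variance_scaling_std((2, 3))                   # n = 3
std_avg = variance_scaling_std((2, 3), mode='fan_avg')  # n = 2.5
```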

class mindspore.common.initializer.XavierUniform(gain=1)[source]

Generates an array with values sampled from the Xavier uniform distribution $${U}(-\text{boundary}, \text{boundary})$$ in order to initialize a tensor, where:

$boundary = gain \times \sqrt{\frac{6}{n_{in} + n_{out}}}$

• $$gain$$ is an optional scaling factor,

• $$n_{in}$$ is the number of input units in the weight tensor,

• $$n_{out}$$ is the number of output units in the weight tensor.

For details of XavierUniform algorithm, please check http://proceedings.mlr.press/v9/glorot10a.html.

Parameters

gain (float) – An optional scaling factor. Default: 1.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, XavierUniform
>>> tensor1 = initializer(XavierUniform(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('xavier_uniform', [1, 2, 3], mindspore.float32)
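
The boundary formula can be sketched in NumPy (the fan values below are illustrative, not derived from any particular tensor shape):

```python
import numpy as np

def xavier_uniform_bound(n_in, n_out, gain=1.0):
    """boundary = gain * sqrt(6 / (n_in + n_out))"""
    return gain * np.sqrt(6.0 / (n_in + n_out))

# e.g. n_in = 6, n_out = 1 gives a bound of sqrt(6/7)
bound = xavier_uniform_bound(6, 1)

# samples from U(-bound, bound) stay inside the bound
rng = np.random.default_rng(0)
sample = rng.uniform(-bound, bound, size=10_000)
```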

class mindspore.common.initializer.Zero(**kwargs)[source]

Generates an array with constant value of zero in order to initialize a tensor.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, Zero
>>> tensor1 = initializer(Zero(), [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer('zeros', [1, 2, 3], mindspore.float32)

mindspore.common.initializer.initializer(init, shape=None, dtype=mstype.float32)[source]

Create and initialize a tensor.

Parameters
• init (Union[Tensor, str, Initializer, numbers.Number]) –

Initialize value.

• str: init should be the alias of a class inheriting from Initializer; the corresponding class will be called in practice. The value of init can be ‘normal’, ‘ones’, ‘zeros’, etc.

• Initializer: The init should be the class inheriting from Initializer to initialize tensor.

• numbers.Number: Constant will be called to initialize the tensor.

• shape (Union[tuple, list, int]) – The shape of the initialized tensor. Default: None.

• dtype (mindspore.dtype) – The type of data in initialized tensor. Default: mindspore.float32.

Returns

Tensor, the initialized tensor.

Raises
• TypeError – If the type of the argument init is not correct.

• ValueError – If the shape of the tensor passed through init does not match the shape argument.

Examples

>>> import mindspore
>>> from mindspore.common.initializer import initializer, One
>>> tensor1 = initializer('ones', [1, 2, 3], mindspore.float32)
>>> tensor2 = initializer(One(), [1, 2, 3], mindspore.float32)
>>> tensor3 = initializer(0, [1, 2, 3], mindspore.float32)