mindspore.ops.operations

Primitive operator classes.

A collection of operators for building neural networks or computing functions.

class mindspore.ops.operations.ACos(*args, **kwargs)[source]

Computes arccosine of input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Examples

>>> acos = P.ACos()
>>> input_x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
>>> output = acos(input_x)
class mindspore.ops.operations.Abs(*args, **kwargs)[source]

Returns absolute value of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor. The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as the input_x.

Examples

>>> input_x = Tensor(np.array([-1.0, 1.0, 0.0]), mindspore.float32)
>>> abs = P.Abs()
>>> abs(input_x)
[1.0, 1.0, 0.0]
class mindspore.ops.operations.Acosh(*args, **kwargs)[source]

Compute inverse hyperbolic cosine of x element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The data type of input_x must be a number, and each element of input_x must be greater than or equal to 1.

Outputs:

Tensor, has the same shape as input_x.

Examples

>>> acosh = P.Acosh()
>>> input_x = Tensor(np.array([1.0, 1.5, 3.0, 100.0]), mindspore.float32)
>>> output = acosh(input_x)
class mindspore.ops.operations.Adam(*args, **kwargs)[source]

Updates gradients by Adaptive Moment Estimation (Adam) algorithm.

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization.

The updating formulas are as follows,

\[\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}\]

\(m\) represents the 1st moment vector, \(v\) represents the 2nd moment vector, \(g\) represents gradient, \(l\) represents scaling factor lr, \(\beta_1, \beta_2\) represent beta1 and beta2, \(t\) represents updating step while \(\beta_1^t\) and \(\beta_2^t\) represent beta1_power and beta2_power, \(\alpha\) represents learning_rate, \(w\) represents var, \(\epsilon\) represents epsilon.

Parameters
  • use_locking (bool) – Whether to enable a lock to protect updating variable tensors. If True, updating of the var, m, and v tensors will be protected by a lock. If False, the result is unpredictable. Default: False.

  • use_nesterov (bool) – Whether to use Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If True, updates the gradients using NAG. If False, updates the gradients without using NAG. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • m (Tensor) - The 1st moment vector in the updating formula. Has the same type as var.

  • v (Tensor) - The 2nd moment vector in the updating formula. Mean square gradients, has the same type as var.

  • beta1_power (float) - \(\beta_1^t\) in the updating formula.

  • beta2_power (float) - \(\beta_2^t\) in the updating formula.

  • lr (Union[float, Tensor, Iterable]) - \(l\) in the updating formula. Iterable type is used for the dynamic learning rate.

  • beta1 (float) - The exponential decay rate for the 1st moment estimates.

  • beta2 (float) - The exponential decay rate for the 2nd moment estimates.

  • epsilon (float) - Term added to the denominator to improve numerical stability.

  • gradient (Tensor) - Gradients.

Outputs:

Tuple of 3 Tensors, the updated parameters.

  • var (Tensor) - The same shape and data type as var.

  • m (Tensor) - The same shape and data type as m.

  • v (Tensor) - The same shape and data type as v.

Examples

Please refer to the usage in nn.Adam.
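
For direct use of the primitive, the following is a minimal sketch of calling P.Adam inside a Cell, assuming scalar hyper-parameters and hand-initialized moment Parameters; the class name AdamNet and the chosen values are illustrative, not part of the API:

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Parameter, Tensor
>>> from mindspore.ops import operations as P
>>> class AdamNet(nn.Cell):
>>>     def __init__(self):
>>>         super(AdamNet, self).__init__()
>>>         self.apply_adam = P.Adam()
>>>         # var holds the weights; m and v hold the two moment vectors
>>>         self.var = Parameter(Tensor(np.ones((2, 2)).astype(np.float32)), name="var")
>>>         self.m = Parameter(Tensor(np.zeros((2, 2)).astype(np.float32)), name="m")
>>>         self.v = Parameter(Tensor(np.zeros((2, 2)).astype(np.float32)), name="v")
>>>
>>>     def construct(self, grad):
>>>         # inputs follow the order documented above: var, m, v, beta1_power,
>>>         # beta2_power, lr, beta1, beta2, epsilon, gradient
>>>         return self.apply_adam(self.var, self.m, self.v, 0.9, 0.999, 0.001,
>>>                                0.9, 0.999, 1e-8, grad)
>>>
>>> net = AdamNet()
>>> gradient = Tensor(np.ones((2, 2)).astype(np.float32))
>>> output = net(gradient)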

class mindspore.ops.operations.AddN(*args, **kwargs)[source]

Computes addition of all input tensors element-wise.

All input tensors should have the same shape.

Inputs:
  • input_x (Union(tuple[Tensor], list[Tensor])) - The input tuple or list is made up of multiple tensors whose dtype is number or bool to be added together.

Outputs:

Tensor, has the same shape and dtype as each entry of the input_x.

Examples

>>> class NetAddN(nn.Cell):
>>>     def __init__(self):
>>>         super(NetAddN, self).__init__()
>>>         self.addN = P.AddN()
>>>
>>>     def construct(self, *z):
>>>         return self.addN(z)
>>>
>>> net = NetAddN()
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> input_y = Tensor(np.array([4, 5, 6]), mindspore.float32)
>>> net(input_x, input_y, input_x, input_y)
Tensor([10., 14., 18.], shape=(3,), dtype=mindspore.float32)
class mindspore.ops.operations.AllGather(*args, **kwargs)[source]

Gathers tensors from the specified communication group.

Note

Tensor must have the same shape and format in all processes participating in the collective.

Parameters

group (str) – The communication group to work on. Default: “hccl_world_group”.

Raises
  • TypeError – If group is not a string.

  • ValueError – If the local rank id of the calling process in the group is larger than the group’s rank size.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. If the number of devices in the group is N, then the shape of output is \((N, x_1, x_2, ..., x_R)\).

Examples

>>> from mindspore.communication import init
>>> import mindspore.ops.operations as P
>>> init('nccl')
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.allgather = P.AllGather(group="nccl_world_group")
>>>
>>>     def construct(self, x):
>>>         return self.allgather(x)
>>>
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
class mindspore.ops.operations.AllReduce(*args, **kwargs)[source]

Reduces the tensor data across all devices in such a way that all devices will get the same final result.

Note

The operation of AllReduce does not support “prod” currently. Tensors must have the same shape and format in all processes participating in the collective.

Parameters
  • op (str) – Specifies an operation used for element-wise reductions, like sum, max, min. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Raises
  • TypeError – If any of op and group is not a string, fusion is not an integer, or the input’s dtype is bool.

  • ValueError – If op is “prod”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the specified operation.

Examples

>>> from mindspore.communication import init
>>> import mindspore.ops.operations as P
>>> init('nccl')
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.allreduce_sum = P.AllReduce(ReduceOp.SUM, group="nccl_world_group")
>>>
>>>     def construct(self, x):
>>>         return self.allreduce_sum(x)
>>>
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
vm_impl(x)[source]

Implementation for VM mode.

class mindspore.ops.operations.ApplyCenteredRMSProp(*args, **kwargs)[source]

Optimizer that implements the centered RMSProp algorithm. Please refer to the usage in source code of nn.RMSProp.

Note

Update var according to the centered RMSProp algorithm.

\[g_{t} = \rho g_{t-1} + (1 - \rho)\nabla Q_{i}(w)\]
\[s_{t} = \rho s_{t-1} + (1 - \rho)(\nabla Q_{i}(w))^2\]
\[m_{t} = \beta m_{t-1} + \frac{\eta} {\sqrt{s_{t} - g_{t}^2 + \epsilon}} \nabla Q_{i}(w)\]
\[w = w - m_{t}\]

where \(w\) represents var, which will be updated; \(g_{t}\) represents mean_gradient and \(g_{t-1}\) is its value at the previous step; \(s_{t}\) represents mean_square and \(s_{t-1}\) is its value at the previous step; \(m_{t}\) represents moment and \(m_{t-1}\) is its value at the previous step; \(\rho\) represents decay; \(\beta\) is the momentum term, representing momentum; \(\epsilon\) is a smoothing term to avoid division by zero, representing epsilon; \(\eta\) represents learning_rate; \(\nabla Q_{i}(w)\) represents grad.

Parameters

use_locking (bool) – Enable a lock to protect the update of variable tensors. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • mean_gradient (Tensor) - Mean gradients, must have the same type as var.

  • mean_square (Tensor) - Mean square gradients, must have the same type as var.

  • moment (Tensor) - Delta of var, must have the same type as var.

  • grad (Tensor) - Gradients, must have the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate.

  • decay (float) - Decay rate.

  • momentum (float) - Momentum.

  • epsilon (float) - Ridge term.

Outputs:

Tensor, parameters to be updated.

Examples

>>> centered_rms_prop = P.ApplyCenteredRMSProp()
>>> input_x = Tensor(np.random.randint(0, 256, (3, 3)), mindspore.float32)
>>> mean_grad = Tensor(np.random.randint(-8, 8, (3, 3)), mindspore.float32)
>>> mean_square = Tensor(np.random.randint(0, 256, (3, 3)), mindspore.float32)
>>> moment = Tensor(np.random.randn(3, 3), mindspore.float32)
>>> grad = Tensor(np.random.randint(-32, 16, (3, 3)), mindspore.float32)
>>> learning_rate = 0.9
>>> decay = 0.0
>>> momentum = 1e-10
>>> epsilon = 0.001
>>> result = centered_rms_prop(input_x, mean_grad, mean_square, moment, grad,
>>>                    learning_rate, decay, momentum, epsilon)
class mindspore.ops.operations.ApplyFtrl(*args, **kwargs)[source]

Update relevant entries according to the FTRL scheme.

Parameters

use_locking (bool) – Use locks for the update operation if True. Default: False.

Inputs:
  • var (Tensor) - The variable to be updated.

  • accum (Tensor) - The accum to be updated, must be the same type and shape as var.

  • linear (Tensor) - The linear to be updated, must be the same type and shape as var.

  • grad (Tensor) - Gradient.

  • lr (Union[Number, Tensor]) - The learning rate value, must be positive. Default: 0.001.

  • l1 (Union[Number, Tensor]) - l1 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • l2 (Union[Number, Tensor]) - l2 regularization strength, must be greater than or equal to zero. Default: 0.0.

  • lr_power (Union[Number, Tensor]) - Learning rate power, which controls how the learning rate decreases during training; must be less than or equal to zero. A fixed learning rate is used if lr_power is zero. Default: -0.5.

Outputs:

Tensor, representing the updated var.

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Parameter
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> class ApplyFtrlNet(nn.Cell):
>>>     def __init__(self):
>>>         super(ApplyFtrlNet, self).__init__()
>>>         self.apply_ftrl = P.ApplyFtrl()
>>>         self.lr = 0.001
>>>         self.l1 = 0.0
>>>         self.l2 = 0.0
>>>         self.lr_power = -0.5
>>>         self.var = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="var")
>>>         self.accum = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="accum")
>>>         self.linear = Parameter(Tensor(np.random.rand(3, 3).astype(np.float32)), name="linear")
>>>
>>>     def construct(self, grad):
>>>         out = self.apply_ftrl(self.var, self.accum, self.linear, grad, self.lr, self.l1, self.l2,
>>>                               self.lr_power)
>>>         return out
>>>
>>> net = ApplyFtrlNet()
>>> input_x = Tensor(np.random.randint(-4, 4, (3, 3)), mindspore.float32)
>>> result = net(input_x)
[[0.67455846   0.14630564   0.160499  ]
 [0.16329421   0.00415689   0.05202988]
 [0.18672481   0.17418946   0.36420345]]
class mindspore.ops.operations.ApplyMomentum(*args, **kwargs)[source]

Optimizer that implements the Momentum algorithm.

Refer to the paper On the importance of initialization and momentum in deep learning for more details.

Parameters
  • use_locking (bool) – Enable a lock to protect the update of variable and accumulation tensors. Default: False.

  • use_nesterov (bool) – Enable Nesterov momentum. Default: False.

  • gradient_scale (float) – The scale of the gradient. Default: 1.0.

Inputs:
  • variable (Tensor) - Weights to be updated.

  • accumulation (Tensor) - Accumulated gradient value by moment weight.

  • learning_rate (float) - Learning rate.

  • gradient (Tensor) - Gradients.

  • momentum (float) - Momentum.

Outputs:

Tensor, parameters to be updated.

Examples

Please refer to the usage in nn.ApplyMomentum.
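
As a minimal sketch of calling the primitive directly (the class name MomentumNet and the chosen values are illustrative; the inputs follow the order documented above):

>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> from mindspore import Parameter, Tensor
>>> from mindspore.ops import operations as P
>>> class MomentumNet(nn.Cell):
>>>     def __init__(self):
>>>         super(MomentumNet, self).__init__()
>>>         self.apply_momentum = P.ApplyMomentum()
>>>         self.variable = Parameter(Tensor(np.ones((2, 2)).astype(np.float32)), name="variable")
>>>         self.accumulation = Parameter(Tensor(np.zeros((2, 2)).astype(np.float32)), name="accumulation")
>>>
>>>     def construct(self, grad):
>>>         # variable, accumulation, learning_rate, gradient, momentum
>>>         return self.apply_momentum(self.variable, self.accumulation, 0.01, grad, 0.9)
>>>
>>> net = MomentumNet()
>>> gradient = Tensor(np.ones((2, 2)).astype(np.float32))
>>> output = net(gradient)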

class mindspore.ops.operations.ApplyRMSProp(*args, **kwargs)[source]

Optimizer that implements the Root Mean Square Propagation (RMSProp) algorithm. Please refer to the usage in source code of nn.RMSProp.

Note

Update var according to the RMSProp algorithm.

\[s_{t} = \rho s_{t-1} + (1 - \rho)(\nabla Q_{i}(w))^2\]
\[m_{t} = \beta m_{t-1} + \frac{\eta} {\sqrt{s_{t} + \epsilon}} \nabla Q_{i}(w)\]
\[w = w - m_{t}\]

where \(w\) represents var, which will be updated; \(s_{t}\) represents mean_square and \(s_{t-1}\) is its value at the previous step; \(m_{t}\) represents moment and \(m_{t-1}\) is its value at the previous step; \(\rho\) represents decay; \(\beta\) is the momentum term, representing momentum; \(\epsilon\) is a smoothing term to avoid division by zero, representing epsilon; \(\eta\) represents learning_rate; \(\nabla Q_{i}(w)\) represents grad.

Parameters

use_locking (bool) – Enable a lock to protect the update of variable tensors. Default: False.

Inputs:
  • var (Tensor) - Weights to be updated.

  • mean_square (Tensor) - Mean square gradients, must have the same type as var.

  • moment (Tensor) - Delta of var, must have the same type as var.

  • grad (Tensor) - Gradients, must have the same type as var.

  • learning_rate (Union[Number, Tensor]) - Learning rate.

  • decay (float) - Decay rate.

  • momentum (float) - Momentum.

  • epsilon (float) - Ridge term.

Outputs:

Tensor, parameters to be updated.

Examples

>>> apply_rms = P.ApplyRMSProp()
>>> input_x = Tensor(np.random.randint(0, 256, (3, 3)), mindspore.float32)
>>> mean_square = Tensor(np.random.randint(0, 256, (3, 3)), mindspore.float32)
>>> moment = Tensor(np.random.randn(3, 3), mindspore.float32)
>>> grad = Tensor(np.random.randint(-32, 16, (3, 3)), mindspore.float32)
>>> learning_rate = 0.9
>>> decay = 0.0
>>> momentum = 1e-10
>>> epsilon = 0.001
>>> result = apply_rms(input_x, mean_square, moment, grad, learning_rate, decay, momentum, epsilon)
class mindspore.ops.operations.ArgMaxWithValue(*args, **kwargs)[source]

Calculates maximum value with corresponding index.

Calculates maximum value along with given axis for the input tensor. Returns the maximum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index cannot be used.

Parameters
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to keep the reduced dimension. If true, the output keeps the same number of dimensions as the input; if false, the reduced dimension is removed. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\).

Outputs:

Tuple of 2 Tensors, the corresponding index and maximum value of the input tensor. If keep_dims is true, the shape of the output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Examples

>>> input_x = Tensor(np.random.rand(5))
>>> index, output = P.ArgMaxWithValue()(input_x)
class mindspore.ops.operations.ArgMinWithValue(*args, **kwargs)[source]

Calculates the minimum value with the corresponding index.

Calculates minimum value along with given axis for the input tensor. Returns the minimum values and indices.

Note

In auto_parallel and semi_auto_parallel mode, the first output index can not be used.

Parameters
  • axis (int) – The dimension to reduce. Default: 0.

  • keep_dims (bool) – Whether to keep the reduced dimension. If true, the output keeps the same number of dimensions as the input; if false, the reduced dimension is removed. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as \((x_1, x_2, ..., x_N)\).

Outputs:

Tuple of 2 Tensors, the corresponding index and minimum value of the input tensor. If keep_dims is true, the shape of the output tensors is \((x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)\). Otherwise, the shape is \((x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Examples

>>> input_x = Tensor(np.random.rand(5))
>>> index, output = P.ArgMinWithValue()(input_x)
class mindspore.ops.operations.Argmax(*args, **kwargs)[source]

Returns the indices of the max value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the output tensor shape is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters
  • axis (int) – Axis on which Argmax operation applies. Default: -1.

  • output_type (mindspore.dtype) – The desired data type of the output, mindspore.dtype.int32. Default: mindspore.dtype.int32.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, indices of the max value of input tensor across the axis.

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
>>> index = P.Argmax(output_type=mindspore.int32)(input_x)
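
Here index is 1, since 3.1 is the largest element of input_x.
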
class mindspore.ops.operations.Argmin(*args, **kwargs)[source]

Returns the indices of the min value of a tensor across the axis.

If the shape of input tensor is \((x_1, ..., x_N)\), the output tensor shape is \((x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)\).

Parameters
  • axis (int) – Axis on which Argmin operation applies. Default: -1.

  • output_type (mindspore.dtype) – An optional data type from: mindspore.dtype.int32, mindspore.dtype.int64. Default: mindspore.dtype.int64.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, indices of the min value of input tensor across the axis.

Examples

>>> input_x = Tensor(np.array([2.0, 3.1, 1.2]))
>>> index = P.Argmin()(input_x)
>>> assert index == Tensor(2, mindspore.int64)
class mindspore.ops.operations.Assign(*args, **kwargs)[source]

Assigns a value to the Parameter.

Inputs:
  • variable (Parameter) - The Parameter.

  • value (Tensor) - The value to assign.

Outputs:

Tensor, has the same type as original variable.

Examples

>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.y = mindspore.Parameter(Tensor([1.0], mindspore.float32), name="y")
>>>
>>>     def construct(self, x):
>>>         P.Assign()(self.y, x)
>>>         return x
>>> x = Tensor([2.0], mindspore.float32)
>>> net = Net()
>>> net(x)
class mindspore.ops.operations.AssignAdd(*args, **kwargs)[source]

Updates a Parameter by adding a value to it.

Inputs:
  • variable (Parameter) - The Parameter.

  • value (Union[numbers.Number, Tensor]) - The value to be added to the variable. It should have the same shape as variable if it is a Tensor.

Examples

>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.AssignAdd = P.AssignAdd()
>>>         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int64), name="global_step")
>>>
>>>     def construct(self, x):
>>>         self.AssignAdd(self.variable, x)
>>>         return self.variable
>>>
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int64)*100)
>>> net(value)
class mindspore.ops.operations.AssignSub(*args, **kwargs)[source]

Updates a Parameter by subtracting a value from it.

Inputs:
  • variable (Parameter) - The Parameter.

  • value (Union[numbers.Number, Tensor]) - The value to be subtracted from the variable. It should have the same shape as variable if it is a Tensor.

Examples

>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.AssignSub = P.AssignSub()
>>>         self.variable = mindspore.Parameter(initializer(1, [1], mindspore.int64), name="global_step")
>>>
>>>     def construct(self, x):
>>>         self.AssignSub(self.variable, x)
>>>         return self.variable
>>>
>>> net = Net()
>>> value = Tensor(np.ones([1]).astype(np.int64)*100)
>>> net(value)
class mindspore.ops.operations.Atan2(*args, **kwargs)[source]

Returns arctangent of input_x/input_y element-wise.

It returns \(\theta\ \in\ [-\pi, \pi]\) such that \(x = r*\sin(\theta), y = r*\cos(\theta)\), where \(r = \sqrt{x^2 + y^2}\).

Inputs:
  • input_x (Tensor) - The input tensor.

  • input_y (Tensor) - The input tensor.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as input_x.

Examples

>>> input_x = Tensor(np.array([[0, 1]]), mindspore.float32)
>>> input_y = Tensor(np.array([[1, 1]]), mindspore.float32)
>>> atan2 = P.Atan2()
>>> atan2(input_x, input_y)
[[0. 0.7853982]]
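
Following the definition above, the first element is \(\arctan(0/1) = 0\) and the second is \(\arctan(1/1) = \pi/4 \approx 0.7853982\).
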
class mindspore.ops.operations.AvgPool(*args, **kwargs)[source]

Average pooling operation.

Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), and AvgPool outputs regional averages in the \((H_{in}, W_{in})\)-dimensions. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \frac{1}{h_{ker} * w_{ker}} \sum_{m=0}^{h_{ker}-1} \sum_{n=0}^{w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • ksize (Union[int, tuple[int]]) – The size of the kernel used to take the average value: an int that sets both the height and width, or a tuple of two ints giving the height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance the kernel moves: an int that sets both the height and width of movement, or a tuple of two ints giving the height and width of movement respectively. Default: 1.

  • padding (str) –

    The optional values for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. The output height and width will be the same as the input. The total amount of padding is calculated for the horizontal and vertical directions and distributed evenly to top and bottom, left and right where possible. Otherwise, the extra padding goes to the bottom and the right side.

    • valid: Adopts the way of discarding. The largest possible height and width of the output will be returned without padding. Extra pixels are discarded.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.avgpool_op = P.AvgPool(padding="VALID", ksize=2, strides=1)
>>>
>>>     def construct(self, x):
>>>         result = self.avgpool_op(x)
>>>         return result
>>>
>>> input_x = Tensor(np.arange(1 * 3 * 3 * 4).reshape(1, 3, 3, 4), mindspore.float32)
>>> net = Net()
>>> result = net(input_x)
[[[[ 2.5  3.5  4.5]
   [ 6.5  7.5  8.5]]
  [[14.5 15.5 16.5]
   [18.5 19.5 20.5]]
  [[26.5 27.5 28.5]
   [30.5 31.5 32.5]]]]
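
For the top-left window of the first channel, the formula above gives \((0 + 1 + 4 + 5)/4 = 2.5\), matching the first output value.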

class mindspore.ops.operations.BNTrainingReduce(*args, **kwargs)[source]

Reduces the sum along axes [0, 2, 3].

Inputs:
  • x (Tensor) - Tensor of shape \((N, C)\).

Outputs:
  • x_sum (Tensor) - Tensor has the same shape as x.

  • x_square_sum (Tensor) - Tensor has the same shape as x.
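
Examples

A minimal sketch of invoking the primitive, assuming a 4-D NCHW input so that the reduction over axes [0, 2, 3] produces per-channel statistics (the shapes here are illustrative):

>>> bn_training_reduce = P.BNTrainingReduce()
>>> x = Tensor(np.ones([128, 3, 32, 32]), mindspore.float32)
>>> x_sum, x_square_sum = bn_training_reduce(x)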

class mindspore.ops.operations.BatchMatMul(*args, **kwargs)[source]

Computes matrix multiplication between two tensors by batch.

result[…, :, :] = tensor(a[…, :, :]) * tensor(b[…, :, :]).

The two input tensors must have the same rank, and the rank must be at least 3.

Parameters
  • transpose_a (bool) – If True, a is transposed on the last two dimensions before multiplication. Default: False.

  • transpose_b (bool) – If True, b is transposed on the last two dimensions before multiplication. Default: False.

Inputs:
  • input_x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((*B, N, C)\), where \(*B\) represents the batch size which can be multidimensional, \(N\) and \(C\) are the size of the last two dimensions. If transpose_a is True, its shape should be \((*B, C, N)\).

  • input_y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((*B, C, M)\). If transpose_b is True, its shape should be \((*B, M, C)\).

Outputs:

Tensor, the shape of the output tensor is \((*B, N, M)\).

Examples

>>> input_x = Tensor(np.ones(shape=[2, 4, 1, 3]), mindspore.float32)
>>> input_y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = P.BatchMatMul()
>>> output = batmatmul(input_x, input_y)
>>>
>>> input_x = Tensor(np.ones(shape=[2, 4, 3, 1]), mindspore.float32)
>>> input_y = Tensor(np.ones(shape=[2, 4, 3, 4]), mindspore.float32)
>>> batmatmul = P.BatchMatMul(transpose_a=True)
>>> output = batmatmul(input_x, input_y)
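
In both cases the output shape is \((2, 4, 1, 4)\): the first call multiplies \((2, 4, 1, 3)\) by \((2, 4, 3, 4)\) directly, while the second first transposes the last two dimensions of input_x from \((2, 4, 3, 1)\) to \((2, 4, 1, 3)\).
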
class mindspore.ops.operations.BatchNorm(*args, **kwargs)[source]

Batch Normalization for input data and updated parameters.

Batch Normalization is widely used in convolutional neural networks. This operation applies Batch Normalization over input to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the features using a mini-batch of data and the learned parameters which can be described in the following formula,

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • is_training (bool) – If is_training is True, mean and variance are computed during training. If is_training is False, they’re loaded from checkpoint during inference. Default: False.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, C)\).

  • scale (Tensor) - Tensor of shape \((C,)\).

  • bias (Tensor) - Tensor of shape \((C,)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

Outputs:

Tuple of Tensors, the normalized inputs and the updated parameters.

  • output_x (Tensor) - The same type and shape as the input_x. The shape is \((N, C)\).

  • updated_scale (Tensor) - Tensor of shape \((C,)\).

  • updated_bias (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_1 (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_2 (Tensor) - Tensor of shape \((C,)\).

  • reserve_space_3 (Tensor) - Tensor of shape \((C,)\).

Examples

>>> input_x = Tensor(np.ones([128, 64, 32, 64]), mindspore.float32)
>>> scale = Tensor(np.ones([64]), mindspore.float32)
>>> bias = Tensor(np.ones([64]), mindspore.float32)
>>> mean = Tensor(np.ones([64]), mindspore.float32)
>>> variance = Tensor(np.ones([64]), mindspore.float32)
>>> batch_norm = P.BatchNorm()
>>> output = batch_norm(input_x, scale, bias, mean, variance)
class mindspore.ops.operations.BatchNormFold(*args, **kwargs)[source]

Batch normalization folded.

Parameters
  • momentum (float) – Momentum value; must be in the range [0, 1]. Default: 0.1.

  • epsilon (float) – A small float number added to avoid division by zero; 1e-5 if the dtype is float32, else 1e-3. Default: 1e-5.

  • is_training (bool) – In training mode set True, else set False. Default: True.

  • freeze_bn (int) – Delay in steps at which computation switches from regular batch norm to frozen mean and std. Default: 0.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

  • global_step (Tensor) - Tensor to record current global step.

Outputs:

Tuple of 4 Tensors, the normalized input and the updated parameters.

  • batch_mean (Tensor) - Tensor of shape \((C,)\).

  • batch_std (Tensor) - Tensor of shape \((C,)\).

  • running_mean (Tensor) - Tensor of shape \((C,)\).

  • running_std (Tensor) - Tensor of shape \((C,)\).

Examples

>>> batch_norm_fold = P.BatchNormFold()
>>> input_x = Tensor(np.array([1, 2, -1, -2, -2, 1]).reshape(2, 3), mindspore.float32)
>>> mean = Tensor(np.array([0.5, -1, 1,]), mindspore.float32)
>>> variance = Tensor(np.array([0.36, 0.4, 0.49]), mindspore.float32)
>>> global_step = Tensor(np.arange(6), mindspore.int32)
>>> batch_mean, batch_std, running_mean, running_std = batch_norm_fold(input_x, mean, variance, global_step)
class mindspore.ops.operations.BatchNormFold2(*args, **kwargs)[source]

Scale the bias with a correction factor to the long term statistics prior to quantization. This ensures that there is no jitter in the quantized bias due to batch to batch variation.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C)\).

  • beta (Tensor) - Tensor of shape \((C,)\).

  • gamma (Tensor) - Tensor of shape \((C,)\).

  • batch_std (Tensor) - Tensor of shape \((C,)\).

  • batch_mean (Tensor) - Tensor of shape \((C,)\).

  • running_std (Tensor) - Tensor of shape \((C,)\).

  • running_mean (Tensor) - Tensor of shape \((C,)\).

  • global_step (Tensor) - Tensor to record current global step.

Outputs:
  • y (Tensor) - Tensor has the same shape as x.

Examples

>>> batch_norm_fold2 = P.BatchNormFold2()
>>> input_x = Tensor(np.random.randint(-6, 6, (4, 3)), mindspore.float32)
>>> beta = Tensor(np.array([0.2, -0.1, 0.25]), mindspore.float32)
>>> gamma = Tensor(np.array([-0.1, -0.25, 0.1]), mindspore.float32)
>>> batch_std = Tensor(np.array([0.1, 0.2, 0.1]), mindspore.float32)
>>> batch_mean = Tensor(np.array([0, 0.05, 0.2]), mindspore.float32)
>>> running_std = Tensor(np.array([0.1, 0.1, 0.3]), mindspore.float32)
>>> running_mean = Tensor(np.array([-0.1, 0, -0.1]), mindspore.float32)
>>> global_step = Tensor(np.random.randint(1, 8, (8, )), mindspore.int32)
>>> result = batch_norm_fold2(input_x, beta, gamma, batch_std, batch_mean,
>>>                           running_std, running_mean, global_step)
class mindspore.ops.operations.BatchNormFold2Grad(*args, **kwargs)[source]

Performs grad of CorrectionAddGrad operation.

Examples

>>> bnf2_grad = P.BatchNormFold2Grad()
>>> input_x = Tensor(np.arange(3*3*12*12).reshape(6, 3, 6, 12), mindspore.float32)
>>> dout = Tensor(np.random.randint(-32, 32, (6, 3, 6, 12)), mindspore.float32)
>>> gamma = Tensor(np.random.randint(-4, 4, (3, 1, 1, 2)), mindspore.float32)
>>> batch_std = Tensor(np.random.randint(0, 8, (3, 1, 1, 2)), mindspore.float32)
>>> batch_mean = Tensor(np.random.randint(-6, 6, (3, 1, 1, 2)), mindspore.float32)
>>> running_std = Tensor(np.linspace(0, 2, 6).reshape(3, 1, 1, 2), mindspore.float32)
>>> running_mean = Tensor(np.random.randint(-3, 3, (3, 1, 1, 2)), mindspore.float32)
>>> global_step = Tensor(np.array([-2]), mindspore.int32)
>>> result = bnf2_grad(dout, input_x, gamma, batch_std, batch_mean, running_std, running_mean, global_step)
class mindspore.ops.operations.BatchNormFold2_D(*args, **kwargs)[source]

Scale the bias with a correction factor to the long term statistics prior to quantization. This ensures that there is no jitter in the quantized bias due to batch to batch variation.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C)\).

  • beta (Tensor) - Tensor of shape \((C,)\).

  • gamma (Tensor) - Tensor of shape \((C,)\).

  • batch_std (Tensor) - Tensor of shape \((C,)\).

  • batch_mean (Tensor) - Tensor of shape \((C,)\).

  • running_std (Tensor) - Tensor of shape \((C,)\).

  • running_mean (Tensor) - Tensor of shape \((C,)\).

  • global_step (Tensor) - Tensor to record current global step.

Outputs:
  • y (Tensor) - Tensor has the same shape as x.

class mindspore.ops.operations.BatchNormFoldD(*args, **kwargs)[source]

Performs grad of _BatchNormFold operation.

class mindspore.ops.operations.BatchNormFoldGrad(*args, **kwargs)[source]

Performs grad of BatchNormFold operation.

Examples

>>> batch_norm_fold_grad = P.BatchNormFoldGrad()
>>> d_batch_mean = Tensor(np.random.randint(-2., 2., (1, 2, 2, 3)), mindspore.float32)
>>> d_batch_std = Tensor(np.random.randn(1, 2, 2, 3), mindspore.float32)
>>> input_x = Tensor(np.random.randint(0, 256, (4, 1, 4, 6)), mindspore.float32)
>>> batch_mean = Tensor(np.random.randint(-8., 8., (1, 2, 2, 3)), mindspore.float32)
>>> batch_std = Tensor(np.random.randint(0, 12, (1, 2, 2, 3)), mindspore.float32)
>>> global_step = Tensor([2], mindspore.int32)
>>> result = batch_norm_fold_grad(d_batch_mean, d_batch_std, input_x, batch_mean, batch_std, global_step)
class mindspore.ops.operations.BatchToSpace(*args, **kwargs)[source]

Divides the batch dimension into blocks and interleaves these blocks back into spatial dimensions.

This operation divides the batch dimension N into blocks with block_size; the output tensor's N dimension is the corresponding number of blocks after division. The output tensor's H and W dimensions are the products of the original H and W dimensions with block_size, minus the given crop amounts, respectively.

Parameters
  • block_size (int) – The block size of dividing block with value >= 1.

  • crops (list) – The crop value for H and W dimension, containing 2 sub list, each containing 2 int value. All values must be >= 0. crops[i] specifies the crop values for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]*block_size >= crops[i][0]+crops[i][1].

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, the output tensor with the same type as input. Assume input shape is (n, c, h, w) with block_size and crops. The output shape will be (n’, c’, h’, w’), where

\(n' = n//(block\_size*block\_size)\)

\(c' = c\)

\(h' = h*block\_size-crops[0][0]-crops[0][1]\)

\(w' = w*block\_size-crops[1][0]-crops[1][1]\)

Examples

>>> block_size = 2
>>> crops = [[0, 0], [0, 0]]
>>> op = P.BatchToSpace(block_size, crops)
>>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
>>> output = op(input_x)
[[[[1., 2.], [3., 4.]]]]
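
Applying the formulas above: \(n' = 4/(2 \times 2) = 1\), \(c' = 1\), \(h' = 1 \times 2 - 0 - 0 = 2\) and \(w' = 2\), so the output shape is \((1, 1, 2, 2)\).
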
class mindspore.ops.operations.BiasAdd(*args, **kwargs)[source]

Returns the sum of the input tensor and the bias tensor.

Adds the 1-D bias tensor to the input tensor, broadcasting the shape on all axes except for the channel axis.

Inputs:
  • input_x (Tensor) - Input value, with shape \((N, C)\) or \((N, C, H, W)\).

  • bias (Tensor) - Bias value, with shape \((C)\).

Outputs:

Tensor, with the same shape and type as input_x.

Examples

>>> input_x = Tensor(np.arange(6).reshape((2, 3)), mindspore.float32)
>>> bias = Tensor(np.random.random(3).reshape((3,)), mindspore.float32)
>>> bias_add = P.BiasAdd()
>>> bias_add(input_x, bias)
class mindspore.ops.operations.BinaryCrossEntropy(*args, **kwargs)[source]

Computes the Binary Cross Entropy between the target and the output.

Note

Sets input as \(x\), input label as \(y\), output as \(\ell(x, y)\). Let,

\[L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\]

Then,

\[\begin{split}\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';}\\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
Parameters

reduction (str) – Specifies the reduction to apply to the output. Its value should be one of ‘none’, ‘mean’, ‘sum’. Default: ‘mean’.

Inputs:
  • input_x (Tensor) - The input Tensor.

  • input_y (Tensor) - The label Tensor, which has the same shape as input_x.

  • weight (Tensor, optional) - A rescaling weight applied to the loss of each batch element. It should have the same shape as input_x. Default: None.

Outputs:

Tensor or Scalar. If reduction is ‘none’, the output is a tensor with the same shape as input_x. Otherwise it is a scalar.

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.binary_cross_entropy = P.BinaryCrossEntropy()
>>>     def construct(self, x, y, weight):
>>>         result = self.binary_cross_entropy(x, y, weight)
>>>         return result
>>>
>>> net = Net()
>>> input_x = Tensor(np.array([0.2, 0.7, 0.1]), mindspore.float32)
>>> input_y = Tensor(np.array([0., 1., 0.]), mindspore.float32)
>>> weight = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> result = net(input_x, input_y, weight)
0.38240486
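
This value can be checked against the formula above: the three weighted terms are \(-\log(0.8) \approx 0.2231\), \(-2\log(0.7) \approx 0.7133\) and \(-2\log(0.9) \approx 0.2107\), whose mean is \(\approx 0.3824\).
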
class mindspore.ops.operations.BoundingBoxDecode(*args, **kwargs)[source]

Decodes bounding box locations.

Parameters
  • means (tuple) – The means of deltas calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – The standard deviations of deltas calculation. Default: (1.0, 1.0, 1.0, 1.0).

  • max_shape (tuple) – The max size limit for decoding box calculation.

  • wh_ratio_clip (float) – The limit of width and height ratio for decoding box calculation. Default: 0.016.

Inputs:
  • anchor_box (Tensor) - Anchor boxes.

  • deltas (Tensor) - Delta of boxes.

Outputs:

Tensor, decoded boxes.

Examples

>>> anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32)
>>> deltas = Tensor([[3,1,2,2],[1,2,1,4]],mindspore.float32)
>>> boundingbox_decode = P.BoundingBoxDecode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0),
>>>                                          max_shape=(768, 1280), wh_ratio_clip=0.016)
>>> boundingbox_decode(anchor_box, deltas)
[[4.1953125  0.  0.  5.1953125]
 [2.140625  0.  3.859375  60.59375]]
class mindspore.ops.operations.BoundingBoxEncode(*args, **kwargs)[source]

Encodes bounding box locations.

Parameters
  • means (tuple) – Means for encoding bounding boxes calculation. Default: (0.0, 0.0, 0.0, 0.0).

  • stds (tuple) – Stds for encoding bounding boxes calculation. Default: (1.0, 1.0, 1.0, 1.0).

Inputs:
  • anchor_box (Tensor) - Anchor boxes.

  • groundtruth_box (Tensor) - Ground truth boxes.

Outputs:

Tensor, encoded bounding boxes.

Examples

>>> anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32)
>>> groundtruth_box = Tensor([[3,1,2,2],[1,2,1,4]],mindspore.float32)
>>> boundingbox_encode = P.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
>>> boundingbox_encode(anchor_box, groundtruth_box)
[[5.0000000e-01  5.0000000e-01  -6.5504000e+04  6.9335938e-01]
 [-1.0000000e+00  2.5000000e-01  0.0000000e+00  4.0551758e-01]]
class mindspore.ops.operations.Broadcast(*args, **kwargs)[source]

Broadcasts the tensor to the whole group.

Note

Tensor must have the same shape and format in all processes participating in the collective.

Parameters
  • root_rank (int) – Source rank. Required in all processes except the one that is sending the data.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as the input, i.e., \((x_1, x_2, ..., x_R)\). The contents depend on the data of the root_rank device.

Raises

TypeError – If root_rank is not an integer or group is not a string.

Examples

>>> from mindspore.communication import init
>>> import mindspore.ops.operations as P
>>> init('nccl')
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.broadcast = P.Broadcast(1)
>>>
>>>     def construct(self, x):
>>>         return self.broadcast((x,))
>>>
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
class mindspore.ops.operations.CTCLoss(*args, **kwargs)[source]

Calculates the CTC(Connectionist Temporal Classification) loss. Also calculates the gradient.

Parameters
  • preprocess_collapse_repeated (bool) – If True, repeated labels are collapsed prior to the CTC calculation. Default: False.

  • ctc_merge_repeated (bool) – If False, during CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified version of CTC. Default: True.

  • ignore_longer_outputs_than_inputs (bool) – If True, sequences with longer outputs than inputs will be ignored. Default: False.

Inputs:
  • inputs (Tensor) - The input Tensor should be a 3-D tensor whose shape is \((max\_time, batch\_size, num\_class)\). num_class should be num_labels + 1 classes, where num_labels indicates the number of actual labels. Blank labels are reserved.

  • labels_indices (Tensor) - The indices of labels. labels_indices[i, :] == [b, t] means labels_values[i] stores the id for (batch b, time t). The type must be int64 and rank must be 2.

  • labels_values (Tensor) - A 1-D input tensor. The values associated with the given batch and time. The type must be int32. labels_values[i] must be in the range [0, num_class).

  • sequence_length (Tensor) - A tensor containing sequence lengths with the shape of \((batch\_size)\). The type must be int32. Each value in the tensor should not be greater than max_time.

Outputs:
  • loss (Tensor) - A tensor containing log-probabilities, the shape is \((batch\_size)\). Has the same type as inputs.

  • gradient (Tensor) - The gradient of loss. Has the same type and shape as inputs.

Examples

>>> inputs = Tensor(np.random.random((2, 2, 3)), mindspore.float32)
>>> labels_indices = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int64)
>>> labels_values = Tensor(np.array([2, 2]), mindspore.int32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)
>>> ctc_loss = P.CTCLoss()
>>> output = ctc_loss(inputs, labels_indices, labels_values, sequence_length)
class mindspore.ops.operations.Cast(*args, **kwargs)[source]

Returns a tensor with the new specified data type.

Inputs:
  • input_x (Union[Tensor, Number]) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The tensor to be cast.

  • type (dtype.Number) - The valid data type of the output tensor. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_R)\), same as input_x.

Examples

>>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input_x = Tensor(input_np)
>>> type_dst = mindspore.float16
>>> cast = P.Cast()
>>> result = cast(input_x, type_dst)
class mindspore.ops.operations.CheckBprop(*args, **kwargs)[source]

Checks whether data type and shape of corresponding element from tuple x and y are the same.

Raises

TypeError – If not the same.

Inputs:
  • input_x (tuple[Tensor]) - The input_x contains the outputs of bprop to be checked.

  • input_y (tuple[Tensor]) - The input_y contains the inputs of bprop to check against.

Outputs:

(tuple[Tensor]), the input_x, if data type and shape of corresponding elements from input_x and input_y are the same.

Examples

>>> input_x = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
>>> input_y = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
>>> out = P.CheckBprop()(input_x, input_y)
class mindspore.ops.operations.CheckValid(*args, **kwargs)[source]

Checks bounding boxes.

Checks whether the bounding boxes are within the borders of the image described by img_metas.

Inputs:
  • bboxes (Tensor) - Bounding boxes tensor with shape (N, 4).

  • img_metas (Tensor) - Raw image size information, format (height, width, ratio).

Outputs:

Tensor, a boolean tensor indicating whether each bounding box is valid.

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.check_valid = P.CheckValid()
>>>     def construct(self, x, y):
>>>         valid_result = self.check_valid(x, y)
>>>         return valid_result
>>>
>>> bboxes = Tensor(np.linspace(0, 6, 12).reshape(3, 4), mindspore.float32)
>>> img_metas = Tensor(np.array([2, 1, 3]), mindspore.float32)
>>> net = Net()
>>> result = net(bboxes, img_metas)
[True   False   False]
class mindspore.ops.operations.Concat(*args, **kwargs)[source]

Concatenates tensors along a specified axis.

Concatenates the input tensors along the given axis.

Note

The input data is a tuple of tensors. These tensors have the same rank R. Set the number of input tensors as N and the given axis as m, with \(0 \le m < R\). The \(i\)-th tensor \(t_i\) has the shape \((x_1, x_2, ..., x_{mi}, ..., x_R)\), where \(x_{mi}\) is the \(m\)-th dimension of the \(i\)-th tensor. Then, the output tensor shape is

\[(x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\]
Parameters

axis (int) – The specified axis. Default: 0.

Inputs:
  • input_x (tuple, list) - Tuple or list of input tensors.

Outputs:

Tensor, the shape is \((x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)\).

Examples

>>> data1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> data2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> op = P.Concat()
>>> output = op((data1, data2))
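
With the default axis 0, the two \((2, 2)\) inputs above are concatenated into a \((4, 2)\) tensor: [[0, 1], [2, 1], [0, 1], [2, 1]].
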
class mindspore.ops.operations.ConfusionMulGrad(*args, **kwargs)[source]

output0 is the element-wise product of input0 and input1.

output1 is the element-wise product of input0 and input1, reduced by summation along the given axis.

Parameters
  • axis (Union[int, tuple[int], list[int]]) – The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

  • keep_dims (bool) –

    • If true, keep these reduced dimensions and the length is 1.

    • If false, don’t keep these dimensions. Default: False.

Inputs:
  • input_0 (Tensor) - The input Tensor.

  • input_1 (Tensor) - The input Tensor.

  • input_2 (Tensor) - The input Tensor.

Outputs:
  • output_0 (Tensor) - The same shape with input0.

  • output_1 (Tensor)

    • If axis is (), and keep_dims is false, the output is a 0-D array representing the sum of all elements in the input array.

    • If axis is int, set as 2, and keep_dims is false, the shape of output is \((x_1,x_3,...,x_R)\).

    • If axis is tuple(int), set as (2,3), and keep_dims is false, the shape of output is \((x_1,x_4,...x_R)\).

Examples

>>> confusion_mul_grad = P.ConfusionMulGrad()
>>> input_0 = Tensor(np.random.randint(-2, 2, (2, 3)), mindspore.float32)
>>> input_1 = Tensor(np.random.randint(0, 4, (2, 3)), mindspore.float32)
>>> input_2 = Tensor(np.random.randint(-4, 0, (2, 3)), mindspore.float32)
>>> output_0, output_1 = confusion_mul_grad(input_0, input_1, input_2)
output_0:
    [[ 3.   1.   0.]
     [-6.   2.  -2.]]
output_1:
    -3.0
class mindspore.ops.operations.ControlDepend(*args, **kwargs)[source]

Adds control dependency relation between source and destination operation.

In many cases, we need to control the execution order of operations. ControlDepend is designed for this. ControlDepend instructs the execution engine to run operations in a specific order. ControlDepend tells the engine that the destination operations should depend on the source operation, which means the source operations should be executed before the destination ones.

Parameters

depend_mode (int) – Use 0 for normal depend, 1 for depend on operations that used the parameter. Default: 0.

Inputs:
  • src (Any) - The source input. It can be a tuple of operation outputs or a single operation output. We are not concerned with the input data itself, but with the operation that generates it. If depend_mode = 1 is specified and the source input is a parameter, we will try to find the operations that use the parameter as input.

  • dst (Any) - The destination input. It can be a tuple of operation outputs or a single operation output. We are not concerned with the input data itself, but with the operation that generates it. If depend_mode = 1 is specified and the source input is a parameter, we will try to find the operations that use the parameter as input.

Outputs:

Bool. This operation has no actual data output; it is used to set up the execution order of related operations.

Examples

>>> # In the following example, the data calculation uses original global_step. After the calculation the global
>>> # step should be increased, so the add operation should depend on the data calculation operation.
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.control_depend = P.ControlDepend()
>>>         self.softmax = P.Softmax()
>>>
>>>     def construct(self, x, y):
>>>         mul = x * y
>>>         softmax = self.softmax(x)
>>>         ret = self.control_depend(mul, softmax)
>>>         return ret
>>> x = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> y = Tensor(np.ones([4, 5]), dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
class mindspore.ops.operations.Conv2D(*args, **kwargs)[source]

2D convolution layer.

Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size and \(C_{in}\) is channel number. For each batch of shape \((C_{in}, H_{in}, W_{in})\), the formula is defined as:

\[out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,\]

where \(ccor\) is cross correlation operator, \(C_{in}\) is the input channel number, \(j\) ranges from \(0\) to \(C_{out} - 1\), \(W_{ij}\) corresponds to \(i\)-th channel of the \(j\)-th filter and \(out_{j}\) corresponds to the \(j\)-th channel of the output. \(W_{ij}\) is a slice of kernel and it has shape \((\text{ks_h}, \text{ks_w})\), where \(\text{ks_h}\) and \(\text{ks_w}\) are height and width of the convolution kernel. The full kernel has shape \((C_{out}, C_{in} // \text{group}, \text{ks_h}, \text{ks_w})\), where group is the group number to split the input in the channel dimension.

If the ‘pad_mode’ is set to be “valid”, the output height and width will be \(\left \lfloor{1 + \frac{H_{in} + 2 \times \text{padding} - \text{ks_h} - (\text{ks_h} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) and \(\left \lfloor{1 + \frac{W_{in} + 2 \times \text{padding} - \text{ks_w} - (\text{ks_w} - 1) \times (\text{dilation} - 1) }{\text{stride}}} \right \rfloor\) respectively.

The first introduction can be found in the paper Gradient Based Learning Applied to Document Recognition. A more detailed introduction can be found here: http://cs231n.github.io/convolutional-networks/.

Parameters
  • out_channel (int) – The dimension of the output.

  • kernel_size (Union[int, tuple[int]]) – The kernel size of the 2D convolution.

  • mode (int) – 0: math convolution, 1: cross-correlation convolution, 2: deconvolution, 3: depthwise convolution. Default: 1.

  • pad_mode (str) – The mode to fill padding: “valid”, “same” or “pad”. Default: “valid”.

  • pad (int) – The pad value to fill. Default: 0.

  • stride (Union[int, tuple[int]]) – The stride to apply conv filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specify the space to use between kernel elements. Default: 1.

  • group (int) – Split input into groups. Default: 1.

Returns

Tensor, the result of the 2D convolution.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - Set size of kernel is \((K_1, K_2)\), then the shape is \((C_{out}, C_{in}, K_1, K_2)\).

Outputs:

Tensor of shape \((N, C_{out}, H_{out}, W_{out})\).

Examples

>>> input = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> conv2d = P.Conv2D(out_channel=32, kernel_size=3)
>>> conv2d(input, weight)
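
With the “valid” output-size formula above, \(H_{out} = \left \lfloor{1 + \frac{32 + 0 - 3 - 0}{1}} \right \rfloor = 30\) and likewise \(W_{out} = 30\), so the output shape is \((10, 32, 30, 30)\).
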
class mindspore.ops.operations.Conv2DBackpropInput(*args, **kwargs)[source]

Computes the gradients of convolution with respect to the input.

Parameters
  • out_channel (int) – The dimensionality of the output space.

  • kernel_size (Union[int, tuple[int]]) – The size of the convolution window.

  • pad_mode (str) – The mode to fill padding: “valid”, “same” or “pad”. Default: “valid”.

  • pad (int) – The pad value to fill. Default: 0.

  • mode (int) – 0: math convolution, 1: cross-correlation convolution, 2: deconvolution, 3: depthwise convolution. Default: 1.

  • stride (Union[int, tuple[int]]) – The stride to apply conv filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to use for dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

Returns

Tensor, the gradients of convolution.

Examples

>>> dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> x = Tensor(np.ones([10, 32, 32, 32]))
>>> conv2d_backprop_input = P.Conv2DBackpropInput(out_channel=32, kernel_size=3)
>>> conv2d_backprop_input(dout, weight, F.shape(x))
class mindspore.ops.operations.CorrectionMul(*args, **kwargs)[source]

Scale the weights with a correction factor to the long term statistics prior to quantization. This ensures that there is no jitter in the quantized weights due to batch to batch variation.

Inputs:
  • x (Tensor) - Tensor of shape \((N, C)\).

  • batch_std (Tensor) - Tensor of shape \((C,)\).

  • running_std (Tensor) - Tensor of shape \((C,)\).

Outputs:
  • out (Tensor) - Tensor has the same shape as x.

Examples

>>> correction_mul = P.CorrectionMul()
>>> input_x = Tensor(np.random.randint(-8, 12, (3, 4)), mindspore.float32)
>>> batch_std = Tensor(np.array([1.5, 3, 2]), mindspore.float32)
>>> running_std = Tensor(np.array([2, 1.2, 0.5]), mindspore.float32)
>>> out = correction_mul(input_x, batch_std, running_std)
class mindspore.ops.operations.CorrectionMulGrad(*args, **kwargs)[source]

Performs grad of CorrectionMul operation.

Examples

>>> correction_mul_grad = P.CorrectionMulGrad()
>>> dout = Tensor(np.array([1.5, -2.2, 0.7, -3, 1.6, 2.8]).reshape(2, 1, 1, 3), mindspore.float32)
>>> input_x = Tensor(np.random.randint(0, 256, (2, 1, 1, 3)), mindspore.float32)
>>> gamma = Tensor(np.array([0.2, -0.2, 2.5, -1.]).reshape(2, 1, 2), mindspore.float32)
>>> running_std = Tensor(np.array([1.2, 0.1, 0.7, 2.3]).reshape(2, 1, 2), mindspore.float32)
>>> result = correction_mul_grad(dout, input_x, gamma, running_std)
class mindspore.ops.operations.Cos(*args, **kwargs)[source]

Computes cosine of input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Examples

>>> cos = P.Cos()
>>> input_x = Tensor(np.array([0.24, 0.83, 0.31, 0.09]), mindspore.float32)
>>> output = cos(input_x)
class mindspore.ops.operations.CumProd(*args, **kwargs)[source]

Compute the cumulative product of the tensor x along axis.

Parameters
  • exclusive (bool) – If True, perform exclusive cumulative product. Default: False.

  • reverse (bool) – If True, reverse the result along axis. Default: False.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (int) - The dimensions to compute the cumulative product.

Outputs:

Tensor, has the same shape and dtype as input_x.

Examples

>>> input_x = Tensor(np.array([a, b, c]).astype(np.float32))
>>> op0 = P.CumProd()
>>> output = op0(input_x, 0) # output=[a, a * b, a * b * c]
>>> op1 = P.CumProd(exclusive=True)
>>> output = op1(input_x, 0) # output=[1, a, a * b]
>>> op2 = P.CumProd(reverse=True)
>>> output = op2(input_x, 0) # output=[a * b * c, b * c, c]
>>> op3 = P.CumProd(exclusive=True, reverse=True)
>>> output = op3(input_x, 0) # output=[b * c, c, 1]
class mindspore.ops.operations.CumSum(*args, **kwargs)[source]

Computes the cumulative sum of input tensor along axis.

Parameters
  • exclusive (bool) – If True, perform exclusive mode. Default: False.

  • reverse (bool) – If True, perform inverse cumulative sum. Default: False.

Inputs:
  • input (Tensor) - The input tensor to accumulate.

  • axis (int) - The axis to accumulate the tensor’s value.

Outputs:

Tensor, the shape of the output tensor is consistent with the input tensor’s.

Examples

>>> input = Tensor(np.array([[3, 4, 6, 10],[1, 6, 7, 9],[4, 3, 8, 7],[1, 3, 7, 9]]).astype(np.float32))
>>> cumsum = P.CumSum()
>>> output = cumsum(input, 1)
[[ 3.  7. 13. 23.]
 [ 1.  7. 14. 23.]
 [ 4.  7. 15. 22.]
 [ 1.  4. 11. 20.]]
class mindspore.ops.operations.DType(*args, **kwargs)[source]

Returns the data type of input tensor as mindspore.dtype.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

mindspore.dtype, the data type of a tensor.

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> type = P.DType()(input_tensor)
class mindspore.ops.operations.DepthToSpace(*args, **kwargs)[source]

Rearranges blocks of depth data into spatial dimensions.

This is the reverse operation of SpaceToDepth.

The output tensor’s height dimension is \(height * block\_size\).

The output tensor’s width dimension is \(width * block\_size\).

The depth of the output tensor is \(input\_depth / (block\_size * block\_size)\).

The input tensor’s depth must be divisible by block_size * block_size. The data format is “NCHW”.

Parameters

block_size (int) – The block size used to divide depth data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor.

Outputs:

Tensor, the same type as x.

Examples

>>> x = Tensor(np.random.rand(1,12,1,1), mindspore.float32)
>>> block_size = 2
>>> op = P.DepthToSpace(block_size)
>>> output = op(x)
>>> output.asnumpy().shape == (1,3,2,2)
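
For reference, the shape arithmetic here is depth 12 / (2 * 2) = 3, height 1 * 2 = 2, width 1 * 2 = 2. A NumPy sketch of one common NCHW formulation (illustrative only; the operator's exact element ordering may differ):

>>> n, c, h, w, b = 1, 12, 1, 1, 2
>>> x_np = np.random.rand(n, c, h, w).astype(np.float32)
>>> y = x_np.reshape(n, b, b, c // (b * b), h, w)
>>> y = y.transpose(0, 3, 4, 1, 5, 2).reshape(n, c // (b * b), h * b, w * b)
>>> y.shape  # (1, 3, 2, 2)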
class mindspore.ops.operations.DepthwiseConv2dNative(*args, **kwargs)[source]

Returns the depth-wise convolution value for the input.

Applies depthwise conv2d to the input, which generates more channels with channel_multiplier. Given an input tensor of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is the batch size, and a filter tensor with kernel size \((ks_{h}, ks_{w})\) containing \(C_{in} * \text{channel_multiplier}\) convolutional filters of depth 1, it applies a different set of filters to each input channel (channel_multiplier filters per channel, with a default of 1), then concatenates the results together. The output has \(\text{in_channels} * \text{channel_multiplier}\) channels.

Parameters
  • channel_multiplier (int) – The multiplier for the number of output channels per input channel.

  • kernel_size (Union[int, tuple[int]]) – The size of the conv kernel.

  • mode (int) – 0: math convolution, 1: cross-correlation convolution, 2: deconvolution, 3: depthwise convolution. Default: 3.

  • pad_mode (str) – “valid”, “same”, “pad” the mode to fill padding. Default: “valid”.

  • pad (int) – The pad value to fill. Default: 0.

  • stride (Union[int, tuple[int]]) – The stride to apply conv filter. Default: 1.

  • dilation (Union[int, tuple[int]]) – Specifies the dilation rate to use for dilated convolution. Default: 1.

  • group (int) – Splits input into groups. Default: 1.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

  • weight (Tensor) - If the kernel size is \((K_1, K_2)\), the shape is \((K, C_{in}, K_1, K_2)\), where K must be 1.

Outputs:

Tensor of shape \((N, C_{in} * \text{channel_multiplier}, H_{out}, W_{out})\).

Examples

>>> input = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([1, 32, 3, 3]), mindspore.float32)
>>> depthwise_conv2d = P.DepthwiseConv2dNative(channel_multiplier=3, kernel_size=(3, 3))
>>> output = depthwise_conv2d(input, weight)
>>> assert output.shape() == (10, 96, 30, 30)
class mindspore.ops.operations.Diag(*args, **kwargs)[source]

Constructs a diagonal tensor from the given diagonal values.

Assume input_x has dimensions \([D_1,... D_k]\), the output is a tensor of rank 2k with dimensions \([D_1,..., D_k, D_1,..., D_k]\) where: \(output[i_1,..., i_k, i_1,..., i_k] = input_x[i_1,..., i_k]\) and 0 everywhere else.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor.

Examples

>>> input_x = Tensor([1, 2, 3, 4])
>>> diag = P.Diag()
>>> diag(input_x)
[[1, 0, 0, 0],
 [0, 2, 0, 0],
 [0, 0, 3, 0],
 [0, 0, 0, 4]]
class mindspore.ops.operations.DiagPart(*args, **kwargs)[source]

Extracts the diagonal part from the given tensor.

Assume input has dimensions \([D_1,..., D_k, D_1,..., D_k]\), the output is a tensor of rank k with dimensions \([D_1,..., D_k]\) where: \(output[i_1,..., i_k] = input[i_1,..., i_k, i_1,..., i_k]\).

Inputs:
  • input_x (Tensor) - The input Tensor.

Outputs:

Tensor.

Examples

>>> input_x = Tensor([[1, 0, 0, 0],
>>>                   [0, 2, 0, 0],
>>>                   [0, 0, 3, 0],
>>>                   [0, 0, 0, 4]])
>>> diag_part = P.DiagPart()
>>> diag_part(input_x)
[1, 2, 3, 4]
class mindspore.ops.operations.Div(*args, **kwargs)[source]

Computes the quotient of dividing the first input tensor by the second input tensor element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is same as ‘input_x’.

Raises

ValueError – When input_x and input_y do not have the same dtype.

Examples

>>> input_x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
>>> input_y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
>>> div = P.Div()
>>> div(input_x, input_y)
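
For reference, the expected values of this example (a plain NumPy cross-check, illustrative only):

>>> np.array([-4.0, 5.0, 6.0]) / np.array([3.0, 2.0, 3.0])  # [-1.3333, 2.5, 2.0]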
class mindspore.ops.operations.Dropout(*args, **kwargs)[source]

During training, randomly zeroes some of the elements of the input tensor with probability drop_prob.

Parameters

drop_prob (float) – probability of an element to be zeroed. Default: 0.

Inputs:
  • shape (tuple[int]) - The shape of target mask.

Outputs:

Tensor, the value of generated mask for input shape.

Examples

>>> dropout = P.Dropout(drop_prob=0.5)
>>> in_x = Tensor((20, 16, 50, 50))
>>> out = dropout(in_x)
class mindspore.ops.operations.DropoutDoMask(*args, **kwargs)[source]

Applies dropout mask on the input tensor.

Take the mask output of DropoutGenMask as input, and apply dropout on the input.

Inputs:
  • input_x (Tensor) - The input tensor.

  • mask (Tensor) - The mask to be applied to input_x, which is the output of DropoutGenMask. The shape of input_x must match the shape value passed to DropoutGenMask. If a wrong mask is given, the output of DropoutDoMask is unpredictable.

  • keep_prob (Tensor) - The keep rate, between 0 and 1. E.g. keep_prob = 0.9 means dropping out 10% of input units. The value of keep_prob must be the same as the keep_prob input of DropoutGenMask.

Outputs:

Tensor, the value that applied dropout on.

Examples

>>> x = Tensor(np.ones([20, 16, 50]), mindspore.float32)
>>> shape = (20, 16, 50)
>>> keep_prob = Tensor(0.5, mindspore.float32)
>>> dropout_gen_mask = P.DropoutGenMask()
>>> dropout_do_mask = P.DropoutDoMask()
>>> mask = dropout_gen_mask(shape, keep_prob)
>>> output = dropout_do_mask(x, mask, keep_prob)
>>> assert output.shape() == (20, 16, 50)
class mindspore.ops.operations.DropoutGenMask(*args, **kwargs)[source]

Generates the mask value for the input shape.

Parameters
  • Seed0 (int) – The first seed for random number generation. Default: 0.

  • Seed1 (int) – The second seed for random number generation. Default: 0.

Inputs:
  • shape (tuple[int]) - The shape of target mask.

  • keep_prob (Tensor) - The keep rate, between 0 and 1, e.g. keep_prob = 0.9, means dropping out 10% of input units.

Outputs:

Tensor, the value of generated mask for input shape.

Examples

>>> dropout_gen_mask = P.DropoutGenMask()
>>> shape = (20, 16, 50)
>>> keep_prob = Tensor(0.5, mindspore.float32)
>>> mask = dropout_gen_mask(shape, keep_prob)
class mindspore.ops.operations.DropoutGrad(*args, **kwargs)[source]

The gradient of Dropout. During training, randomly zeroes some of the elements of the input tensor with probability drop_prob.

Parameters

drop_prob (float) – probability of an element to be zeroed. Default: 0.

Inputs:
  • shape (tuple[int]) - The shape of target mask.

Outputs:

Tensor, the value of generated mask for input shape.

Examples

>>> dropout_grad = P.DropoutGrad(drop_prob=0.5)
>>> in_x = Tensor((20, 16, 50, 50))
>>> out = dropout_grad(in_x)
class mindspore.ops.operations.Elu(*args, **kwargs)[source]

Computes exponential linear: alpha * (exp(x) - 1) if x < 0, x otherwise. The data type of input tensor should be float.

Parameters

alpha (float) – The coefficient of the negative factor, whose type is float; only 1.0 is currently supported. Default: 1.0.

Inputs:
  • input_x (Tensor) - The input tensor whose data type should be float.

Outputs:

Tensor, has the same shape and data type as input_x.

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> elu = P.Elu()
>>> result = elu(input_x)
Tensor([[-0.632  4.0   -0.999]
        [2.0    -0.993  9.0  ]], shape=(2, 3), dtype=mindspore.float32)
class mindspore.ops.operations.Equal(*args, **kwargs)[source]

Computes the equivalence between two tensors element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a tensor whose data type is number or bool, or a number or a bool object.

  • input_y (Union[Tensor, Number, bool]) - The second input tensor whose data type is same as ‘input_x’ or a number or a bool object.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> equal = P.Equal()
>>> equal(input_x, 2.0)
[False, True, False]
>>>
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal = P.Equal()
>>> equal(input_x, input_y)
[True, True, False]
class mindspore.ops.operations.EqualCount(*args, **kwargs)[source]

Computes the number of the same elements of two tensors.

The two input tensors should have same shape and same data type.

Inputs:
  • input_x (Tensor) - The first input tensor.

  • input_y (Tensor) - The second input tensor.

Outputs:

Tensor, with the same type as the input tensor and shape (1,).

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> equal_count = P.EqualCount()
>>> equal_count(input_x, input_y)
[2]
class mindspore.ops.operations.Erf(*args, **kwargs)[source]

Computes the Gauss error function of input_x element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Examples

>>> input_x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erf = P.Erf()
>>> erf(input_x)
[-0.8427168, 0., 0.8427168, 0.99530876, 0.99997765]
class mindspore.ops.operations.Erfc(*args, **kwargs)[source]

Computes the complementary error function of input_x element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Examples

>>> input_x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
>>> erfc = P.Erfc()
>>> erfc(input_x)
[1.8427168, 0., 0.1572832, 0.00469124, 0.00002235]
class mindspore.ops.operations.Exp(*args, **kwargs)[source]

Returns exponential of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> exp = P.Exp()
>>> exp(input_x)
[ 2.71828183,  7.3890561 , 54.59815003]
class mindspore.ops.operations.ExpandDims(*args, **kwargs)[source]

Adds an additional dimension at the given axis.

Note

If the specified axis is a negative number, the index is counted backward from the end and starts at 1.

Raises

ValueError – If axis is not an integer or not in the valid range.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • axis (int) - Specifies the dimension index at which to expand the shape of input_x. The value of axis must be in the range [-input_x.dim()-1, input_x.dim()]. Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is \((1, x_1, x_2, ..., x_R)\) if the value of axis is 0.

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> expand_dims = P.ExpandDims()
>>> output = expand_dims(input_tensor, 0)
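
For reference, expanding at axis 0 turns the (2, 2) input into a (1, 2, 2) tensor; the NumPy analogue (illustrative only):

>>> np.expand_dims(np.array([[2, 2], [2, 2]]), 0).shape  # (1, 2, 2)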
class mindspore.ops.operations.Eye(*args, **kwargs)[source]

Creates a tensor with ones on the diagonal and zeros elsewhere.

Inputs:
  • n (int) - Number of rows of the returned tensor.

  • m (int) - Number of columns of the returned tensor.

  • t (mindspore.dtype) - MindSpore’s dtype; the data type of the returned tensor.

Outputs:

Tensor, a tensor with ones on the diagonal and zeros elsewhere.

Examples

>>> eye = P.Eye()
>>> out_tensor = eye(2, 2, mindspore.int32)
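
For reference, the expected result is the 2 x 2 identity matrix; the NumPy analogue (illustrative only):

>>> np.eye(2, 2, dtype=np.int32)  # [[1, 0], [0, 1]]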
class mindspore.ops.operations.FakeQuantWithMinMax(*args, **kwargs)[source]

Simulates the quantize and dequantize operations at training time.

Parameters
  • num_bits (int) – Number of bits for quantization-aware training. Default: 8.

  • ema (bool) – Whether to use the EMA algorithm to update the min and max values. Default: False.

  • ema_decay (float) – EMA algorithm decay parameter. Default: 0.999.

  • quant_delay (int) – Quantization delay parameter. Simulated quantization is not applied before quant_delay training steps, and is applied from that step onwards. Default: 0.

  • symmetric (bool) – Whether the quantization algorithm is symmetric. Default: False.

  • narrow_range (bool) – Whether the quantization algorithm uses a narrow range. Default: False.

  • training (bool) – Whether the network is in training mode. Default: True.

Inputs:
  • x (Tensor) : The input float32 tensor to be fake-quantized.

  • min (Tensor) : Value of the min range of the input data x.

  • max (Tensor) : Value of the max range of the input data x.

Outputs:
  • Tensor: The simulated quantized tensor of x.

Examples

>>> input_tensor = Tensor(np.random.rand(3, 16, 5, 5), mstype.float32)
>>> min_tensor = Tensor(np.array([-6]), mstype.float32)
>>> max_tensor = Tensor(np.array([6]), mstype.float32)
>>> output_tensor = P.FakeQuantWithMinMax(num_bits=8)(input_tensor, min_tensor, max_tensor)
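
As a rough illustration of the simulated quantize-dequantize round trip, a simplified NumPy sketch (illustrative only; it ignores the symmetric, narrow_range and ema options and is not the operator's exact kernel):

>>> x = np.random.rand(3, 16, 5, 5).astype(np.float32)
>>> qmin, qmax = -6.0, 6.0
>>> scale = (qmax - qmin) / (2 ** 8 - 1)  # step size for 8 bits
>>> x_fq = np.round((np.clip(x, qmin, qmax) - qmin) / scale) * scale + qmin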
class mindspore.ops.operations.FakeQuantWithMinMaxGrad(*args, **kwargs)[source]

Computes the gradient of the FakeQuantWithMinMax operation.

Examples

>>> fake_min_max_grad = P.FakeQuantWithMinMaxGrad()
>>> dout = Tensor(np.array([[-2.3, 1.2], [5.7, 0.2]]), mindspore.float32)
>>> input_x = Tensor(np.array([[18, -23], [0.2, 6]]), mindspore.float32)
>>> _min = Tensor(np.array([-4]), mindspore.float32)
>>> _max = Tensor(np.array([2]), mindspore.float32)
>>> result = fake_min_max_grad(dout, input_x, _min, _max)
class mindspore.ops.operations.FakeQuantWithMinMaxPerChannel(*args, **kwargs)[source]

Simulates the quantize and dequantize operations at training time on a per-channel basis.

Parameters
  • num_bits (int) – Number of bits for quantization. Default: 8.

  • ema (bool) – Whether to use the EMA algorithm to update the tensor min and tensor max values. Default: False.

  • ema_decay (float) – EMA algorithm decay parameter. Default: 0.999.

  • quant_delay (int) – Quantization delay parameter. The weight data is not fake-quantized before quant_delay training steps; the simulated quantize operation is applied from that step onwards. Default: 0.

  • symmetric (bool) – Whether the quantization algorithm is symmetric. Default: False.

  • narrow_range (bool) – Whether the quantization algorithm uses a narrow range. Default: False.

  • training (bool) – Whether the network is in training mode. Default: True.

Inputs:
  • x (Tensor) : The input 4-D float32 tensor to be fake-quantized.

  • min (int, float) : Value of the min range of the input data.

  • max (int, float) : Value of the max range of the input data.

Outputs:
  • Tensor, has the same type as input.

Examples

>>> fake_quant = P.FakeQuantWithMinMaxPerChannel()
>>> input_x = Tensor(np.array([3, 4, 5, -2, -3, -1]).reshape(3, 2), mindspore.float32)
>>> _min = Tensor(np.linspace(-2, 2, 12).reshape(3, 2, 2), mindspore.float32)
>>> _max = Tensor(np.linspace(8, 12, 12).reshape(3, 2, 2), mindspore.float32)
>>> result = fake_quant(input_x, _min, _max)
class mindspore.ops.operations.FakeQuantWithMinMaxPerChannelGrad(*args, **kwargs)[source]

Computes the gradient of the FakeQuantWithMinMaxPerChannel operation.

Examples

>>> fqmmpc_grad = P.FakeQuantWithMinMaxPerChannelGrad()
>>> input_x = Tensor(np.random.randint(-4, 4, (2, 3, 4)), mindspore.float32)
>>> dout = Tensor(np.random.randint(-2, 2, (2, 3, 4)), mindspore.float32)
>>> _min = Tensor(np.random.randint(-8, 2, (2, 3, 4)), mindspore.float32)
>>> _max = Tensor(np.random.randint(-2, 8, (2, 3, 4)), mindspore.float32)
>>> result = fqmmpc_grad(dout, input_x, _min, _max)
class mindspore.ops.operations.FakeQuantWithMinMaxUpdate(*args, **kwargs)[source]

Simulates the quantize and dequantize operations at training time.

Parameters
  • num_bits (int) – Number of bits for quantization-aware training. Default: 8.

  • ema (bool) – Whether to use the EMA algorithm to update the min and max values. Default: False.

  • ema_decay (float) – EMA algorithm decay parameter. Default: 0.999.

  • quant_delay (int) – Quantization delay parameter. Simulated quantization is not applied before quant_delay training steps, and is applied from that step onwards. Default: 0.

  • symmetric (bool) – Whether the quantization algorithm is symmetric. Default: False.

  • narrow_range (bool) – Whether the quantization algorithm uses a narrow range. Default: False.

  • training (bool) – Whether the network is in training mode. Default: True.

Inputs:
  • x (Tensor) : The input float32 tensor to be fake-quantized.

  • min (Tensor) : Value of the min range of the input data x.

  • max (Tensor) : Value of the max range of the input data x.

Outputs:
  • Tensor: The simulated quantized tensor of x.

Examples

>>> input_tensor = Tensor(np.random.rand(3, 16, 5, 5), mstype.float32)
>>> min_tensor = Tensor(np.array([-6]), mstype.float32)
>>> max_tensor = Tensor(np.array([6]), mstype.float32)
>>> output_tensor = P.FakeQuantWithMinMaxUpdate(num_bits=8)(input_tensor, min_tensor, max_tensor)
class mindspore.ops.operations.Fill(*args, **kwargs)[source]

Creates a tensor filled with a scalar value.

Creates a tensor with shape described by the first argument and fills it with values in the second argument.

Inputs:
  • type (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

  • shape (tuple) - The specified shape of output tensor. Only constant value is allowed.

  • value (scalar) - Value to fill the returned tensor. Only constant value is allowed.

Outputs:

Tensor, with the specified shape, filled with value, and of the specified type.

Examples

>>> fill = P.Fill()
>>> fill(mindspore.float32, (2, 2), 1)
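
For reference, the result is a 2 x 2 tensor of ones; the NumPy analogue (illustrative only):

>>> np.full((2, 2), 1, dtype=np.float32)  # [[1., 1.], [1., 1.]]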
class mindspore.ops.operations.Flatten(*args, **kwargs)[source]

Flattens a tensor without changing its batch size on the 0-th axis.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\) to be flattened.

Outputs:

Tensor, the shape of the output tensor is \((N, X)\), where \(X\) is the product of the remaining dimension.

Examples

>>> input_tensor = Tensor(np.ones(shape=[1, 2, 3, 4]), mindspore.float32)
>>> flatten = P.Flatten()
>>> output = flatten(input_tensor)
>>> assert output.shape() == (1, 24)
class mindspore.ops.operations.FloatStatus(*args, **kwargs)[source]

Determines whether the elements contain nan, inf or -inf. Returns 0 for normal, 1 for overflow.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the shape of (1,), and the dtype is mindspore.dtype.float32 or mindspore.dtype.float16, the same as the input.

Examples

>>> float_status = P.FloatStatus()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = float_status(input_x)
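
Here np.log(-1) is nan and np.log(0) is -inf, so the status flags an overflow. A NumPy sketch of the semantics (illustrative only, not the kernel):

>>> a = np.array([np.log(-1), 1, np.log(0)], dtype=np.float32)
>>> np.float32((~np.isfinite(a)).any())  # 1.0, i.e. overflow present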
class mindspore.ops.operations.Floor(*args, **kwargs)[source]

Rounds a tensor down to the closest integer element-wise.

Inputs:
  • input_x (Tensor) - The input tensor. Its element data type must be float.

Outputs:

Tensor, has the same shape as input_x.

Examples

>>> input_x = Tensor(np.array([1.1, 2.5, -1.5]), mindspore.float32)
>>> floor = P.Floor()
>>> floor(input_x)
[1.0, 2.0, -2.0]
class mindspore.ops.operations.FloorDiv(*args, **kwargs)[source]

Divides the first input tensor by the second input tensor element-wise and rounds down to the closest integer.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is same as ‘input_x’.

Examples

>>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> input_y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_div = P.FloorDiv()
>>> floor_div(input_x, input_y)
[0, 1, -1]
class mindspore.ops.operations.FloorMod(*args, **kwargs)[source]

Computes the element-wise remainder of division.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is same as ‘input_x’.

Examples

>>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
>>> input_y = Tensor(np.array([3, 3, 3]), mindspore.int32)
>>> floor_mod = P.FloorMod()
>>> floor_mod(input_x, input_y)
[2, 1, 2]
class mindspore.ops.operations.FusedBatchNorm(*args, **kwargs)[source]

FusedBatchNorm is a BatchNorm in which the moving mean and moving variance are computed during training instead of being loaded.

Batch Normalization is widely used in convolutional networks. This operation applies Batch Normalization over input to avoid internal covariate shift as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. It rescales and recenters the feature using a mini-batch of data and the learned parameters which can be described in the following formula.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • mode (int) – Mode of batch normalization, value is 0 or 1. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-5.

  • momentum (float) – The hyperparameter used to compute the moving average of running_mean and running_var (e.g. \(new\_running\_mean = momentum * running\_mean + (1 - momentum) * current\_mean\)). The momentum value must be in [0, 1]. Default: 0.9.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, C)\).

  • scale (Tensor) - Tensor of shape \((C,)\).

  • bias (Tensor) - Tensor of shape \((C,)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

Outputs:

Tuple of 5 Tensor, the normalized input and the updated parameters.

  • output_x (Tensor) - The same type and shape as the input_x.

  • updated_scale (Tensor) - Tensor of shape \((C,)\).

  • updated_bias (Tensor) - Tensor of shape \((C,)\).

  • updated_moving_mean (Tensor) - Tensor of shape \((C,)\).

  • updated_moving_variance (Tensor) - Tensor of shape \((C,)\).

Examples

>>> input_x = Tensor(np.ones([128, 64, 32, 64]), mindspore.float32)
>>> scale = Tensor(np.ones([64]), mindspore.float32)
>>> bias = Tensor(np.ones([64]), mindspore.float32)
>>> mean = Tensor(np.ones([64]), mindspore.float32)
>>> variance = Tensor(np.ones([64]), mindspore.float32)
>>> op = P.FusedBatchNorm()
>>> output = op(input_x, scale, bias, mean, variance)
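
As a sanity check of the formula above, a NumPy sketch using per-channel batch statistics (illustrative only; in this all-ones example the normalized term is zero, so with scale = bias = 1 the output is approximately all ones):

>>> x = np.ones([128, 64, 32, 64], np.float32)
>>> m = x.mean(axis=(0, 2, 3), keepdims=True)
>>> v = x.var(axis=(0, 2, 3), keepdims=True)
>>> y = (x - m) / np.sqrt(v + 1e-5) * 1.0 + 1.0  # approximately all ones here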
class mindspore.ops.operations.GatherNd(*args, **kwargs)[source]

Gathers slices from a tensor by indices.

Using given indices to gather slices from a tensor with a specified shape.

Inputs:
  • input_x (Tensor) - The target tensor to gather values.

  • indices (Tensor) - The index tensor.

Outputs:

Tensor, has the same type as input_x and the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].

Examples

>>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> op = P.GatherNd()
>>> output = op(input_x, indices)
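
Here indices has shape (2, 2) and its last dimension equals the rank of input_x, so the output shape is indices_shape[:-1] = (2,): the gathered values are input_x[0, 0] and input_x[1, 1]. A NumPy cross-check (illustrative only):

>>> a = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], np.float32)
>>> a[[0, 1], [0, 1]]  # [-0.1, 0.5]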
class mindspore.ops.operations.GatherV2(*args, **kwargs)[source]

Returns a slice of input tensor based on the specified indices and axis.

Inputs:
  • input_params (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\). The original Tensor.

  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Must be in the range [0, input_params.shape()[axis]).

  • axis (int) - Specifies the dimension index to gather indices.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).

Examples

>>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
>>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
>>> axis = 1
>>> out = P.GatherV2()(input_params, input_indices, axis)
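
For reference, this gathers columns 1 and 2 along axis 1; the NumPy analogue (illustrative only):

>>> params = np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]], np.float32)
>>> np.take(params, [1, 2], axis=1)  # [[2, 7], [4, 54], [2, 55]]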
class mindspore.ops.operations.GeSwitch(*args, **kwargs)[source]

Adds control switch to data.

Switches data to flow into the false or true branch depending on the condition. If the condition is true, the true branch will be activated; otherwise, the false branch will be.

Inputs:
  • data (Union[Tensor, Number]) - The data to be used for switch control.

  • pred (Tensor) - A scalar of type bool with shape (), used as the condition for switch control.

Outputs:

tuple. Output is tuple(false_output, true_output). The elements in the tuple have the same shape as the input data. The false_output connects with the false_branch and the true_output connects with the true_branch.

Examples

>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.square = P.Square()
>>>         self.add = P.TensorAdd()
>>>         self.value = Tensor(np.full((1), 3), mindspore.float32)
>>>         self.switch = P.GeSwitch()
>>>         self.merge = P.Merge()
>>>         self.less = P.Less()
>>>
>>>     def construct(self, x, y):
>>>         cond = self.less(x, y)
>>>         st1, sf1 = self.switch(x, cond)
>>>         st2, sf2 = self.switch(y, cond)
>>>         add_ret = self.add(st1, st2)
>>>         st3, sf3 = self.switch(self.value, cond)
>>>         sq_ret = self.square(sf3)
>>>         ret = self.merge((add_ret, sq_ret))
>>>         return ret[0]
>>>
>>> x = Tensor(10.0, dtype=mindspore.float32)
>>> y = Tensor(5.0, dtype=mindspore.float32)
>>> net = Net()
>>> output = net(x, y)
class mindspore.ops.operations.Gelu(*args, **kwargs)[source]

Gaussian Error Linear Units activation function.

GeLU is described in the paper Gaussian Error Linear Units (GELUs). Please also refer to BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

Defined as follows:

\[\text{output} = 0.5 * x * (1 + erf(x / \sqrt{2})),\]

where \(erf\) is the “Gauss error function”.

Inputs:
  • input_x (Tensor) - Input to compute the Gelu.

Outputs:

Tensor, with the same type and shape as input.

Examples

>>> tensor = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> gelu = P.Gelu()
>>> result = gelu(tensor)
class mindspore.ops.operations.GetNext(*args, **kwargs)[source]

Returns the next element in the dataset queue.

Note

The GetNext op must be associated with a network and depends on the init_dataset interface; it cannot be used directly as a standalone op. For details, please refer to the nn.DataWrapper source code.

Parameters
  • types (list[mindspore.dtype]) – The type of the outputs.

  • shapes (list[tuple[int]]) – The dimensionality of the outputs.

  • output_num (int) – The output number, length of types and shapes.

  • shared_name (str) – The queue name of init_dataset interface.

Inputs:

No inputs.

Outputs:

tuple[Tensor], the output of the Dataset. The shape is described in shapes and the type is described in types.

Examples

>>> get_next = P.GetNext([mindspore.float32, mindspore.int32], [[32, 1, 28, 28], [10]], 2, 'shared_name')
>>> feature, label = get_next()
class mindspore.ops.operations.Greater(*args, **kwargs)[source]

Computes the boolean value of \(x > y\) element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as input_x or a number.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater = P.Greater()
>>> greater(input_x, input_y)
[False, True, False]
class mindspore.ops.operations.GreaterEqual(*args, **kwargs)[source]

Computes the boolean value of \(x >= y\) element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as input_x or a number.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> greater_equal = P.GreaterEqual()
>>> greater_equal(input_x, input_y)
[True, True, False]
class mindspore.ops.operations.HSigmoid(*args, **kwargs)[source]

Hard sigmoid activation function.

Applies hard sigmoid activation element-wise. The input is a Tensor with any valid shape.

Hard sigmoid is defined as:

\[\text{hsigmoid}(x_{i}) = max(0, min(1, \frac{2 * x_{i} + 5}{10})),\]

where \(x_{i}\) is the \(i\)-th slice along the given dim of the input Tensor.

Inputs:
  • input_data (Tensor) - The input of HSigmoid.

Outputs:

Tensor, with the same type and shape as the input_data.

Examples

>>> hsigmoid = P.HSigmoid()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hsigmoid(input_x)
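
For reference, applying the definition above element-wise gives approximately [0.3, 0.1, 0.5, 0.9, 0.7]; a NumPy cross-check (illustrative only):

>>> x = np.array([-1, -2, 0, 2, 1], dtype=np.float16)
>>> np.maximum(0, np.minimum(1, (2 * x + 5) / 10))  # [0.3, 0.1, 0.5, 0.9, 0.7]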
class mindspore.ops.operations.HSwish(*args, **kwargs)[source]

Hard swish activation function.

Applies hswish-type activation element-wise. The input is a Tensor with any valid shape.

Hard swish is defined as:

\[\text{hswish}(x_{i}) = x_{i} * \frac{ReLU6(x_{i} + 3)}{6},\]

where \(x_{i}\) is the \(i\)-th slice along the given dim of the input Tensor.

Inputs:
  • input_data (Tensor) - The input of HSwish.

Outputs:

Tensor, with the same type and shape as the input_data.

Examples

>>> hswish = P.HSwish()
>>> input_x = Tensor(np.array([-1, -2, 0, 2, 1]), mindspore.float16)
>>> result = hswish(input_x)
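
For reference, applying the definition above element-wise (with ReLU6(x) = min(max(x, 0), 6)) gives approximately [-0.3333, -0.3333, 0., 1.6667, 0.6667]; a NumPy cross-check (illustrative only):

>>> x = np.array([-1, -2, 0, 2, 1], dtype=np.float16)
>>> x * np.clip(x + 3, 0, 6) / 6  # [-0.3333, -0.3333, 0., 1.6667, 0.6667]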
class mindspore.ops.operations.HistogramSummary(*args, **kwargs)[source]

Outputs a tensor to a protocol buffer through the histogram summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor, and the rank of tensor should be greater than 0.

Examples

>>> class SummaryDemo(nn.Cell):
>>>     def __init__(self,):
>>>         super(SummaryDemo, self).__init__()
>>>         self.summary = P.HistogramSummary()
>>>         self.add = P.TensorAdd()
>>>
>>>     def construct(self, x, y):
>>>         x = self.add(x, y)
>>>         name = "x"
>>>         self.summary(name, x)
>>>         return x
class mindspore.ops.operations.HookBackward(hook_fn, cell_id='')[source]

Used as a tag to hook the gradient of intermediate variables.

Note

The hook function takes a single input: the gradient of the variable. The hook function is executed in the Python environment, while the callback of InsertGradientOf is parsed and added to the graph.

Parameters

hook_fn (Function) – A Python function used as the hook.

Inputs:
  • inputs (Tensor) - The variable to hook.

Examples

>>> def hook_fn(grad_out):
>>>     print(grad_out)
>>>
>>> hook = P.HookBackward(hook_fn)
>>>
>>> def hook_test(x, y):
>>>     z = x * y
>>>     z = hook(z)
>>>     z = z * y
>>>     return z
>>>
>>> def backward(x, y):
>>>     return C.grad_all(hook_test)(x, y)
>>>
>>> backward(1, 2)
class mindspore.ops.operations.IOU(*args, **kwargs)[source]

Calculates the intersection over union for boxes.

Computes the intersection over union (IOU) or the intersection over foreground (IOF) based on the ground-truth and predicted regions.

\[\text{IOU} = \frac{\text{Area of Overlap}}{\text{Area of Union}}\]

\[\text{IOF} = \frac{\text{Area of Overlap}}{\text{Area of Ground Truth}}\]

Parameters

mode (str) – Specifies the calculation method; currently ‘iou’ (intersection over union) and ‘iof’ (intersection over foreground) modes are supported. Default: ‘iou’.

Inputs:
  • anchor_boxes (Tensor) - Anchor boxes, tensor of shape (N, 4). “N” indicates the number of anchor boxes, and the value “4” refers to “x0”, “x1”, “y0”, and “y1”.

  • gt_boxes (Tensor) - Ground truth boxes, tensor of shape (M, 4). “M” indicates the number of ground truth boxes, and the value “4” refers to “x0”, “x1”, “y0”, and “y1”.

Outputs:

Tensor, the ‘iou’ values, tensor of shape (M, N).

Raises

KeyError – When mode is not ‘iou’ or ‘iof’.

Examples

>>> iou = P.IOU()
>>> anchor_boxes = Tensor(np.random.randint(1, 5, [3, 4]), mindspore.float32)
>>> gt_boxes = Tensor(np.random.randint(1, 5, [3, 4]), mindspore.float32)
>>> iou(anchor_boxes, gt_boxes)
class mindspore.ops.operations.ImageSummary(*args, **kwargs)[source]

Outputs an image tensor to a protocol buffer through the image summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of image.

Examples

>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.summary = P.ImageSummary()
>>>
>>>     def construct(self, x):
>>>         name = "image"
>>>         out = self.summary(name, x)
>>>         return out
class mindspore.ops.operations.InsertGradientOf(*args, **kwargs)[source]

Attaches a callback to a graph node; the callback will be invoked on the node’s gradient.

Parameters

f (Function) – A MindSpore Function used as the callback.

Inputs:
  • input_x (Tensor) - The graph node to attach to.

Outputs:

Tensor, returns input_x directly. InsertGradientOf does not affect the forward result.

Examples

>>> def clip_gradient(dx):
>>>     ret = dx
>>>     if ret > 1.0:
>>>         ret = 1.0
>>>
>>>     if ret < 0.2:
>>>         ret = 0.2
>>>
>>>     return ret
>>>
>>> clip = P.InsertGradientOf(clip_gradient)
>>> grad_all = C.GradOperation('get_all', get_all=True)
>>> def InsertGradientOfClipDemo():
>>>     def clip_test(x, y):
>>>         x = clip(x)
>>>         y = clip(y)
>>>         c = x * y
>>>         return c
>>>
>>>     @ms_function
>>>     def f(x, y):
>>>         return clip_test(x, y)
>>>
>>>     def fd(x, y):
>>>         return grad_all(clip_test)(x, y)
>>>
>>>     print("forward: ", f(1.1, 0.1))
>>>     print("clip_gradient:", fd(1.1, 0.1))
class mindspore.ops.operations.InvertPermutation(*args, **kwargs)[source]

Computes the inverse of an index permutation.

This operation calculates the inverse of the index permutation. It requires a 1-dimensional tuple x, which represents an array starting at zero, and swaps each value with its index position. In other words, for the output tuple y and the input tuple x, this operation calculates the following: \(y[x[i]] = i, \quad i \in [0, 1, \ldots, \text{len}(x)-1]\).

Note

These values must include 0. There must be no duplicate values, and the values cannot be negative.

Inputs:
  • input_x (Union(tuple[int], Tensor[int])) - The input tuple is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\) representing the indices. The values must include 0. There can be no duplicate values or negative values. If the input is Tensor, it must be 1-d and the dtype is int.

Outputs:

tuple[int], with the same length as the input.

Examples

>>> invert = P.InvertPermutation()
>>> input_data = (3, 4, 0, 2, 1)
>>> output = invert(input_data)
>>> output == (2, 4, 3, 0, 1)
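
For reference, the inverse permutation coincides with NumPy's argsort of the input (illustrative cross-check only):

>>> np.argsort((3, 4, 0, 2, 1))  # [2, 4, 3, 0, 1]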
class mindspore.ops.operations.IsFinite(*args, **kwargs)[source]

Determines which elements are finite at each position.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input, and the dtype is bool.

Examples

>>> is_finite = P.IsFinite()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = is_finite(input_x)
[False   True   False]
class mindspore.ops.operations.IsInf(*args, **kwargs)[source]

Determines which elements are inf or -inf at each position.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input, and the dtype is bool.

Examples

>>> is_inf = P.IsInf()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = is_inf(input_x)
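
For reference, the input is [nan, 1, -inf], so the expected result is [False, False, True]; the NumPy analogue (illustrative only):

>>> np.isinf(np.array([np.log(-1), 1, np.log(0)], np.float32))  # [False, False, True]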
class mindspore.ops.operations.IsInstance(*args, **kwargs)[source]

Check whether an object is an instance of a target type.

Inputs:
  • inst (Any Object) - The instance to be checked. Only constant value is allowed.

  • type_ (mindspore.dtype) - The target type. Only constant value is allowed.

Outputs:

bool, the check result.

Examples

>>> a = 1
>>> result = P.IsInstance()(a, mindspore.int32)
class mindspore.ops.operations.IsNan(*args, **kwargs)[source]

Determines which elements are nan at each position.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input, and the dtype is bool.

Examples

>>> is_nan = P.IsNan()
>>> input_x = Tensor(np.array([np.log(-1), 1, np.log(0)]), mindspore.float32)
>>> result = is_nan(input_x)
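
For reference, the input is [nan, 1, -inf], so the expected result is [True, False, False]; the NumPy analogue (illustrative only):

>>> np.isnan(np.array([np.log(-1), 1, np.log(0)], np.float32))  # [True, False, False]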
class mindspore.ops.operations.IsSubClass(*args, **kwargs)[source]

Check whether one type is sub class of another type.

Inputs:
  • sub_type (mindspore.dtype) - The type to be checked. Only constant value is allowed.

  • type_ (mindspore.dtype) - The target type. Only constant value is allowed.

Outputs:

bool, the check result.

Examples

>>> result = P.IsSubClass()(mindspore.int32,  mindspore.intc)
class mindspore.ops.operations.L2Loss(*args, **kwargs)[source]

Calculates half of the L2 norm of a tensor without using the sqrt.

Denote input_x as x and the output as loss.

\[loss = sum(x ** 2) / 2\]
Inputs:
  • input_x (Tensor) - An input Tensor.

Outputs:

Tensor. Has the same dtype as input_x. The output tensor is the value of loss which is a scalar tensor.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float16)
>>> l2_loss = P.L2Loss()
>>> l2_loss(input_x)
7.0
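
The worked arithmetic behind this result: sum(x ** 2) / 2 = (1 + 4 + 9) / 2 = 7.0. A NumPy cross-check (illustrative only):

>>> np.sum(np.array([1., 2., 3.]) ** 2) / 2  # 7.0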
class mindspore.ops.operations.L2Normalize(*args, **kwargs)[source]

L2 Normalization Operator.

This operator normalizes the input along the given axis. The function is shown as follows:

\[\text{output} = \frac{x}{\sqrt{\text{max}(\text{sum} (\text{input_x}^2), \epsilon)}},\]

where \(\epsilon\) is epsilon.

Parameters
  • axis (int) – The begin axis for the input to apply L2 normalize. Default: 0.

  • epsilon (float) – A small value added for numerical stability. Default: 1e-4.

Inputs:
  • input_x (Tensor) - Input to compute the normalization.

Outputs:

Tensor, with the same type and shape as the input.

Examples

>>> l2_normalize = P.L2Normalize()
>>> input_x = Tensor(np.random.randint(-256, 256, (2, 3, 4)), mindspore.float32)
>>> result = l2_normalize(input_x)
[[[-0.47247353  -0.30934513  -0.4991462    0.8185567 ]
  [-0.08070751  -0.9961299   -0.5741758    0.09262337]
  [-0.9916556   -0.3049123    0.5730487   -0.40579924]]
 [[-0.88134485   0.9509498   -0.86651784   0.57442576]
  [ 0.99673784   0.08789381  -0.8187321    0.9957012 ]
  [ 0.12891524  -0.9523804   -0.81952125   0.91396334]]]
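A NumPy sketch of the formula above with the default axis=0 and epsilon=1e-4 (illustrative only; the input here is random, so values will differ):

>>> x = np.random.randn(2, 3, 4).astype(np.float32)
>>> x / np.sqrt(np.maximum((x ** 2).sum(axis=0, keepdims=True), 1e-4))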

class mindspore.ops.operations.LARSUpdate(*args, **kwargs)[source]

Conducts the LARS (layer-wise adaptive rate scaling) update on the square sum of the gradient.

Parameters
  • epsilon (float) – Term added to the denominator to improve numerical stability. Default: 1e-05.

  • hyperpara (float) – Trust coefficient for calculating the local learning rate. Default: 0.001.

  • use_clip (bool) – Whether to use clip operation for calculating the local learning rate. Default: False.

Inputs:
  • weight (Tensor) - The weight to be updated.

  • gradient (Tensor) - The gradient of weight, which has the same shape and dtype as weight.

  • norm_weight (Tensor) - A scalar tensor, representing the square sum of weight.

  • norm_gradient (Tensor) - A scalar tensor, representing the square sum of gradient.

  • weight_decay (Union[Number, Tensor]) - Weight decay. It should be a scalar tensor or number.

  • learning_rate (Union[Number, Tensor]) - Learning rate. It should be a scalar tensor or number.

Outputs:

Tensor, representing the new gradient.

Examples

>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> from mindspore.ops import functional as F
>>> import mindspore.nn as nn
>>> import numpy as np
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.lars = P.LARSUpdate()
>>>         self.reduce = P.ReduceSum()
>>>     def construct(self, weight, gradient):
>>>         w_square_sum = self.reduce(F.square(weight))
>>>         grad_square_sum = self.reduce(F.square(gradient))
>>>         grad_t = self.lars(weight, gradient, w_square_sum, grad_square_sum, 0.0, 1.0)
>>>         return grad_t
>>> weight = np.random.random(size=(2, 3)).astype(np.float32)
>>> gradient = np.random.random(size=(2, 3)).astype(np.float32)
>>> net = Net()
>>> ms_output = net(Tensor(weight), Tensor(gradient))
class mindspore.ops.operations.LSTM(*args, **kwargs)[source]

Performs the Long Short-Term Memory (LSTM) operation on the input.

For detailed information, please refer to nn.LSTM.

class mindspore.ops.operations.LayerNorm(*args, **kwargs)[source]

Applies the Layer Normalization to the input tensor.

This operator normalizes the input tensor over the given axis. LayerNorm is described in the paper Layer Normalization.

\[y = \frac{x - mean}{\sqrt{variance + \epsilon}} * \gamma + \beta\]

where \(\gamma\) is scale, \(\beta\) is bias, \(\epsilon\) is epsilon.

Parameters
  • begin_norm_axis (int) – The begin axis of the input_x to apply LayerNorm, the value should be in [-1, rank(input)). Default: 1.

  • begin_params_axis (int) – The begin axis of the parameter input (gamma, beta) to apply LayerNorm, the value should be in [-1, rank(input)). Default: 1.

  • epsilon (float) – A value added to the denominator for numerical stability. Default: 1e-7.

Inputs:
  • input_x (Tensor) - Tensor of shape \((N, \ldots)\). The input of LayerNorm.

  • gamma (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter gamma as the scale on norm.

  • beta (Tensor) - Tensor of shape \((P_0, \ldots, P_\text{begin_params_axis})\). The learnable parameter beta as the offset on norm.

Outputs:

tuple[Tensor], tuple of 3 tensors, the normalized input and the updated parameters.

  • output_x (Tensor) - The normalized input, has the same type and shape as the input_x. The shape is \((N, C)\).

  • mean (Tensor) - Tensor of shape \((C,)\).

  • variance (Tensor) - Tensor of shape \((C,)\).

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> gamma = Tensor(np.ones([3]), mindspore.float32)
>>> beta = Tensor(np.ones([3]), mindspore.float32)
>>> layer_norm = P.LayerNorm()
>>> output = layer_norm(input_x, gamma, beta)
([[-0.22474492, 1., 2.2247488], [-0.22474492, 1., 2.2247488]],
 [[2.], [2.]], [[0.6666667], [0.6666667]])
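
A worked check of the first row: mean([1, 2, 3]) = 2 and var([1, 2, 3]) = 2/3, so with gamma = beta = 1 the normalized values are (x - 2) / sqrt(2/3 + 1e-7) + 1, i.e. approximately [-0.2247, 1.0, 2.2247]. A NumPy cross-check (illustrative only):

>>> row = np.array([1., 2., 3.])
>>> (row - row.mean()) / np.sqrt(row.var() + 1e-7) + 1.0  # [-0.2247, 1., 2.2247]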
class mindspore.ops.operations.Less(*args, **kwargs)[source]

Computes the boolean value of \(x < y\) element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as input_x or a number.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less = P.Less()
>>> less(input_x, input_y)
[False, False, True]
class mindspore.ops.operations.LessEqual(*args, **kwargs)[source]

Computes the boolean value of \(x <= y\) element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as input_x or a number.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 1, 4]), mindspore.int32)
>>> less_equal = P.LessEqual()
>>> less_equal(input_x, input_y)
[True, False, True]
class mindspore.ops.operations.Log(*args, **kwargs)[source]

Returns the natural logarithm of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log = P.Log()
>>> log(input_x)
[0.0, 0.69314718, 1.38629436]
class mindspore.ops.operations.Log1p(*args, **kwargs)[source]

Returns the natural logarithm of one plus the input tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> log1p = P.Log1p()
>>> log1p(input_x)
[0.6931472, 1.0986123, 1.609438]
class mindspore.ops.operations.LogSoftmax(*args, **kwargs)[source]

Log Softmax activation function.

Applies the Log Softmax function to the input tensor on the specified axis. Suppose a slice in the given axis is \(x\); then for each element \(x_i\), the Log Softmax function is shown as follows:

\[\text{output}(x_i) = \log \left(\frac{\exp(x_i)} {\sum_{j = 0}^{N-1}\exp(x_j)}\right),\]

where \(N\) is the length of the Tensor.

Parameters

axis (int) – The axis to do the Log softmax operation. Default: -1.

Inputs:
  • logits (Tensor) - The input of Log Softmax.

Outputs:

Tensor, with the same type and shape as the logits.

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> log_softmax = P.LogSoftmax()
>>> log_softmax(input_x)
[-4.4519143, -3.4519143, -2.4519143, -1.4519144, -0.4519144]
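
For reference, the definition above is equivalent to x - log(sum(exp(x))); a NumPy cross-check (illustrative only):

>>> x = np.array([1., 2., 3., 4., 5.])
>>> x - np.log(np.exp(x).sum())  # [-4.4519, -3.4519, -2.4519, -1.4519, -0.4519]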
class mindspore.ops.operations.LogicalAnd(*args, **kwargs)[source]

Computes the “logical AND” of two tensors element-wise.

The inputs must be two tensors, or one tensor and one bool object. When the inputs are two tensors, their shapes must be broadcastable and their data types must both be bool. When the inputs are one tensor and one bool object, the bool object must be a constant (it cannot be a parameter), and the data type of the tensor must be bool.

Inputs:
  • input_x (Union[Tensor, bool]) - The first input is a tensor whose data type is bool or a bool object.

  • input_y (Union[Tensor, bool]) - The second input is a tensor whose data type is bool or a bool object.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> input_y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_and = P.LogicalAnd()
>>> logical_and(input_x, input_y)
[True, False, False]
class mindspore.ops.operations.LogicalNot(*args, **kwargs)[source]

Computes the “logical NOT” of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is bool.

Outputs:

Tensor, the shape is same as the input_x, and the dtype is bool.

Examples

>>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> logical_not = P.LogicalNot()
>>> logical_not(input_x)
[False, True, False]
class mindspore.ops.operations.LogicalOr(*args, **kwargs)[source]

Computes the “logical OR” of two tensors element-wise.

The inputs must be two tensors, or one tensor and one bool object. When the inputs are two tensors, their shapes must be broadcastable and their data types must both be bool. When the inputs are one tensor and one bool object, the bool object must be a constant (it cannot be a parameter), and the data type of the tensor must be bool.

Inputs:
  • input_x (Union[Tensor, bool]) - The first input is a tensor whose data type is bool or a bool object.

  • input_y (Union[Tensor, bool]) - The second input is a tensor whose data type is bool or a bool object.

Outputs:

Tensor, the shape is same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([True, False, True]), mindspore.bool_)
>>> input_y = Tensor(np.array([True, True, False]), mindspore.bool_)
>>> logical_or = P.LogicalOr()
>>> logical_or(input_x, input_y)
[True, True, True]
class mindspore.ops.operations.MakeRefKey(*args, **kwargs)[source]

Makes a RefKey instance from a string. RefKey stores the name of a Parameter; it can be passed through functions and used as the target of Assign.

Parameters

tag (str) – Parameter name to make the RefKey.

Inputs:

No input.

Outputs:

RefKeyType, made from the Parameter name.

Examples

>>> from mindspore.ops import functional as F
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.y = mindspore.Parameter(Tensor(np.ones([6, 8, 10]), mindspore.int32), name="y")
>>>         self.make_ref_key = P.MakeRefKey("y")
>>>
>>>     def construct(self, x):
>>>         key = self.make_ref_key()
>>>         ref = F.make_ref(key, x, self.y)
>>>         return ref * x
>>>
>>> x = Tensor(np.ones([3, 4, 5]), mindspore.int32)
>>> net = Net()
>>> net(x)
class mindspore.ops.operations.MatMul(*args, **kwargs)[source]

Multiplies matrix a by matrix b.

The rank of input tensors must be 2.

Parameters
  • transpose_a (bool) – If True, a is transposed before multiplication. Default: False.

  • transpose_b (bool) – If True, b is transposed before multiplication. Default: False.

Inputs:
  • input_x (Tensor) - The first tensor to be multiplied. The shape of the tensor is \((N, C)\). If transpose_a is True, its shape should be \((N, C)\) after transposing.

  • input_y (Tensor) - The second tensor to be multiplied. The shape of the tensor is \((C, M)\). If transpose_b is True, its shape should be \((C, M)\) after transpose.

Outputs:

Tensor, the shape of the output tensor is \((N, M)\).

Examples

>>> input_x = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
>>> input_y = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
>>> matmul = P.MatMul()
>>> output = matmul(input_x, input_y)
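
For reference, a (1, 3) all-ones matrix times a (3, 4) all-ones matrix yields a (1, 4) matrix of threes; a NumPy cross-check (illustrative only):

>>> np.ones((1, 3)) @ np.ones((3, 4))  # [[3., 3., 3., 3.]]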
class mindspore.ops.operations.MaxPool(*args, **kwargs)[source]

Max pooling operation.

Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value, is an int number that represents height and width are both ksize, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • padding (str) –

    The optional values for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. Output height and width will be the same as the input. Total number of padding will be calculated for horizontal and vertical direction and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The largest possible height and width of the output will be returned without padding. Extra pixels will be discarded.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tensor, with shape \((N, C_{out}, H_{out}, W_{out})\).

Examples

>>> input_tensor = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_op = P.MaxPool(padding="VALID", ksize=2, strides=1)
>>> output_tensor = maxpool_op(input_tensor)
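
For reference, with "VALID" padding, a 2 x 2 kernel and stride 1, the output spatial size is (3 - 2 + 1, 4 - 2 + 1) = (2, 3), so output_tensor has shape (1, 3, 2, 3). A NumPy cross-check (illustrative only; requires NumPy >= 1.20 for sliding_window_view):

>>> from numpy.lib.stride_tricks import sliding_window_view
>>> x = np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)).astype(np.float32)
>>> sliding_window_view(x, (2, 2), axis=(2, 3)).max(axis=(-2, -1)).shape  # (1, 3, 2, 3)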
class mindspore.ops.operations.MaxPoolWithArgmax(ksize=1, strides=1, padding='valid')[source]

Performs max pooling on the input Tensor and returns both max values and indices.

Typically the input is of shape \((N_{in}, C_{in}, H_{in}, W_{in})\), MaxPool outputs regional maximum in the \((H_{in}, W_{in})\)-dimension. Given kernel size \(ks = (h_{ker}, w_{ker})\) and stride \(s = (s_0, s_1)\), the operation is as follows.

\[\text{output}(N_i, C_j, h, w) = \max_{m=0, \ldots, h_{ker}-1} \max_{n=0, \ldots, w_{ker}-1} \text{input}(N_i, C_j, s_0 \times h + m, s_1 \times w + n)\]
Parameters
  • ksize (Union[int, tuple[int]]) – The size of kernel used to take the maximum value and arg value, is an int number that represents height and width are both ksize, or a tuple of two int numbers that represent height and width respectively. Default: 1.

  • strides (Union[int, tuple[int]]) – The distance of kernel moving, an int number that represents the height and width of movement are both strides, or a tuple of two int numbers that represent height and width of movement respectively. Default: 1.

  • padding (str) –

    The optional values for pad mode, is “same” or “valid”, not case sensitive. Default: “valid”.

    • same: Adopts the way of completion. Output height and width will be the same as the input. Total number of padding will be calculated for horizontal and vertical direction and evenly distributed to top and bottom, left and right if possible. Otherwise, the last extra padding will be done from the bottom and the right side.

    • valid: Adopts the way of discarding. The largest possible height and width of the output will be returned without padding. Extra pixels will be discarded.

Inputs:
  • input (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\).

Outputs:

Tuple of 2 Tensors: the maxpool result and the indices of the max values.

  • output (Tensor) - Maxpooling result, with shape \((N, C_{out}, H_{out}, W_{out})\).

  • mask (Tensor) - The indices of the max values, represented by a mask.

Examples

>>> input_tensor = Tensor(np.arange(1 * 3 * 3 * 4).reshape((1, 3, 3, 4)), mindspore.float32)
>>> maxpool_arg_op = P.MaxPoolWithArgmax(padding="VALID", ksize=2, strides=1)
>>> output_tensor, argmax = maxpool_arg_op(input_tensor)
class mindspore.ops.operations.Maximum(*args, **kwargs)[source]

Computes the element-wise maximum of input tensors.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes can be broadcast and their data types must be the same. When the inputs are one tensor and one scalar, the scalar can only be a constant, not a parameter, and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as ‘input_x’.

Examples

>>> input_x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> maximum = P.Maximum()
>>> maximum(input_x, input_y)
[4.0, 5.0, 6.0]
class mindspore.ops.operations.Merge(*args, **kwargs)[source]

Merges all input data to one.

One and only one of the inputs is selected as the output.

Inputs:
  • inputs (Tuple) - The data to be merged. All tuple elements should have same data type.

Outputs:

tuple. The output is tuple(data, output_index). data has the same shape as the input elements.

Examples

>>> merge = P.Merge()
>>> input_x = Tensor(np.linspace(0, 8, 8).reshape(2, 4), mindspore.float32)
>>> input_y = Tensor(np.random.randint(-4, 4, (2, 4)), mindspore.float32)
>>> result = merge((input_x, input_y))
class mindspore.ops.operations.Minimum(*args, **kwargs)[source]

Computes the element-wise minimum of input tensors.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes can be broadcast and their data types must be the same. When the inputs are one tensor and one scalar, the scalar can only be a constant, not a parameter, and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as ‘input_x’.

Examples

>>> input_x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
>>> minimum = P.Minimum()
>>> minimum(input_x, input_y)
[1.0, 2.0, 3.0]
class mindspore.ops.operations.MirrorPad(*args, **kwargs)[source]

Pads the input tensor according to the paddings and mode.

Parameters

mode (str) – Specifies padding mode. The optional values are “REFLECT”, “SYMMETRIC”. Default: “REFLECT”.

Inputs:
  • input_x (Tensor) - The input tensor.

  • paddings (Tensor) - The paddings tensor. The value of paddings is a matrix(list), and its shape is (N, 2). N is the rank of input data. All elements of paddings are int type. For D th dimension of input, paddings[D, 0] indicates how many sizes to be extended ahead of the D th dimension of the input tensor, and paddings[D, 1] indicates how many sizes to be extended behind of the D th dimension of the input tensor.

Outputs:

Tensor, the tensor after padding.

  • If mode is “REFLECT”, it fills by copying symmetrically through the axis of symmetry, excluding the symmetry axis itself. If input_x is [[1,2,3],[4,5,6],[7,8,9]] and paddings is [[1,1],[2,2]], then the output is [[6,5,4,5,6,5,4],[3,2,1,2,3,2,1],[6,5,4,5,6,5,4],[9,8,7,8,9,8,7],[6,5,4,5,6,5,4]].

  • If mode is “SYMMETRIC”, the filling method is similar to “REFLECT”: it also copies according to the symmetry axis, except that the symmetry axis itself is included. If input_x is [[1,2,3],[4,5,6],[7,8,9]] and paddings is [[1,1],[2,2]], then the output is [[2,1,1,2,3,3,2],[2,1,1,2,3,3,2],[5,4,4,5,6,6,5],[8,7,7,8,9,9,8],[8,7,7,8,9,9,8]]. A NumPy sketch of both modes follows this list.
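For reference, NumPy’s np.pad appears to implement the same two fill rules; a minimal sketch with the example values above:

>>> import numpy as np
>>> x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> # 'reflect' excludes the edge row/column, matching "REFLECT"
>>> np.pad(x, ((1, 1), (2, 2)), mode='reflect')
>>> # 'symmetric' includes the edge row/column, matching "SYMMETRIC"
>>> np.pad(x, ((1, 1), (2, 2)), mode='symmetric')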

Examples

>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> import mindspore.nn as nn
>>> import numpy as np
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.pad = P.MirrorPad(mode="REFLECT")
>>>     def construct(self, x, paddings):
>>>         return self.pad(x, paddings)
>>> x = np.random.random(size=(2, 3)).astype(np.float32)
>>> paddings = Tensor([[1,1],[2,2]])
>>> pad = Net()
>>> ms_output = pad(Tensor(x), paddings)
class mindspore.ops.operations.Mul(*args, **kwargs)[source]

Multiplies two tensors element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes can be broadcast and their data types must be the same. When the inputs are one tensor and one scalar, the scalar can only be a constant, not a parameter, and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as ‘input_x’.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> mul = P.Mul()
>>> mul(input_x, input_y)
[4, 10, 18]
class mindspore.ops.operations.NMSWithMask(*args, **kwargs)[source]

Selects some bounding boxes in descending order of score.

Parameters

iou_threshold (float) – Specifies the threshold of overlap boxes with respect to IOU. Default: 0.5.

Raises

ValueError – If the iou_threshold is not a float number, or if the first dimension of input Tensor is less than or equal to 0, or if the data type of the input Tensor is not float16 or float32.

Inputs:
  • bboxes (Tensor) - The shape of tensor is \((N, 5)\). Input bounding boxes. N is the number of input bounding boxes. Every bounding box contains 5 values, the first 4 values are the coordinates of bounding box, and the last value is the score of this bounding box.

Outputs:

tuple[Tensor], tuple of three tensors, they are selected_boxes, selected_idx and selected_mask.

  • selected_boxes (Tensor) - The shape of tensor is \((N, 5)\). Bounding boxes list after non-max suppression calculation.

  • selected_idx (Tensor) - The shape of tensor is \((N,)\). The indexes list of valid input bounding boxes.

  • selected_mask (Tensor) - The shape of tensor is \((N,)\). A mask list of valid output bounding boxes.

Examples

>>> bbox = np.random.rand(128, 5)
>>> bbox[:, 2] += bbox[:, 0]
>>> bbox[:, 3] += bbox[:, 1]
>>> inputs = Tensor(bbox, mindspore.float32)
>>> nms = P.NMSWithMask(0.5)
>>> output_boxes, indices, mask = nms(inputs)
class mindspore.ops.operations.NPUAllocFloatStatus(*args, **kwargs)[source]

Allocates a flag to store the overflow status.

The flag is a tensor whose shape is (8,) and data type is mindspore.dtype.float32.

Note

Examples: see NPUGetFloatStatus.

Outputs:

Tensor, has the shape of (8,).

Examples

>>> alloc_status = P.NPUAllocFloatStatus()
>>> init = alloc_status()
Tensor([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], shape=(8,), dtype=mindspore.float32)
class mindspore.ops.operations.NPUClearFloatStatus(*args, **kwargs)[source]

Clears the flag which stores the overflow status.

Note

The flag is in the register on the Ascend device. It will be reset and cannot be reused again after NPUClearFloatStatus is called.

Examples: see NPUGetFloatStatus.

Inputs:
  • input_x (Tensor) - The output tensor of NPUAllocFloatStatus.

Outputs:

Tensor, has the same shape as input_x. All the elements in the tensor will be zero.

Examples

>>> alloc_status = P.NPUAllocFloatStatus()
>>> get_status = P.NPUGetFloatStatus()
>>> clear_status = P.NPUClearFloatStatus()
>>> init = alloc_status()
>>> flag = get_status(init)
>>> clear = clear_status(init)
Tensor([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], shape=(8,), dtype=mindspore.float32)
class mindspore.ops.operations.NPUGetFloatStatus(*args, **kwargs)[source]

Updates the flag, which is the output tensor of NPUAllocFloatStatus, with the latest overflow status.

The flag is a tensor whose shape is (8,) and data type is mindspore.dtype.float32. If the sum of the flag equals 0, no overflow has occurred. If the sum of the flag is greater than 0, an overflow has occurred.

Inputs:
  • input_x (Tensor) - The output tensor of NPUAllocFloatStatus.

Outputs:

Tensor, has the same shape as input_x. All the elements in the tensor will be zero.

Examples

>>> alloc_status = P.NPUAllocFloatStatus()
>>> get_status = P.NPUGetFloatStatus()
>>> init = alloc_status()
>>> flag = get_status(init)
Tensor([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], shape=(8,), dtype=mindspore.float32)
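The three NPU status operators are typically used together: allocate a flag, run the monitored computation, read the flag, then clear it. A minimal sketch of this pattern (the ReduceSum-based check is an assumption, not part of the operators’ contract):

>>> alloc_status = P.NPUAllocFloatStatus()
>>> get_status = P.NPUGetFloatStatus()
>>> clear_status = P.NPUClearFloatStatus()
>>> reduce_sum = P.ReduceSum()
>>> init = alloc_status()            # allocate the (8,) status flag
>>> # ... run the computation to be monitored here ...
>>> flag = get_status(init)          # write the latest overflow status into init
>>> overflow = reduce_sum(init, 0)   # a sum greater than 0 means overflow occurred
>>> clear = clear_status(init)       # reset the register before the next step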
class mindspore.ops.operations.Neg(*args, **kwargs)[source]

Returns a tensor with negative values of the input tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is number.

Outputs:

Tensor, has the same shape and dtype as input.

Examples

>>> neg = P.Neg()
>>> input_x = Tensor(np.array([1, 2, -1, 2, 0, -3.5]), mindspore.float32)
>>> result = neg(input_x)
[-1.  -2.   1.  -2.   0.   3.5]
class mindspore.ops.operations.NotEqual(*args, **kwargs)[source]

Computes the non-equivalence of two tensors element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes can be broadcast and their data types must be the same. When the inputs are one tensor and one scalar, the scalar can only be a constant, not a parameter, and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number, bool]) - The first input is a tensor whose data type is number or bool, or a number or a bool object.

  • input_y (Union[Tensor, Number, bool]) - The second input is a tensor whose data type is the same as input_x, or a number, or a bool object.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is bool.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> not_equal = P.NotEqual()
>>> not_equal(input_x, 2.0)
[True, False, True]
>>>
>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([1, 2, 4]), mindspore.int32)
>>> not_equal = P.NotEqual()
>>> not_equal(input_x, input_y)
[False, False, True]
class mindspore.ops.operations.OneHot(*args, **kwargs)[source]

Computes a one-hot tensor.

Makes a new tensor in which the locations specified by indices take value on_value, while all other locations take value off_value.

Note

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis.

Parameters

axis (int) – Position to insert the value. For example, if the shape of indices is [n, c] and axis is -1, the output shape will be [n, c, depth]; if axis is 0, the output shape will be [depth, n, c]. Default: -1.

Inputs:
  • indices (Tensor) - A tensor of indices. Tensor of shape \((X_0, \ldots, X_n)\).

  • depth (int) - A scalar defining the depth of the one hot dimension.

  • on_value (Tensor) - A value to fill in output when indices[j] = i.

  • off_value (Tensor) - A value to fill in output when indices[j] != i.

Outputs:

Tensor, one_hot tensor. Tensor of shape \((X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)\).

Examples

>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
>>> onehot = P.OneHot()
>>> result = onehot(indices, depth, on_value, off_value)
[[1, 0, 0], [0, 1, 0], [0, 0, 1]]
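For on_value 1 and off_value 0, the axis semantics can be sketched in NumPy (an assumed equivalence, using np.eye to build the one-hot rows):

>>> import numpy as np
>>> indices = np.array([0, 2])
>>> depth = 3
>>> one_hot = np.eye(depth)[indices]   # axis=-1: shape (2, depth), [[1,0,0],[0,0,1]]
>>> np.moveaxis(one_hot, -1, 0)        # axis=0:  shape (depth, 2), [[1,0],[0,0],[0,1]]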
class mindspore.ops.operations.OnesLike(*args, **kwargs)[source]

Creates a new tensor whose elements are all 1.

Returns a tensor of ones with the same shape and type as the input.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and type as input_x but filled with ones.

Examples

>>> oneslike = P.OnesLike()
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
>>> output = oneslike(x)
class mindspore.ops.operations.PReLU(*args, **kwargs)[source]

Parametric Rectified Linear Unit activation function.

PReLU is described in the paper Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Defined as follows:

\[prelu(x_i)= \max(0, x_i) + \min(0, w * x_i),\]

where \(x_i\) is an element of a channel of the input.

Note

1-dimensional input_x is not supported.

Inputs:
  • input_x (Tensor) - Float tensor, representing the output of the previous layer.

  • weight (Tensor) - Float tensor with w > 0. Only two shapes are legitimate: 1, or the number of channels of the input.

Outputs:

Tensor, with the same type as input_x.

For detailed information, please refer to nn.PReLU.

Examples

>>> import mindspore
>>> import mindspore.nn as nn
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.prelu = P.PReLU()
>>>     def construct(self, input_x, weight):
>>>         result = self.prelu(input_x, weight)
>>>         return result
>>>
>>> input_x = Tensor(np.random.randint(-3, 3, (2, 3, 2)), mindspore.float32)
>>> weight = Tensor(np.array([0.1, 0.6, -0.3]), mindspore.float32)
>>> net = Net()
>>> result = net(input_x, weight)
[[[-0.1  1. ]
  [ 0.   2. ]
  [ 0.   0. ]]
 [[-0.2 -0.1]
  [ 2.  -1.8000001]
  [ 0.6  0.6]]]

class mindspore.ops.operations.Pack(*args, **kwargs)[source]

Packs a list of tensors along the specified axis.

Packs the list of input tensors, each of rank R, into an output tensor of rank (R+1).

Given input tensors of shape \((x_1, x_2, ..., x_R)\). Set the number of input tensors as N. If \(0 \le axis\), the output tensor shape is \((x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)\).

Parameters

axis (int) – Dimension along which to pack. Default: 0. Negative values wrap around. The range is [-(R+1), R+1).

Inputs:
  • input_x (Union[tuple, list]) - A Tuple or list of Tensor objects with the same shape and type.

Outputs:

Tensor. A packed Tensor with the same type as input_x.

Raises
  • TypeError – If the data types of elements in input_x are not the same.

  • ValueError – If length of input_x is not greater than 1; or if axis is out of the range [-(R+1), R+1); or if the shapes of elements in input_x are not the same.

Examples

>>> data1 = Tensor(np.array([0, 1]).astype(np.float32))
>>> data2 = Tensor(np.array([2, 3]).astype(np.float32))
>>> pack = P.Pack()
>>> output = pack([data1, data2])
[[0, 1], [2, 3]]
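Under the assumed equivalence below, Pack behaves like np.stack; the axis chooses where the new N dimension lands:

>>> import numpy as np
>>> a = np.array([0, 1], np.float32)
>>> b = np.array([2, 3], np.float32)
>>> np.stack([a, b], axis=0)   # [[0, 1], [2, 3]] — matches the Pack example above
>>> np.stack([a, b], axis=1)   # [[0, 2], [1, 3]] — new axis at dimension 1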
class mindspore.ops.operations.Pad(*args, **kwargs)[source]

Pads input tensor according to the paddings.

Parameters

paddings (tuple) – The shape of parameter paddings is (N, 2). N is the rank of input data. All elements of paddings are int type. For D th dimension of input, paddings[D, 0] indicates how many sizes to be extended ahead of the D th dimension of the input tensor, and paddings[D, 1] indicates how many sizes to be extended behind of the D th dimension of the input tensor.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, the tensor after padding.

Examples

>>> input_tensor = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> pad_op = P.Pad(((1, 2), (2, 1)))
>>> output_tensor = pad_op(input_tensor)
>>> assert output_tensor == Tensor(np.array([[ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
>>>                                          [ 0. ,  0. , -0.1,  0.3,  3.6,  0. ],
>>>                                          [ 0. ,  0. ,  0.4,  0.5, -3.2,  0. ],
>>>                                          [ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
>>>                                          [ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ]]), mindspore.float32)
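The paddings semantics are sketched below as zero padding via np.pad (an assumed equivalence that reproduces the asserted output above):

>>> import numpy as np
>>> x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], np.float32)
>>> # one (ahead, behind) pair per input dimension, as in the paddings parameter
>>> np.pad(x, ((1, 2), (2, 1)), mode='constant', constant_values=0)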
class mindspore.ops.operations.Pow(*args, **kwargs)[source]

Computes a tensor to the power of the second input.

The first input must be a tensor, and the second input must be a tensor or a number. When the inputs are two tensors, their shapes can be broadcast and their data types must be the same. When the inputs are one tensor and one scalar, the scalar can only be a constant, not a parameter, and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Tensor) - The first input is a tensor whose data type is number.

  • input_y (Union[Tensor, Number]) - The second input is the exponent: a number, or a tensor whose data type is the same as ‘input_x’ and whose shape can be broadcast to the shape of input_x.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as ‘input_x’.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> input_y = 3.0
>>> pow = P.Pow()
>>> pow(input_x, input_y)
[1.0, 8.0, 64.0]
>>>
>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> input_y = Tensor(np.array([2.0, 4.0, 3.0]), mindspore.float32)
>>> pow = P.Pow()
>>> pow(input_x, input_y)
[1.0, 16.0, 64.0]
class mindspore.ops.operations.Print(*args, **kwargs)[source]

Outputs a tensor or string to stdout.

Note

Currently, the print operation does not support the following cases.

  1. The type of tensor is float64 or bool.

  2. The data of tensor is a scalar type.

Inputs:
  • input_x (Union[Tensor, str]) - The graph node to attach to. The input supports multiple strings and tensors which are separated by ‘,’.

Examples

>>> class PrintDemo(nn.Cell):
>>>     def __init__(self):
>>>         super(PrintDemo, self).__init__()
>>>         self.print = P.Print()
>>>
>>>     def construct(self, x, y):
>>>         self.print('Print Tensor x and Tensor y:', x, y)
>>>         return x
class mindspore.ops.operations.ROIAlign(*args, **kwargs)[source]

Computes Region of Interest (RoI) Align operator.

The operator computes the value of each sampling point by bilinear interpolation from the nearby grid points on the feature map. No quantization is performed on any coordinates involved in the RoI, its bins, or the sampling points. The details of (RoI) Align operator are described in Mask R-CNN.

Parameters
  • pooled_height (int) – The output features’ height.

  • pooled_width (int) – The output features’ width.

  • spatial_scale (float) – A scaling factor that maps the raw image coordinates to the input feature map coordinates. Suppose the height of a RoI is ori_h in the raw image and fea_h in the input feature map, the spatial_scale should be fea_h / ori_h.

  • sample_num (int) – Number of sampling points. Default: 2.

Inputs:
  • features (Tensor) - The input features, whose shape should be (N, C, H, W).

  • rois (Tensor) - The shape is (rois_n, 5). rois_n represents the number of RoI. The size of the second dimension should be 5 and the 5 columns are (image_index, top_left_x, top_left_y, bottom_right_x, bottom_right_y). image_index represents the index of image. top_left_x and top_left_y represent the x, y coordinates of the top left corner of corresponding RoI, respectively. bottom_right_x and bottom_right_y represent the x, y coordinates of the bottom right corner of corresponding RoI, respectively.

Outputs:

Tensor, the shape is (rois_n, C, pooled_height, pooled_width).

Examples

>>> input_tensor = Tensor(np.array([[[[1., 2.], [3., 4.]]]]), mindspore.float32)
>>> rois = Tensor(np.array([[0, 0.2, 0.3, 0.2, 0.3]]), mindspore.float32)
>>> roi_align = P.ROIAlign(1, 1, 0.5, 2)
>>> output_tensor = roi_align(input_tensor, rois)
>>> assert output_tensor == Tensor(np.array([[[[2.15]]]]), mindspore.float32)
class mindspore.ops.operations.RandomChoiceWithMask(*args, **kwargs)[source]

Generates a random sample as an index tensor with a mask tensor from a given tensor.

The input must be a tensor of rank >= 1. If its rank is >= 2, the first dimension specifies the number of samples. The index tensor and the mask tensor have fixed shapes. The index tensor denotes the indices of the nonzero samples, while the mask tensor denotes which elements in the index tensor are valid.

Parameters
  • count (int) – Number of items expected to get and the number should be greater than 0. Default: 256.

  • seed (int) – Random seed. Default: 0.

  • seed2 (int) – Random seed2. Default: 0.

Inputs:
  • input_x (Tensor[bool]) - The input tensor.

Outputs:

Two tensors, the first one is the index tensor and the other one is the mask tensor.

  • index (Tensor) - The output has shape between 2-D and 5-D.

  • mask (Tensor) - The output has shape 1-D.

Examples

>>> rnd_choice_mask = P.RandomChoiceWithMask()
>>> input_x = Tensor(np.ones(shape=[240000, 4]).astype(np.bool))
>>> output_y, output_mask = rnd_choice_mask(input_x)
class mindspore.ops.operations.Rank(*args, **kwargs)[source]

Returns the rank of a tensor.

Returns a 0-D int32 Tensor representing the rank of input; the rank of a tensor is the number of indices required to uniquely select each element of the tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor. 0-D int32 Tensor representing the rank of input, i.e., \(R\).

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> rank = P.Rank()
>>> rank(input_tensor)
class mindspore.ops.operations.ReLU(*args, **kwargs)[source]

Computes ReLU(Rectified Linear Unit) of input tensor element-wise.

It returns \(\max(x,\ 0)\) element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, with the same type and shape as the input_x.

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu = P.ReLU()
>>> result = relu(input_x)
[[0, 4.0, 0.0], [2.0, 0.0, 9.0]]
class mindspore.ops.operations.ReLU6(*args, **kwargs)[source]

Computes ReLU(Rectified Linear Unit) upper bounded by 6 of input tensor element-wise.

It returns \(\min(\max(0,x), 6)\) element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, with the same type and shape as the input_x.

Examples

>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
>>> relu6 = P.ReLU6()
>>> result = relu6(input_x)
class mindspore.ops.operations.ReLUV2(*args, **kwargs)[source]

Computes ReLU(Rectified Linear Unit) of input tensor element-wise.

It returns \(\max(x,\ 0)\) element-wise.

Inputs:
  • input_x (Tensor) - The input tensor should be a 4-D tensor.

Outputs:
  • output (Tensor) - Has the same type and shape as the input_x.

  • mask (Tensor) - A tensor whose data type must be uint8.

Examples

>>> input_x = Tensor(np.array([[[[1, -2], [-3, 4]], [[-5, 6], [7, -8]]]]), mindspore.float32)
>>> relu_v2 = P.ReLUV2()
>>> output = relu_v2(input_x)
([[[[1., 0.], [0., 4.]], [[0., 6.], [7., 0.]]]],
 [[[[1, 0], [2, 0]], [[2, 0], [1, 0]]]])
class mindspore.ops.operations.RealDiv(*args, **kwargs)[source]

Divide the first input tensor by the second input tensor in floating-point type element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes can be broadcast and their data types must be the same. When the inputs are one tensor and one scalar, the scalar can only be a constant, not a parameter, and its type is the same as the data type of the tensor.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as ‘input_x’.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> input_y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
>>> realdiv = P.RealDiv()
>>> realdiv(input_x, input_y)
[0.25, 0.4, 0.5]
class mindspore.ops.operations.Reciprocal(*args, **kwargs)[source]

Returns reciprocal of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
>>> reciprocal = P.Reciprocal()
>>> reciprocal(input_x)
[1.0, 0.5, 0.25]
class mindspore.ops.operations.ReduceAll(*args, **kwargs)[source]

Reduce a dimension of a tensor by the “logical and” of all elements in the dimension.

The dtype of the tensor to be reduced is bool.

Parameters

keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[bool]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

Outputs:

Tensor, the dtype is bool.

  • If axis is (), and keep_dims is false, the output is a 0-D tensor representing the “logical and” of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is false, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is false, the shape of output is \((x_1, x_4, ..., x_R)\).

Examples

>>> input_x = Tensor(np.array([[True, False], [True, True]]))
>>> op = P.ReduceAll(keep_dims=True)
>>> output = op(input_x, 1)
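The reduction semantics can be sketched with np.all (an assumed equivalence):

>>> import numpy as np
>>> x = np.array([[True, False], [True, True]])
>>> np.all(x, axis=1, keepdims=True)   # [[False], [True]] — matches keep_dims=True
>>> np.all(x)                          # False — axis=() reduces all dimensions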
class mindspore.ops.operations.ReduceMax(*args, **kwargs)[source]

Reduce a dimension of a tensor by the maximum value in this dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

Outputs:

Tensor, has the same dtype as the ‘input_x’.

  • If axis is (), and keep_dims is false, the output is a 0-D tensor representing the maximum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is false, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is false, the shape of output is \((x_1, x_4, ..., x_R)\).

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = P.ReduceMax(keep_dims=True)
>>> output = op(input_x, 1)
class mindspore.ops.operations.ReduceMean(*args, **kwargs)[source]

Reduce a dimension of a tensor by averaging all elements in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default : False.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

Outputs:

Tensor, has the same dtype as the ‘input_x’.

  • If axis is (), and keep_dims is false, the output is a 0-D tensor representing the mean of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is false, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is false, the shape of output is \((x_1, x_4, ..., x_R)\).

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = P.ReduceMean(keep_dims=True)
>>> output = op(input_x, 1)
class mindspore.ops.operations.ReduceMin(*args, **kwargs)[source]

Reduce a dimension of a tensor by the minimum value in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

Outputs:

Tensor, has the same dtype as the ‘input_x’.

  • If axis is (), and keep_dims is false, the output is a 0-D tensor representing the minimum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is false, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is false, the shape of output is \((x_1, x_4, ..., x_R)\).

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = P.ReduceMin(keep_dims=True)
>>> output = op(input_x, 1)
class mindspore.ops.operations.ReduceOp[source]

Operation options for reduce tensors.

There are four kinds of operation options: “SUM”, “MAX”, “MIN” and “PROD”.

  • SUM: Take the sum.

  • MAX: Take the maximum.

  • MIN: Take the minimum.

  • PROD: Take the product.

class mindspore.ops.operations.ReduceProd(*args, **kwargs)[source]

Reduce a dimension of a tensor by multiplying all elements in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default : False, don’t keep these reduced dimensions.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions.

Outputs:

Tensor, has the same dtype as the ‘input_x’.

  • If axis is (), and keep_dims is false, the output is a 0-D tensor representing the product of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is false, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is false, the shape of output is \((x_1, x_4, ..., x_R)\).

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = P.ReduceProd(keep_dims=True)
>>> output = op(input_x, 1)
class mindspore.ops.operations.ReduceScatter(*args, **kwargs)[source]

Reduces and scatters tensors from the specified communication group.

Note

The back propagation of the op is not supported yet. Stay tuned for more. Tensors must have the same shape and format in all processes participating in the collective.

Parameters
  • op (str) – Specifies an operation used for element-wise reductions, like sum, max, avg. Default: ReduceOp.SUM.

  • group (str) – The communication group to work on. Default: “hccl_world_group”.

Raises
  • TypeError – If either op or group is not a string.

  • ValueError – If the first dimension of input can not be divided by rank size.

Examples

>>> from mindspore.communication import init
>>> import mindspore.ops.operations as P
>>> init('nccl')
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.reducescatter = P.ReduceScatter(ReduceOp.SUM, group="nccl_world_group")
>>>
>>>     def construct(self, x):
>>>         return self.reducescatter(x)
>>>
>>> input_ = Tensor(np.ones([2, 8]).astype(np.float32))
>>> net = Net()
>>> output = net(input_)
class mindspore.ops.operations.ReduceSum(*args, **kwargs)[source]

Reduce a dimension of a tensor by summing all elements in the dimension.

The dtype of the tensor to be reduced is number.

Parameters

keep_dims (bool) – If True, keep these reduced dimensions and the length is 1. If False, don’t keep these dimensions. Default : False.

Inputs:
  • input_x (Tensor[Number]) - The input tensor.

  • axis (Union[int, tuple(int), list(int)]) - The dimensions to reduce. Default: (), reduce all dimensions. Only constant value is allowed.

Outputs:

Tensor, has the same dtype as the ‘input_x’.

  • If axis is (), and keep_dims is false, the output is a 0-D tensor representing the sum of all elements in the input tensor.

  • If axis is int, set as 2, and keep_dims is false, the shape of output is \((x_1, x_3, ..., x_R)\).

  • If axis is tuple(int), set as (2, 3), and keep_dims is false, the shape of output is \((x_1, x_4, ..., x_R)\).

Examples

>>> input_x = Tensor(np.random.randn(3, 4, 5, 6).astype(np.float32))
>>> op = P.ReduceSum(keep_dims=True)
>>> output = op(input_x, 1)
class mindspore.ops.operations.Reshape(*args, **kwargs)[source]

Reshapes input tensor with the same values based on a given shape tuple.

Raises

ValueError – Given a shape tuple, if it has more than one -1; or if the product of its elements is less than or equal to 0 or cannot be divided by the product of the input tensor shape; or if it does not match the input’s array size.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_shape (tuple[int]) - The input tuple is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). Only constant value is allowed.

Outputs:

Tensor, the shape of tensor is \((y_1, y_2, ..., y_S)\).

Examples

>>> input_tensor = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> reshape = P.Reshape()
>>> output = reshape(input_tensor, (3, 2))
class mindspore.ops.operations.ResizeBilinear(*args, **kwargs)[source]

Resizes the image to a certain size using bilinear interpolation.

The resizing only affects the lower two dimensions which represent the height and width. The input images can be represented by different data types, but the data types of output images are always float32.

Parameters
  • size (tuple[int]) – A tuple of 2 int elements (new_height, new_width), the new size for the images.

  • align_corners (bool) – If it’s true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If it’s false, rescale by new_height / height. Default: False.

Inputs:
  • input (Tensor) - Image to be resized. Tensor of shape (N_i, …, N_n, height, width).

Outputs:

Tensor, resized image. Tensor of shape (N_i, …, N_n, new_height, new_width) in float32.

Examples

>>> tensor = Tensor([[[[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]]], mindspore.int32)
>>> resize_bilinear = P.ResizeBilinear((5, 5))
>>> result = resize_bilinear(tensor)
>>> assert result.shape() == (1, 1, 5, 5)
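The effect of align_corners is easiest to see in the source-coordinate mapping; a sketch under standard bilinear-resize conventions (assumed to match this operator):

>>> def src_coord(i, in_size, out_size, align_corners):
>>>     # coordinate in the input that output index i samples from
>>>     if align_corners and out_size > 1:
>>>         return i * (in_size - 1) / (out_size - 1)
>>>     return i * in_size / out_size
>>> src_coord(4, 2, 5, True)    # 1.0 — the last output pixel hits the last input pixel
>>> src_coord(4, 2, 5, False)   # 1.6 — samples past the last input pixel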
class mindspore.ops.operations.ResizeNearestNeighbor(*args, **kwargs)[source]

Resizes the input tensor using the nearest neighbor algorithm.

Resizes the input tensor to the given size with the nearest neighbor algorithm, which selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.

Parameters
  • size (Union[tuple, list]) – The target size. The dimension of size must be 2.

  • align_corners (bool) – Whether the centers of the 4 corner pixels of the input and output tensors are aligned. Default: False.

Inputs:
  • input_x (Tensor) - The input tensor. The shape of the tensor is \((N, C, H, W)\).

Outputs:

Tensor, the shape of the output tensor is \((N, C, NEW\_H, NEW\_W)\).

Examples

>>> input_tensor = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
>>> resize = P.ResizeNearestNeighbor((2, 2))
>>> output = resize(input_tensor)
class mindspore.ops.operations.Round(*args, **kwargs)[source]

Rounds each element of the tensor to the nearest integer, with ties rounded to the nearest even integer (round half to even).

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape and type as the input_x.

Examples

>>> input_x = Tensor(np.array([0.8, 1.5, 2.3, 2.5, -4.5]), mindspore.float32)
>>> round = P.Round()
>>> round(input_x)
[1.0, 2.0, 2.0, 2.0, -4.0]
class mindspore.ops.operations.Rsqrt(*args, **kwargs)[source]

Computes reciprocal of square root of input tensor element-wise.

Inputs:
  • input_x (Tensor) - The input of Rsqrt. Each element should be a non-negative number.

Outputs:

Tensor, has the same type and shape as input_x.

Examples

>>> input_tensor = Tensor([[4, 4], [9, 9]], mindspore.float32)
>>> rsqrt = P.Rsqrt()
>>> rsqrt(input_tensor)
[[0.5, 0.5], [0.333333, 0.333333]]
class mindspore.ops.operations.SGD(*args, **kwargs)[source]

Computes stochastic gradient descent (optionally with momentum).

Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning.

Note

For details, please refer to nn.SGD source code.

Parameters
  • dampening (float) – The dampening for momentum. Default: 0.0.

  • weight_decay (float) – Weight decay (L2 penalty). Default: 0.0.

  • nesterov (bool) – Enable Nesterov momentum. Default: False.

Inputs:
  • parameters (Tensor) - Parameters to be updated. Their data type can be list or tuple.

  • gradient (Tensor) - Gradients.

  • learning_rate (Tensor) - Learning rate. Must be float value. e.g. Tensor(0.1, mindspore.float32).

  • accum (Tensor) - Accum(velocity) to be updated.

  • momentum (Tensor) - Momentum. e.g. Tensor(0.1, mindspore.float32).

  • stat (Tensor) - States to be updated with the same shape as gradient.

Outputs:

Tensor, parameters to be updated.

Examples

>>> sgd = P.SGD()
>>> parameters = Tensor(np.array([2, -0.5, 1.7, 4]), mindspore.float32)
>>> gradient = Tensor(np.array([1, -1, 0.5, 2]), mindspore.float32)
>>> learning_rate = Tensor(0.01, mindspore.float32)
>>> accum = Tensor(np.array([0.1, 0.3, -0.2, -0.1]), mindspore.float32)
>>> momentum = Tensor(0.1, mindspore.float32)
>>> stat = Tensor(np.array([1.5, -0.3, 0.2, -0.7]), mindspore.float32)
>>> result = sgd(parameters, gradient, learning_rate, accum, momentum, stat)
class mindspore.ops.operations.SameTypeShape(*args, **kwargs)[source]

Checks whether data type and shape of two tensors are the same.

Raises

ValueError – If they are not the same.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_y (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_S)\).

Outputs:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_R)\), if data type and shape of input_x and input_y are the same.

Examples

>>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> input_y = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> out = P.SameTypeShape()(input_x, input_y)
class mindspore.ops.operations.ScalarCast(*args, **kwargs)[source]

Casts the input scalar to another type.

Inputs:
  • input_x (scalar) - The input scalar. Only constant value is allowed.

  • input_y (mindspore.dtype) - The type to cast to. Only constant value is allowed.

Outputs:

Scalar. The type is the same as the python type corresponding to input_y.

Examples

>>> scalar_cast = P.ScalarCast()
>>> output = scalar_cast(255.0, mindspore.int32)
class mindspore.ops.operations.ScalarSummary(*args, **kwargs)[source]

Output scalar to protocol buffer through scalar summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of scalar.

Examples

>>> class SummaryDemo(nn.Cell):
>>>     def __init__(self,):
>>>         super(SummaryDemo, self).__init__()
>>>         self.summary = P.ScalarSummary()
>>>         self.add = P.TensorAdd()
>>>
>>>     def construct(self, x, y):
>>>         name = "x"
>>>         self.summary(name, x)
>>>         x = self.add(x, y)
>>>         return x
class mindspore.ops.operations.ScalarToArray(*args, **kwargs)[source]

Converts scalar to Tensor.

Inputs:
  • input_x (Union[int, float]) - The input is a scalar. Only constant value is allowed.

Outputs:

Tensor. 0-D Tensor and the content is the input.

Examples

>>> op = P.ScalarToArray()
>>> data = 1.0
>>> output = op(data)
class mindspore.ops.operations.ScalarToTensor(*args, **kwargs)[source]

Converts scalar to Tensor, and convert data type to specified type.

Inputs:
  • input_x (Union[int, float]) - The input is a scalar. Only constant value is allowed.

  • dtype (mindspore.dtype) - The target data type. Default: mindspore.float32. Only constant value is allowed.

Outputs:

Tensor. 0-D Tensor and the content is the input.

Examples

>>> op = P.ScalarToTensor()
>>> data = 1
>>> output = op(data, mindspore.float32)
class mindspore.ops.operations.ScatterMax(*args, **kwargs)[source]

Updates the value of the input tensor through the max operation.

Uses the given values and the input indices to update the tensor value through the max operation.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target parameter.

  • indices (Tensor) - The index to do max operation whose data type should be int.

  • updates (Tensor) - The tensor that performs the maximum operation with ‘input_x’; its data type is the same as ‘input_x’ and its shape is indices_shape + x_shape[1:].

Outputs:

Tensor, has the same shape and data type as input_x.

Examples

>>> input_x = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32)
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
>>> scatter_max = P.ScatterMax()
>>> output = scatter_max(input_x, indices, update)
[[88.0, 88.0, 88.0], [88.0, 88.0, 88.0]]
class mindspore.ops.operations.ScatterNd(*args, **kwargs)[source]

Scatters a tensor into a new tensor depending on the specified indices.

Creates an empty tensor, and set values by scattering the update tensor depending on indices.

Inputs:
  • indices (Tensor) - The index of scattering in the new tensor.

  • update (Tensor) - The source Tensor to be scattered.

  • shape (tuple[int]) - Define the shape of the output tensor. Has the same type as indices.

Outputs:

Tensor, the new tensor, has the same type as update and the same shape as shape.

Examples

>>> op = P.ScatterNd()
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.array([3.2, 1.1]), mindspore.float32)
>>> shape = (3, 3)
>>> output = op(indices, update, shape)
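The scatter semantics (duplicate indices aside) can be sketched in NumPy:

>>> import numpy as np
>>> indices = np.array([[0, 1], [1, 1]])
>>> update = np.array([3.2, 1.1], np.float32)
>>> out = np.zeros((3, 3), np.float32)   # start from an all-zero tensor of the given shape
>>> for idx, val in zip(indices, update):
>>>     out[tuple(idx)] = val            # place each update at its index
>>> out
[[0.  3.2 0. ]
 [0.  1.1 0. ]
 [0.  0.  0. ]]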
class mindspore.ops.operations.ScatterNdUpdate(*args, **kwargs)[source]

Updates tensor values by using input indices and values.

Uses the given values and the input indices to update the tensor values.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter.

  • indices (Tensor) - The index of input tensor.

  • update (Tensor) - The tensor used to update the input tensor, has the same type as the input.

Outputs:

Tensor, has the same shape and type as input_x.

Examples

>>> input_x = mindspore.Parameter(Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32))
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = P.ScatterNdUpdate()
>>> output = op(input_x, indices, update)
class mindspore.ops.operations.ScatterUpdate(*args, **kwargs)[source]

Updates tensor values by using input indices and values.

Uses the given values and the input indices to update the tensor values.

Parameters

use_locking (bool) – Whether protect the assignment by a lock. Default: True.

Inputs:
  • input_x (Parameter) - The target tensor, with data type of Parameter.

  • indices (Tensor) - The index of input tensor.

  • update (Tensor) - The tensor to update the input tensor, has the same type as input, and update.shape = indices.shape + input_x.shape[1:].

Outputs:

Tensor, has the same shape and type as input_x.

Examples

>>> input_x = mindspore.Parameter(Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32))
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> update = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = P.ScatterUpdate()
>>> output = op(input_x, indices, update)
class mindspore.ops.operations.Select(*args, **kwargs)[source]

Return the selected elements, either from input \(x\) or input \(y\), depending on the condition.

If both \(x\) and \(y\) are None, the operation returns the coordinates of the true elements in the condition as a two-dimensional tensor, where the first dimension (rows) represents the number of true elements and the second dimension (columns) represents their coordinates. Keep in mind that the shape of the output tensor can vary depending on how many true values are in the input. Indices are output in row-first order.

If neither is None, \(x\) and \(y\) must have the same shape. If \(x\) and \(y\) are scalars, the condition tensor must be a scalar. If \(x\) and \(y\) are higher-dimensional vectors, the condition must be a vector whose size matches the first dimension of \(x\), or must have the same shape as \(y\).

The condition tensor acts as an optional mask, which determines whether the corresponding element / row in the output is selected from \(x\) (if true) or \(y\) (if false) based on the value of each element.

If the condition is a vector and \(x\) and \(y\) are higher-dimensional matrices, each row (outer dimension) is copied from \(x\) or \(y\). If the condition has the same shape as \(x\) and \(y\), elements are copied from \(x\) or \(y\) element-wise.

Inputs:
  • input_x (Tensor[bool]) - The shape is \((x_1, x_2, ..., x_N)\). The condition tensor, decides whose element is chosen.

  • input_y (Tensor) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The first input tensor.

  • input_z (Tensor) - The shape is \((x_1, x_2, ..., x_N, ..., x_R)\). The second input tensor.

Outputs:

Tensor, has the same shape as input_y. The shape is \((x_1, x_2, ..., x_N, ..., x_R)\).

Examples

>>> select = P.Select()
>>> input_x = Tensor([True, False])
>>> input_y = Tensor([2,3], mindspore.float32)
>>> input_z = Tensor([1,2], mindspore.float32)
>>> select(input_x, input_y, input_z)
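The element-wise selection follows the same rule as np.where (an assumed equivalence, shown with the example values above):

>>> import numpy as np
>>> np.where([True, False], [2, 3], [1, 2])   # take x where true, y where false
[2, 2]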
class mindspore.ops.operations.Shape(*args, **kwargs)[source]

Returns the shape of input tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[int], the output tuple is constructed by multiple integers, \((x_1, x_2, ..., x_R)\).

Examples

>>> input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> shape = P.Shape()
>>> output = shape(input_tensor)
class mindspore.ops.operations.Sigmoid(*args, **kwargs)[source]

Sigmoid activation function.

Computes Sigmoid of input element-wise. The Sigmoid function is defined as:

\[\text{sigmoid}(x_i) = \frac{1}{1 + exp(-x_i)},\]

where \(x_i\) is the element of the input.

Inputs:
  • input_x (Tensor) - The input of Sigmoid.

Outputs:

Tensor, with the same type and shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> sigmoid = P.Sigmoid()
>>> sigmoid(input_x)
[0.73105866, 0.880797, 0.9525742, 0.98201376, 0.9933071]
class mindspore.ops.operations.SigmoidCrossEntropyWithLogits(*args, **kwargs)[source]

Uses the given logits to compute sigmoid cross entropy.

Note

Sets input logits as X, input label as Y, output as loss. Then,

\[p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}}\]
\[loss_{ij} = -[Y_{ij} * ln(p_{ij}) + (1 - Y_{ij})ln(1 - p_{ij})]\]
Inputs:
  • logits (Tensor) - Input logits.

  • label (Tensor) - Ground truth label.

Outputs:

Tensor, with the same shape and type as input logits.

Examples

>>> logits = Tensor(np.random.randn(2, 3).astype(np.float16))
>>> labels = Tensor(np.random.randn(2, 3).astype(np.float16))
>>> sigmoid = P.SigmoidCrossEntropyWithLogits()
>>> sigmoid(logits, labels)
class mindspore.ops.operations.Sign(*args, **kwargs)[source]

Perform \(sign\) on tensor element-wise.

Note

\[sign(x) = \begin{cases} -1, &if\ x < 0 \cr 0, &if\ x == 0 \cr 1, &if\ x > 0\end{cases}\]
Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, has the same shape and type as the input_x.

Examples

>>> input_x = Tensor(np.array([[2.0, 0.0, -1.0]]), mindspore.float32)
>>> sign = P.Sign()
>>> output = sign(input_x)
[[1.0, 0.0, -1.0]]
class mindspore.ops.operations.Sin(*args, **kwargs)[source]

Computes sine of input element-wise.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, has the same shape as input_x.

Examples

>>> sin = P.Sin()
>>> input_x = Tensor(np.array([0.62, 0.28, 0.43, 0.62]), mindspore.float32)
>>> output = sin(input_x)
class mindspore.ops.operations.Size(*args, **kwargs)[source]

Returns the element count of a tensor.

Returns an int scalar representing the size of the input, that is, the total number of elements in the tensor.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

int, a scalar representing the size of input_x, that is, the total number of elements in the tensor, \(size=x_1*x_2*...*x_R\).

Examples

>>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
>>> size = P.Size()
>>> output = size(input_tensor)
class mindspore.ops.operations.Slice(*args, **kwargs)[source]

Slices a tensor into the specified shape.

Parameters
  • x (Tensor) – The target tensor.

  • begin (tuple) – The beginning of the slice. Only constant value is allowed.

  • size (tuple) – The size of the slice. Only constant value is allowed.

Returns

Tensor.

Examples

>>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
>>>                         [[3, 3, 3], [4, 4, 4]],
>>>                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
>>> output = P.Slice()(data, (1, 0, 0), (1, 1, 3))
class mindspore.ops.operations.SmoothL1Loss(*args, **kwargs)[source]

Computes smooth L1 loss, a robust L1 loss.

SmoothL1Loss is a Loss similar to MSELoss but less sensitive to outliers as described in the Fast R-CNN by Ross Girshick.

Note

Sets input prediction as X, input target as Y, output as loss. Then,

\[\text{SmoothL1Loss} = \begin{cases}0.5x^{2}, &if \left |x \right |\leq \text{sigma} \cr \left |x \right|-0.5, &\text{otherwise}\end{cases}\]
Parameters

sigma (float) – A parameter used to control the point where the function will change from quadratic to linear. Default: 1.0.

Inputs:
  • prediction (Tensor) - Predict data.

  • target (Tensor) - Ground truth data, with the same type and shape as prediction.

Outputs:

Tensor, with the same type and shape as prediction.

Examples

>>> loss = P.SmoothL1Loss()
>>> input_data = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> target_data = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> loss(input_data, target_data)
[0, 0, 0.5]
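The formula in the note can be checked directly in NumPy (a sketch with the example values, assuming sigma = 1.0):

>>> import numpy as np
>>> sigma = 1.0
>>> x = np.array([1, 2, 3], np.float32) - np.array([1, 2, 2], np.float32)
>>> # quadratic inside the sigma band, linear outside
>>> np.where(np.abs(x) <= sigma, 0.5 * x ** 2, np.abs(x) - 0.5)
[0.  0.  0.5]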
class mindspore.ops.operations.Softmax(*args, **kwargs)[source]

Softmax operation.

Applies the Softmax operation to the input tensor on the specified axis. Suppose a slice along the given axis is \(x\); then for each element \(x_i\), the Softmax function is shown as follows:

\[\text{output}(x_i) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)},\]

where \(N\) is the length of the tensor.

Parameters

axis (Union[int, tuple]) – The axis to do the Softmax operation. Default: -1.

Inputs:
  • logits (Tensor) - The input of Softmax.

Outputs:

Tensor, with the same type and shape as the logits.

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softmax = P.Softmax()
>>> softmax(input_x)
[0.01165623, 0.03168492, 0.08612854, 0.23412167, 0.6364086]
class mindspore.ops.operations.SoftmaxCrossEntropyWithLogits(*args, **kwargs)[source]

Gets the softmax cross-entropy value between logits and labels, which should be one-hot encoded.

Note

Sets input logits as X, input label as Y, output as loss. Then,

\[p_{ij} = softmax(X_{ij}) = \frac{exp(x_i)}{\sum_{j = 0}^{N-1}\exp(x_j)}\]
\[loss_{ij} = -\sum_j{Y_{ij} * ln(p_{ij})}\]
Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\).

  • labels (Tensor) - Ground truth labels, with shape \((N, C)\).

Outputs:

Tuple of 2 Tensor, the loss shape is (N,), and the dlogits with the same shape as logits.

Examples

>>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], mindspore.float32)
>>> softmax_cross = P.SoftmaxCrossEntropyWithLogits()
>>> loss, backprop = softmax_cross(logits, labels)
([0.5899297, 0.52374405], [[0.02760027, 0.20393994, 0.01015357, 0.20393994, -0.44563377],
[0.08015892, 0.02948882, 0.08015892, -0.4077012, 0.21789455]])
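The two outputs can be reproduced with a short NumPy sketch of the formulas in the note (the dlogits expression p - labels is an assumption consistent with the values above):

>>> import numpy as np
>>> logits = np.array([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], np.float32)
>>> labels = np.array([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], np.float32)
>>> p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
>>> loss = -(labels * np.log(p)).sum(axis=1)   # per-sample loss, shape (N,)
>>> dlogits = p - labels                       # gradient w.r.t. logits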
class mindspore.ops.operations.Softplus(*args, **kwargs)[source]

Softplus activation function.

Softplus is a smooth approximation to the ReLU function. The function is shown as follows:

\[\text{output} = \log(1 + \exp(\text{input_x})),\]
Inputs:
  • input_x (Tensor) - The input tensor whose data type should be float.

Outputs:

Tensor, with the same type and shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> softplus = P.Softplus()
>>> softplus(input_x)
[1.3132615, 2.126928, 3.0485873, 4.01815, 5.0067153]
class mindspore.ops.operations.SpaceToBatch(*args, **kwargs)[source]

Divide spatial dimensions into blocks and combine the block size with the original batch.

This operation will divide spatial dimensions (H, W) into blocks with block_size, the output tensor’s H and W dimension is the corresponding number of blocks after division. The output tensor’s batch dimension is the product of the original batch and the square of block_size. Prior to division into blocks, the spatial dimensions of the input are zero padded according to paddings if necessary.

Parameters
  • block_size (int) – The block size of dividing block with value >= 2.

  • paddings (list) – The padding value for H and W dimension, containing 2 sub list, each containing 2 int value. All values must be >= 0. paddings[i] specifies the paddings for spatial dimension i, which corresponds to input dimension i+2. It is required that input_shape[i+2]+paddings[i][0]+paddings[i][1] is divisible by block_size.

Inputs:
  • input_x (Tensor) - The input tensor.

Outputs:

Tensor, the output tensor with the same type as the input. Assume the input shape is \((n, c, h, w)\) with \(block\_size\) and \(paddings\). The output tensor shape will be \((n', c', h', w')\), where

\(n' = n*(block\_size*block\_size)\)

\(c' = c\)

\(h' = (h+paddings[0][0]+paddings[0][1])//block\_size\)

\(w' = (w+paddings[1][0]+paddings[1][1])//block\_size\)

Examples

>>> block_size = 2
>>> paddings = [[0, 0], [0, 0]]
>>> space_to_batch = P.SpaceToBatch(block_size, paddings)
>>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
>>> space_to_batch(input_x)
[[[[1.]]], [[[2.]]], [[[3.]]], [[[4.]]]]
class mindspore.ops.operations.SpaceToDepth(*args, **kwargs)[source]

Rearrange blocks of spatial data into depth.

The output tensor’s height dimension is \(height / block\_size\).

The output tensor’s width dimension is \(width / block\_size\).

The depth of output tensor is \(block\_size * block\_size * input\_depth\).

The input tensor’s height and width must be divisible by block_size. The data format is “NCHW”.

Parameters

block_size (int) – The block size used to divide spatial data. It must be >= 2.

Inputs:
  • x (Tensor) - The target tensor.

Outputs:

Tensor, the same type as x.

Examples

>>> x = Tensor(np.random.rand(1,3,2,2), mindspore.float32)
>>> block_size = 2
>>> op = P.SpaceToDepth(block_size)
>>> output = op(x)
>>> output.asnumpy().shape == (1,12,1,1)
class mindspore.ops.operations.SparseApplyAdagrad(*args, **kwargs)[source]

Updates relevant entries according to the adagrad scheme.

\[accum += grad * grad\]
\[var -= lr * grad * (1 / sqrt(accum))\]
Parameters
  • lr (float) – Learning rate.

  • use_locking (bool) – If True, updating of the var and accum tensors will be protected. Default: False.

Inputs:
  • var (Tensor) - Variable to be updated. The type must be float32.

  • accum (Tensor) - Accum to be updated. The shape must be the same as var’s shape, the type must be float32.

  • grad (Tensor) - Gradient. The shape must be the same as var’s shape except first dimension, the type must be float32.

  • indices (Tensor) - A vector of indices into the first dimension of var and accum. The shape of indices must be the same as grad in first dimension, the type must be int32.

Outputs:

Tensor, has the same shape and type as var.

Examples

>>> var = Tensor(np.random.random((3, 3)), mindspore.float32)
>>> accum = Tensor(np.random.random((3, 3)), mindspore.float32)
>>> grad = Tensor(np.random.random((3, 3)), mindspore.float32)
>>> indices = Tensor(np.ones((3,), np.int32))
>>> sparse_apply_ada_grad = P.SparseApplyAdagrad(0.5)
>>> sparse_apply_ada_grad(var, accum, grad, indices)
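
The sparse update can be restated row by row in NumPy. This is a minimal sketch of the documented formulas applied only to the rows named by indices (the helper name is hypothetical; the real kernel's handling of duplicate indices may differ):

>>> import numpy as np
>>> def sparse_apply_adagrad_ref(var, accum, grad, indices, lr):
>>>     # Apply the AdaGrad update to the selected rows, in place.
>>>     for i, idx in enumerate(indices):
>>>         accum[idx] += grad[i] * grad[i]
>>>         var[idx] -= lr * grad[i] / np.sqrt(accum[idx])
>>>     return var, accum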
class mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits(*args, **kwargs)[source]

Computes the softmax cross-entropy value between logits and sparse encoding labels.

Note

Sets input logits as X, input label as Y, output as loss. Then,

\[p_{ij} = \mathrm{softmax}(X)_{ij} = \frac{\exp(X_{ij})}{\sum_{k = 0}^{C-1}\exp(X_{ik})}\]
\[loss_{ij} = \begin{cases} -ln(p_{ij}), &j = y_i \cr -ln(1 - p_{ij}), & j \neq y_i \end{cases}\]
\[loss = \sum_{ij} loss_{ij}\]
Parameters

is_grad (bool) – If true, this operation returns the computed gradient instead of the loss. Default: False.

Inputs:
  • logits (Tensor) - Input logits, with shape \((N, C)\).

  • labels (Tensor) - Ground truth labels, with shape \((N)\).

Outputs:

Tensor, if is_grad is False, the output tensor is the value of loss which is a scalar tensor; if is_grad is True, the output tensor is the gradient of input with the same shape as logits.

Examples

Please refer to the usage in nn.SoftmaxCrossEntropyWithLogits source code.
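
For reference, the formulas in the note transcribe directly to NumPy as below. This is a sketch of the documented math only, not necessarily the kernel's exact computation, and the helper name is hypothetical:

>>> import numpy as np
>>> def sparse_softmax_ce_ref(logits, labels):
>>>     # Softmax over classes, then sum the documented per-entry losses.
>>>     e = np.exp(logits - logits.max(axis=1, keepdims=True))
>>>     p = e / e.sum(axis=1, keepdims=True)
>>>     n, c = logits.shape
>>>     loss = 0.0
>>>     for i in range(n):
>>>         for j in range(c):
>>>             loss += -np.log(p[i, j]) if j == labels[i] else -np.log(1 - p[i, j])
>>>     return loss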

class mindspore.ops.operations.Split(*args, **kwargs)[source]

Splits the input tensor into output_num tensors along the given axis.

Parameters
  • axis (int) – Index of the split position. Default: 0.

  • output_num (int) – The number of output tensors. Default: 1.

Raises

ValueError – If axis is out of the range [-len(input_x.shape()), len(input_x.shape())), or if output_num is less than or equal to 0, or if the dimension to split along cannot be evenly divided by output_num.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

tuple[Tensor], the shape of each output tensor is the same, which is \((y_1, y_2, ..., y_S)\).

Examples

>>> split = P.Split(1, 2)
>>> x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = split(x)
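
For reference, splitting the 2x4 input along axis 1 into 2 parts yields two tensors of shape (2, 2):

>>> # output: ([[1, 1], [2, 2]], [[1, 1], [2, 2]])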
class mindspore.ops.operations.Sqrt(*args, **kwargs)[source]

Returns square root of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is number.

Outputs:

Tensor, has the same shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1.0, 4.0, 9.0]), mindspore.float32)
>>> sqrt = P.Sqrt()
>>> sqrt(input_x)
[1.0, 2.0, 3.0]
class mindspore.ops.operations.Square(*args, **kwargs)[source]

Returns square of a tensor element-wise.

Inputs:
  • input_x (Tensor) - The input tensor whose dtype is number.

Outputs:

Tensor, has the same shape and dtype as the input_x.

Examples

>>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
>>> square = P.Square()
>>> square(input_x)
[1.0, 4.0, 9.0]
class mindspore.ops.operations.SquareSumAll(*args, **kwargs)[source]

Returns the sum of squares of all elements in each input tensor.

Inputs:
  • input_x1 (Tensor) - The input tensor.

  • input_x2 (Tensor) - The input tensor, with the same type and shape as input_x1.

Note

SquareSumAll only supports float16 and float32 data type.

Outputs:
  • output_y1 (Tensor) - The same type as the input_x1.

  • output_y2 (Tensor) - The same type as the input_x1.

Examples

>>> input_x1 = Tensor(np.random.rand(3, 2, 5, 7), mindspore.float32)
>>> input_x2 = Tensor(np.random.rand(3, 2, 5, 7), mindspore.float32)
>>> square_sum_all = P.SquareSumAll()
>>> square_sum_all(input_x1, input_x2)
class mindspore.ops.operations.Squeeze(*args, **kwargs)[source]

Returns a tensor with the same data type but with dimensions of size 1 removed, based on axis.

Note

The dimension index starts at 0 and must be in the range [-input.dim(), input.dim()).

Raises

ValueError – If the corresponding dimension of the specified axis is not equal to 1.

Parameters

axis (Union[int, tuple(int)]) – Specifies the dimension index or indexes of the shape to be removed; each corresponding dimension must be equal to 1. Default: (), an empty tuple, which removes all dimensions equal to 1.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

Outputs:

Tensor, the shape of tensor is \((x_1, x_2, ..., x_S)\).

Examples

>>> input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
>>> squeeze = P.Squeeze(2)
>>> output = squeeze(input_tensor)
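
For reference, removing axis 2 (which has size 1) leaves a tensor of shape (3, 2):

>>> # output shape: (3, 2)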
class mindspore.ops.operations.StridedSlice(*args, **kwargs)[source]

Extracts a strided slice of a tensor.

Given an input tensor, this operation extracts a fragment of size (end-begin)/strides from it. Starting from the position specified by begin, the slice advances by strides along each dimension until the index reaches end (exclusive).

Note

A stride may be a negative value, which causes reverse slicing. The shapes of begin, end and strides must be the same.

Parameters
  • begin_mask (int) – An int bitmask. If the i-th bit is set, begin[i] is ignored and the fullest possible range is used for dimension i. Default: 0.

  • end_mask (int) – An int bitmask, analogous to begin_mask, applied to the end positions. Default: 0.

  • ellipsis_mask (int) – An int bitmask. If the i-th bit is set, the i-th position acts as an ellipsis, expanding to as many full dimensions as needed. Default: 0.

  • new_axis_mask (int) – An int bitmask. If the i-th bit is set, a new dimension of length 1 is inserted at the i-th position. Default: 0.

  • shrink_axis_mask (int) – An int bitmask. If the i-th bit is set, the i-th dimension is shrunk away, keeping the single element at index begin[i]. Default: 0.

Inputs:
  • input_x (Tensor) - The input Tensor.

  • begin (tuple[int]) - A tuple which represents the location where to start. Only constant value is allowed.

  • end (tuple[int]) - A tuple which represents the maximum (exclusive) location where to stop. Only constant value is allowed.

  • strides (tuple[int]) - A tuple which represents the strides continuously added to the index until it reaches the maximum location. Only constant value is allowed.

Outputs:

Tensor. The result is explained with the following walkthrough of the example in Examples below.

  • In the 0th dim, begin is 1, end is 2, and strides is 1, because \(1+1=2\geq2\), the interval is \([1,2)\). Thus, return the element with \(index = 1\) in 0th dim, i.e., [[3, 3, 3], [4, 4, 4]].

  • In the 1st dim, similarly, the interval is \([0,1)\). Based on the return value of the 0th dim, return the element with \(index = 0\), i.e., [3, 3, 3].

  • In the 2nd dim, similarly, the interval is \([0,3)\). Based on the return value of the 1st dim, return the element with \(index = 0,1,2\), i.e., [3, 3, 3].

  • Finally, the output is [3, 3, 3].

Examples
>>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
>>>                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
>>> slice = P.StridedSlice()
>>> output = slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
>>> output.shape()
(1, 1, 3)
>>> output
[[[3, 3, 3]]]
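
For intuition, the call above corresponds to NumPy basic slicing with the same begin, end and strides:

>>> # Equivalent NumPy view (illustrative):
>>> # input_x.asnumpy()[1:2, 0:1, 0:3]  ->  [[[3., 3., 3.]]]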
class mindspore.ops.operations.Sub(*args, **kwargs)[source]

Subtracts the second input tensor from the first input tensor element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the tensor’s data type.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as that of ‘input_x’.

Examples

>>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
>>> input_y = Tensor(np.array([4, 5, 6]), mindspore.int32)
>>> sub = P.Sub()
>>> sub(input_x, input_y)
[-3, -3, -3]
class mindspore.ops.operations.Tanh(*args, **kwargs)[source]

Tanh activation function.

Computes hyperbolic tangent of input element-wise. The Tanh function is defined as:

\[tanh(x_i) = \frac{\exp(x_i) - \exp(-x_i)}{\exp(x_i) + \exp(-x_i)} = \frac{\exp(2x_i) - 1}{\exp(2x_i) + 1},\]

where \(x_i\) is an element of the input Tensor.

Inputs:
  • input_x (Tensor) - The input of Tanh.

Outputs:

Tensor, with the same type and shape as the input_x.

Examples

>>> input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
>>> tanh = P.Tanh()
>>> tanh(input_x)
[0.7615941, 0.9640276, 0.9950548, 0.9993293, 0.99990916]
class mindspore.ops.operations.TensorAdd(*args, **kwargs)[source]

Adds two input tensors element-wise.

The inputs must be two tensors, or one tensor and one scalar. When the inputs are two tensors, their shapes must be broadcastable and their data types must be the same. When the inputs are one tensor and one scalar, the scalar must be a constant (it cannot be a parameter), and its type is the same as the tensor’s data type.

Inputs:
  • input_x (Union[Tensor, Number]) - The first input is a tensor whose data type is number or a number.

  • input_y (Union[Tensor, Number]) - The second input is a tensor whose data type is same as ‘input_x’ or a number.

Outputs:

Tensor, the shape is the same as the shape after broadcasting, and the data type is the same as that of ‘input_x’.

Examples

>>> add = P.TensorAdd()
>>> input_x = Tensor(np.array([1,2,3]).astype(np.float32))
>>> input_y = Tensor(np.array([4,5,6]).astype(np.float32))
>>> add(input_x, input_y)
[5,7,9]
class mindspore.ops.operations.TensorSummary(*args, **kwargs)[source]

Outputs a tensor to a protocol buffer through the tensor summary operator.

Inputs:
  • name (str) - The name of the input variable.

  • value (Tensor) - The value of tensor.

Examples

>>> class SummaryDemo(nn.Cell):
>>>     def __init__(self,):
>>>         super(SummaryDemo, self).__init__()
>>>         self.summary = P.TensorSummary()
>>>         self.add = P.TensorAdd()
>>>
>>>     def construct(self, x, y):
>>>         x = self.add(x, y)
>>>         name = "x"
>>>         self.summary(name, x)
>>>         return x
class mindspore.ops.operations.Tile(*args, **kwargs)[source]

Replicates a tensor with the given multiples.

Creates a new tensor by replicating input multiples times. The dimension count of the output tensor is the larger of the input’s dimension count and the length of multiples.

Inputs:
  • input_x (Tensor) - 1-D or higher Tensor. Set the shape of input tensor as \((x_1, x_2, ..., x_S)\).

  • multiples (tuple[int]) - The input tuple is constructed by multiple integers, i.e., \((y_1, y_2, ..., y_S)\). The length of multiples can’t be smaller than the length of shape in input_x.

Outputs:

Tensor, has the same type as the input_x.

  • If the length of multiples is the same as the length of shape in input_x, the shapes at corresponding positions are multiplied, and the shape of Outputs is \((x_1*y_1, x_2*y_2, ..., x_S*y_S)\).

  • If the length of multiples is larger than the length of shape in input_x, 1s are prepended to the shape of input_x until the lengths are consistent, e.g. the shape of input_x is treated as \((1, ..., 1, x_1, x_2, ..., x_S)\); the shapes at corresponding positions are then multiplied, and the shape of Outputs is \((1*y_1, ..., x_S*y_R)\).

Examples

>>> tile = P.Tile()
>>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
>>> multiples = (2, 3)
>>> result = tile(input_x, multiples)
[[1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]
 [1.  2.  1.  2.  1.  2.]
 [3.  4.  3.  4.  3.  4.]]
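
For reference, the result agrees with NumPy's tiling semantics, including the leading-1 promotion described above:

>>> # np.tile(input_x.asnumpy(), multiples) produces the same array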
class mindspore.ops.operations.TopK(*args, **kwargs)[source]

Finds values and indices of the k largest entries along the last dimension.

Parameters

sorted (bool) – If true, the resulting elements will be sorted by the values in descending order. Default: False.

Inputs:
  • input_x (Tensor) - Input to be computed.

  • k (int) - Number of top elements to be computed along the last dimension, constant input is needed.

Outputs:

Tuple of 2 Tensor, the values and the indices.

  • values (Tensor) - The k largest elements along each last dimensional slice.

  • indices (Tensor) - The indices of values within the last dimension of input.

Examples

>>> topk = P.TopK(sorted=True)
>>> input_x = Tensor([1, 2, 3, 4, 5], mindspore.float16)
>>> k = 3
>>> values, indices = topk(input_x, k)
>>> assert values == Tensor(np.array([5, 4, 3]), mstype.float16)
>>> assert indices == Tensor(np.array([4, 3, 2]), mstype.int32)
class mindspore.ops.operations.Transpose(*args, **kwargs)[source]

Permutes the dimensions of input tensor according to input perm.

Inputs:
  • input_x (Tensor) - The shape of tensor is \((x_1, x_2, ..., x_R)\).

  • input_perm (tuple[int]) - The permutation to be converted. The input tuple is constructed by multiple indexes. The length of input_perm and the shape of input_x should be the same. Only constant value is allowed.

Outputs:

Tensor, the type of output tensor is same as input_x and the shape of output tensor is decided by the shape of input_x and the value of input_perm.

Examples

>>> input_tensor = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
>>> perm = (0, 2, 1)
>>> transpose = P.Transpose()
>>> output = transpose(input_tensor, perm)
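
For reference, the result matches NumPy's transpose with the same permutation:

>>> # np.transpose(input_tensor.asnumpy(), perm) produces the same array, of shape (2, 3, 2)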
class mindspore.ops.operations.TruncatedNormal(*args, **kwargs)[source]

Returns a tensor of the specified shape filled with truncated normal values.

The generated values follow a normal distribution, with values that deviate more than two standard deviations from the mean discarded and re-drawn.

Parameters
  • seed (int) – An int used to create a random seed. Default: 0.

  • dtype (mindspore.dtype) – Data type. Default: mindspore.float32.

Inputs:
  • shape (tuple[int]) - Shape of output tensor, is a tuple of positive int.

Outputs:

Tensor, whose type is the same as the attribute dtype.

Examples

>>> shape = (1, 2, 3)
>>> truncated_normal = P.TruncatedNormal()
>>> output = truncated_normal(shape)
class mindspore.ops.operations.TupleToArray(*args, **kwargs)[source]

Converts a tuple to a tensor.

If the type of the first number in the tuple is int, the output tensor type is int; otherwise, the output tensor type is float.

Inputs:
  • input_x (tuple) - A tuple of numbers. These numbers have the same type. Only constant value is allowed.

Outputs:

Tensor, if the input tuple contains N numbers, then the shape of the output tensor is (N,).

Examples

>>> output = P.TupleToArray()((1, 2, 3))
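
For reference, the result is a 1-D tensor of shape (3,) with an integer dtype, since the first element of the tuple is an int:

>>> # output: [1, 2, 3]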
class mindspore.ops.operations.Unpack(*args, **kwargs)[source]

Unpacks a tensor along the specified axis.

Unpacks a tensor of rank R along axis dimension, output tensors will have rank (R-1).

Given a tensor of shape \((x_1, x_2, ..., x_R)\). If \(0 \le axis\), the shape of tensor in output is \((x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)\).

This is the opposite of pack.

Parameters

axis (int) – Dimension along which to unpack. Default: 0. Negative values wrap around. The range is [-R, R).

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\). A rank R > 0 Tensor to be unpacked.

Outputs:

A tuple of tensors, the shapes of all objects are the same.

Raises

ValueError – If axis is out of the range [-len(input_x.shape()), len(input_x.shape())).

Examples

>>> unpack = P.Unpack()
>>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
>>> output = unpack(input_x)
([1, 1, 1, 1], [2, 2, 2, 2])
class mindspore.ops.operations.UnsortedSegmentMin(*args, **kwargs)[source]

Computes the minimum along segments of a tensor.

Entries whose segment_ids value is negative are ignored.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\).

  • segment_ids (Tensor) - A 1-D tensor whose shape is \((x_1)\).

  • num_segments (int) - The value specifies the number of distinct segment_ids.

Outputs:

Tensor. With the number of num_segments set as \(N\), the shape is \((N, x_2, ..., x_R)\).

Examples

>>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
>>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
>>> num_segments = 2
>>> unsorted_segment_min = P.UnsortedSegmentMin()
>>> unsorted_segment_min(input_x, segment_ids, num_segments)
[[1., 2., 3.], [4., 2., 1.]]
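
A vectorized NumPy sketch of the same reduction is shown below. The helper name is hypothetical, a floating dtype is assumed, and the fill value used for segments that receive no elements is an assumption:

>>> import numpy as np
>>> def unsorted_segment_min_ref(x, segment_ids, num_segments):
>>>     out = np.full((num_segments,) + x.shape[1:], np.inf, dtype=x.dtype)
>>>     valid = segment_ids >= 0  # negative ids are ignored
>>>     np.minimum.at(out, segment_ids[valid], x[valid])
>>>     return out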
class mindspore.ops.operations.UnsortedSegmentSum(*args, **kwargs)[source]

Computes the sum along segments of a tensor.

Calculates a tensor such that \(\text{output}[i] = \sum_{segment\_ids[j] == i} \text{data}[j, \ldots]\), where \(j\) is a tuple describing the index of an element in data. segment_ids selects which elements in data to sum up. segment_ids does not need to be sorted, and it does not need to cover all values in the entire valid value range.

If a given segment id \(i\) receives no elements, then \(\text{output}[i] = 0\). Entries whose segment_ids value is negative are ignored. num_segments must be no less than the largest value in segment_ids plus 1.

Inputs:
  • input_x (Tensor) - The shape is \((x_1, x_2, ..., x_R)\).

  • segment_ids (Tensor) - Set the shape as \((x_1, x_2, ..., x_N)\), where 0 < N <= R. Type must be int.

  • num_segments (int) - Set \(z\) as num_segments.

Outputs:

Tensor, the shape is \((z, x_{N+1}, ..., x_R)\).

Examples

>>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
>>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
>>> num_segments = 4
>>> P.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
[3, 3, 4, 0]
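
A minimal NumPy restatement of the definition above (the helper name is hypothetical): each data slice is summed into the segment chosen by segment_ids, and empty segments stay at 0:

>>> import numpy as np
>>> def unsorted_segment_sum_ref(x, segment_ids, num_segments):
>>>     out = np.zeros((num_segments,) + x.shape[segment_ids.ndim:], x.dtype)
>>>     for idx in np.ndindex(segment_ids.shape):
>>>         sid = int(segment_ids[idx])
>>>         if sid >= 0:  # negative ids are ignored
>>>             out[sid] += x[idx]
>>>     return out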
class mindspore.ops.operations.ZerosLike(*args, **kwargs)[source]

Creates a new tensor of zeros with the same shape and data type as the input tensor.

Inputs:
  • input_x (Tensor) - Input tensor.

Outputs:

Tensor, has the same shape and type as input_x but filled with zeros.

Examples

>>> zeroslike = P.ZerosLike()
>>> x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
>>> output = zeroslike(x)