mindspore.ops
Operators can be used in the construct function of Cell.
Examples
>>> import mindspore.ops as ops
For the operators added or deleted in mindspore.ops and the changes to their supported platforms compared with the previous version, please refer to https://gitee.com/mindspore/docs/blob/r1.6/resource/api_updates/ops_api_updates.md.
operations
The Primitive operators in operations need to be instantiated before being used.
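As a minimal sketch of this pattern (assuming a standard MindSpore 1.6 installation; MatMulNet is a hypothetical example class), a Primitive such as ops.MatMul is instantiated once and then called on tensors, typically inside the construct function of a Cell:

import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor

class MatMulNet(nn.Cell):
    def __init__(self):
        super(MatMulNet, self).__init__()
        # Instantiate the Primitive operator once in __init__ ...
        self.matmul = ops.MatMul()

    def construct(self, x, y):
        # ... then call the instance like a function in construct.
        return self.matmul(x, y)

x = Tensor(np.ones((2, 3), dtype=np.float32))
y = Tensor(np.ones((3, 4), dtype=np.float32))
print(MatMulNet()(x, y).shape)
# (2, 4)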
Neural Network Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Computes inverse hyperbolic cosine of the inputs element-wise. | |
| | Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. | |
| | Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. | |
| | Updates gradients by the Adaptive Moment Estimation algorithm with weight decay (AdamWeightDecay). | |
| | AdaptiveAvgPool2D operation. | |
| | Updates relevant entries according to the adadelta scheme. | |
| | Updates relevant entries according to the adagrad scheme. | |
| | Update var according to the proximal adagrad scheme. | |
| | Updates relevant entries according to the adagradv2 scheme. | |
| | Updates relevant entries according to the adamax scheme. | |
| | Updates relevant entries according to the AddSign algorithm. | |
| | Optimizer that implements the centered RMSProp algorithm. | |
| | Updates var by subtracting alpha * delta from it. | |
| | Optimizer that implements the Momentum algorithm. | |
| | Updates relevant entries according to the AddSign algorithm. | |
| | Updates relevant entries according to the proximal adagrad algorithm. | |
| | Updates relevant entries according to the FOBOS (Forward Backward Splitting) algorithm. | |
| | Optimizer that implements the Root Mean Square prop (RMSProp) algorithm. | |
| | Average pooling operation. | |
| | 3D Average pooling operation. | |
| | It's similar to operator | Deprecated |
| | Batch Normalization for input data and updated parameters. | |
| | Adds sigmoid activation function to input logits, and uses the given logits to compute binary cross entropy between the logits and the label. | |
| | Returns sum of input and bias tensor. | |
| | Computes the binary cross entropy between the logits and the labels. | |
| | Compute accidental hits of sampled classes which match target classes. | |
| | 2D convolution layer. | |
| | The Conv2DBackpropInput interface is deprecated, please refer to | Deprecated |
| | Compute a 2D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution). | |
| | 3D convolution layer. | |
| | Computes a 3D transposed convolution, which is also known as a deconvolution (although it is not an actual deconvolution). | |
| | Performs greedy decoding on the logits given in inputs. | |
| | Calculates the CTC (Connectionist Temporal Classification) loss and the gradient. | |
| | Returns the dimension index in the destination data format given in the source data format. | |
| | DepthwiseConv2dNative will be deprecated in the future. | Deprecated |
| | During training, randomly zeroes some of the elements of the input tensor with probability 1-keep_prob from a Bernoulli distribution. | |
| | During training, randomly zeroes some of the channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 4-dimensional tensor with a shape of NCHW, the channel feature map refers to a 2-dimensional feature map with the shape of HW). | |
| | During training, randomly zeroes some of the channels of the input tensor with probability 1-keep_prob from a Bernoulli distribution (for a 5-dimensional tensor with a shape of NCDHW, the channel feature map refers to a 3-dimensional feature map with a shape of DHW). | |
| | The DropoutDoMask interface is deprecated, please use the | Deprecated |
| | The DropoutGenMask interface is deprecated, please use the | Deprecated |
| | Applies a single-layer gated recurrent unit (GRU) to an input sequence. | |
| | Applies a recurrent neural network to the input. | |
| | Computes exponential linear: | |
| | Fast Gaussian Error Linear Units activation function. | |
| | Flattens a tensor without changing its batch size on the 0-th axis. | |
| | Computes the remainder of division element-wise. | |
| | Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. | |
| | Merges the duplicate value of the gradient and then updates parameters by the Adaptive Moment Estimation (Adam) algorithm. | |
| | Merges the duplicate value of the gradient and then updates relevant entries according to the proximal adagrad algorithm. | |
| | Gaussian Error Linear Units activation function. | |
| | Returns the next element in the dataset queue. | |
| | Applies the hard shrinkage function element-wise, each element complies with the following function: | |
| | Hard sigmoid activation function. | |
| | Hard swish activation function. | |
| | Computes the Kullback-Leibler divergence between the logits and the labels. | |
| | Calculates half of the L2 norm of a tensor without sqrt. | |
| | L2 Normalization Operator. | |
| | Conducts LARS (layer-wise adaptive rate scaling) update on the sum of squares of gradient. | |
| | Applies the Layer Normalization to the input tensor. | |
| | Log Softmax activation function. | |
| | Local Response Normalization. | |
| | Performs the Long Short-Term Memory (LSTM) on the input. | |
| | Max pooling operation. | |
| | 3D max pooling operation. | |
| | Performs max pooling on the input Tensor and returns both max values and indices. | |
| | Pads the input tensor according to the paddings and mode. | |
| | Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise. | |
| | Gets the negative log likelihood loss between logits and labels. | |
| | Computes a one-hot tensor. | |
| | Pads the input tensor according to the paddings. | |
| | Parametric Rectified Linear Unit activation function. | |
| | Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise. | |
| | Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise. | |
| | Rectified Linear Unit activation function. | |
| | Resizes an image to a certain size using the bilinear interpolation. | |
| | Computes the RNNTLoss and its gradient with respect to the softmax outputs. | |
| | Computes the Region of Interest (RoI) Align operator. | |
| | Computes SeLU (scaled exponential Linear Unit) of input tensors element-wise. | |
| | Computes the stochastic gradient descent. | |
| | Sigmoid activation function. | |
| | Uses the given logits to compute sigmoid cross entropy between the logits and the label. | |
| | Computes smooth L1 loss, a robust L1 loss. | |
| | SoftMarginLoss operation. | |
| | Softmax operation. | |
| | Gets the softmax cross-entropy value between logits and labels with one-hot encoding. | |
| | Softplus activation function. | |
| | Applies the SoftShrink function element-wise. | |
| | Softsign activation function. | |
| | Updates relevant entries according to the adagrad scheme. | |
| | Updates relevant entries according to the adagrad scheme, one more epsilon attribute than SparseApplyAdagrad. | |
| | Updates relevant entries according to the proximal adagrad algorithm. | |
| | Computes the softmax cross-entropy value between logits and sparse encoding labels. | |
| | Stacks a list of tensors in specified axis. | |
| | Tanh activation function. | |
| | Finds values and indices of the k largest entries along the last dimension. | |
| | Unstacks tensor in specified axis. | |
Math Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Returns absolute value of a tensor element-wise. | |
| | Computes accumulation of all input tensors element-wise. | |
| | Computes arccosine of input tensors element-wise. | |
| | Adds two input tensors element-wise. | |
| | Computes addition of all input tensors element-wise. | |
| | Returns True if abs(x-y) is smaller than tolerance element-wise, otherwise False. | |
| | Computes arcsine of input tensors element-wise. | |
| | Computes inverse hyperbolic sine of the input element-wise. | |
| | Updates a Parameter by adding a value to it. | |
| | Updates a Parameter by subtracting a value from it. | |
| | Computes the trigonometric inverse tangent of the input element-wise. | |
| | Returns arctangent of x/y element-wise. | |
| | Computes inverse hyperbolic tangent of the input element-wise. | |
| | Computes matrix multiplication between two tensors by batch. | |
| | Computes BesselI0e of input element-wise. | |
| | Computes BesselI1e of input element-wise. | |
| | Returns bitwise and of two tensors element-wise. | |
| | Returns bitwise or of two tensors element-wise. | |
| | Returns bitwise xor of two tensors element-wise. | |
| | Computes the batched p-norm distance between each pair of the two collections of row vectors. | |
| | Rounds a tensor up to the closest integer element-wise. | |
| | Computes cosine of input element-wise. | |
| | Returns a tensor of complex numbers that are the complex conjugate of each element in input. | |
| | Computes hyperbolic cosine of input element-wise. | |
| | Computes the cumulative product of the tensor x along axis. | |
| | Computes the cumulative sum of input tensor along axis. | |
| | Computes the quotient of dividing the first input tensor by the second input tensor element-wise. | |
| | Computes a safe divide and returns 0 if the y is zero. | |
| | Creates a tensor filled with minimum value in x dtype. | |
| | Computes the equivalence between two tensors element-wise. | |
| | Computes the number of the same elements of two tensors. | |
| | Computes the Gauss error function of x element-wise. | |
| | Computes the complementary error function of x element-wise. | |
| | Computes the inverse error function of input. | |
| | Returns exponential of a tensor element-wise. | |
| | Returns exponential then minus 1 of a tensor element-wise. | |
| | Determines if the elements contain Not a Number (NaN), infinite or negative infinite. | |
| | Rounds a tensor down to the closest integer element-wise. | |
| | Divides the first input tensor by the second input tensor element-wise and rounds down to the closest integer. | |
| | Ger product of x1 and x2. | |
| | Computes the boolean value of \(x > y\) element-wise. | |
| | Computes the boolean value of \(x >= y\) element-wise. | |
| | Returns a rank 1 histogram counting the number of entries in values that fall into every bin. | |
| | Returns a new tensor containing the imaginary value of the input. | |
| | Adds tensor y to specified axis and indices of tensor x. | |
| | Adds v into specified rows of x. | |
| | Subtracts v into specified rows of x. | |
| | Computes the reciprocal of the input tensor element-wise. | |
| | Flips all bits of the input tensor element-wise. | |
| | Determines which elements are inf or -inf for each position. | |
| | Determines which elements are NaN for each position. | |
| | Does a linear interpolation of two tensors start and end based on a float or tensor weight. | |
| | Computes the boolean value of \(x < y\) element-wise. | |
| | Computes the boolean value of \(x <= y\) element-wise. | |
| | Returns a Tensor whose value is num evenly spaced in the interval start and stop (including start and stop), and the length of the output Tensor is num. | |
| | Returns the natural logarithm of a tensor element-wise. | |
| | Returns the natural logarithm of one plus the input tensor element-wise. | |
| | Computes the "logical AND" of two tensors element-wise. | |
| | Computes the "logical NOT" of a tensor element-wise. | |
| | Computes the "logical OR" of two tensors element-wise. | |
| | Returns the matrix norm or vector norm of a given tensor. | |
| | Multiplies matrix a and matrix b. | |
| | Returns the inverse of the input matrix. | |
| | Computes the maximum of input tensors element-wise. | |
| | Computes the minimum of input tensors element-wise. | |
| | Computes the remainder of dividing the first input tensor by the second input tensor element-wise. | |
| | Multiplies two tensors element-wise. | |
| | Computes x * y element-wise. | |
| | Returns a tensor with negative values of the input tensor element-wise. | |
| | When an object detection problem is performed in the computer vision field, the object detection algorithm generates a plurality of bounding boxes. | |
| | Computes the non-equivalence of two tensors element-wise. | |
| | Allocates a flag to store the overflow status. | |
| | Clears the flag which stores the overflow status. | |
| | Updates the flag which is the output tensor of NPUAllocFloatStatus with the latest overflow status. | |
| | Computes a tensor to the power of the second input. | |
| | Returns a Tensor that is the real part of the input. | |
| | Divides the first input tensor by the second input tensor in floating-point type element-wise. | |
| | Returns reciprocal of a tensor element-wise. | |
| | Reduces a dimension of a tensor by the "logical AND" of all elements in the dimension, by default. | |
| | Reduces a dimension of a tensor by the "logical OR" of all elements in the dimension, by default. | |
| | Reduces a dimension of a tensor by the maximum value in this dimension, by default. | |
| | Reduces a dimension of a tensor by averaging all elements in the dimension, by default. | |
| | Reduces a dimension of a tensor by the minimum value in the dimension, by default. | |
| | Reduces a dimension of a tensor by multiplying all elements in the dimension, by default. | |
| | Reduces a dimension of a tensor by summing all elements in the dimension, by default. | |
| | Returns half to even of a tensor element-wise. | |
| | Computes reciprocal of square root of input tensor element-wise. | |
| | Performs sign on the tensor element-wise. | |
| | Computes sine of the input element-wise. | |
| | Computes hyperbolic sine of the input element-wise. | |
| | Returns square root of a tensor element-wise. | |
| | Returns square of a tensor element-wise. | |
| | Subtracts the second input tensor from the first input tensor element-wise and returns the square of it. | |
| | Returns the square sum of a tensor element-wise. | |
| | Subtracts the second input tensor from the first input tensor element-wise. | |
| | Computes tangent of x element-wise. | |
| | Divides the first input tensor by the second input tensor element-wise for integer types; negative numbers will round fractional quantities towards zero. | |
| | Returns the remainder of division element-wise. | |
| | Divides the first input tensor by the second input tensor element-wise. | |
| | Computes the first input tensor multiplied by the logarithm of the second input tensor element-wise. | |
Array Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Updates relevant entries according to the FTRL scheme. | |
| | Returns the indices of the maximum value of a tensor across the axis. | |
| | Calculates the maximum value with the corresponding index. | |
| | Returns the indices of the minimum value of a tensor across the axis. | |
| | Calculates the minimum value with the corresponding index, and returns indices and values. | |
| | Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions. | |
| | Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions. | |
| | Broadcasts input tensor to a given shape. | |
| | Returns a tensor with the new specified data type. | |
| | Connects tensors in the specified axis. | |
| | Rearranges blocks of depth data into spatial dimensions. | |
| | Returns the data type of the input tensor as mindspore.dtype. | |
| | Returns the shape of the input tensor. | |
| | Computes the Levenshtein Edit Distance. | |
| | Returns a slice of input tensor based on the specified indices. | |
| | Adds an additional dimension to input_x at the given axis. | |
| | Extracts patches from input and puts them in the "depth" output dimension. | |
| | Creates a tensor with ones on the diagonal and zeros in the rest. | |
| | Creates a tensor filled with a scalar value. | |
| | Merges the duplicate value of the gradient and then updates relevant entries according to the FTRL-proximal scheme. | |
| | Returns a slice of the input tensor based on the specified indices and axis. | |
| | Gathers values along an axis specified by dim. | |
| | Gathers slices from a tensor by indices. | |
| | Returns a Tensor with the same shape and contents as input. | |
| | Updates specified rows with values in v. | |
| | Computes the inverse of an index permutation. | |
| | Determines which elements are finite for each position. | |
| | Checks whether an object is an instance of a target type. | |
| | Checks whether this type is a sub-class of another type. | |
| | Fills elements of self tensor with value where mask is True. | |
| | Returns a new 1-D Tensor which indexes the input tensor according to the boolean mask. | |
| | Generates coordinate matrices from given coordinate tensors. | |
| | Creates a tensor filled with value ones. | |
| | Creates a new tensor. | |
| | Extends the last dimension of the input tensor from 1 to pad_dim_size, by filling with 0. | |
| | Concats tensors in the first dimension. | |
| | Generates n random samples from 0 to n-1 without repeating. | |
| | Returns the rank of a tensor. | |
| | Reshapes the input tensor with the same values based on a given shape tuple. | |
| | Resizes the input tensor by using the nearest neighbor algorithm. | |
| | Reverses variable length slices. | |
| | Reverses specific dimensions of a tensor. | |
| | Returns an integer that is closest to x element-wise. | |
| | Checks whether the data type and shape of two tensors are the same. | |
| | Casts the input scalar to another type. | |
| | Converts a scalar to a Tensor. | |
| | Converts a scalar to a Tensor, and converts the data type to the specified type. | |
| | Updates the value of the input tensor through the addition operation. | |
| | Updates the value of the input tensor through the divide operation. | |
| | Updates the value of the input tensor through the maximum operation. | |
| | Updates the value of the input tensor through the minimum operation. | |
| | Updates the value of the input tensor through the multiply operation. | |
| | Scatters a tensor into a new tensor depending on the specified indices. | |
| | Applies sparse addition to individual values or slices in a tensor. | |
| | Applies sparse subtraction to individual values or slices in a tensor. | |
| | Updates tensor values by using input indices and value. | |
| | Applies sparse addition to the input using individual values or slices. | |
| | Updates the value of the input tensor through the subtraction operation. | |
| | Updates tensor values by using input indices and value. | |
| | Returns the selected elements, either from input \(x\) or input \(y\), depending on the condition. | |
| | Returns the shape of the input tensor. | |
| | Returns the size of a Tensor. | |
| | Slices a tensor in the specified shape. | |
| | Sorts the elements of the input tensor along a given dimension in ascending order by value. | |
| | SpaceToBatch is deprecated. | Deprecated |
| | Divides spatial dimensions into blocks and combines the block size with the original batch. | |
| | Rearranges blocks of spatial data into depth. | |
| | Updates relevant entries according to the FTRL-proximal scheme. | |
| | Updates relevant entries according to the FTRL-proximal scheme. | |
| | Returns a slice of input tensor based on the specified indices and axis. | |
| | Splits the input tensor into output_num of tensors along the given axis and output numbers. | |
| | Splits the input tensor into num_split tensors along the given dimension. | |
| | Returns a tensor with the same data type but dimensions of 1 are removed based on axis. | |
| | Extracts a strided slice of a tensor. | |
| | Creates a new tensor by adding the values from the positions in input_x indicated by indices, with values from updates. | |
| | By comparing the value at the position indicated by the index in input_x with the value in the update, the value at the index will eventually be equal to the largest one to create a new tensor. | |
| | By comparing the value at the position indicated by the index in input_x with the value in the updates, the value at the index will eventually be equal to the smallest one to create a new tensor. | |
| | Creates a new tensor by subtracting the values from the positions in input_x indicated by indices, with values from updates. | |
| | Creates a new tensor by updating the positions in input_x indicated by indices, with values from update. | |
| | Replicates a tensor with given multiples times. | |
| | Permutes the dimensions of the input tensor according to input permutation. | |
| | Converts a tuple to a tensor. | |
| | Returns the unique elements of the input tensor and also returns a tensor containing the index of each value of the input tensor corresponding to the output unique tensor. | |
| | Returns unique elements and relative indexes in 1-D tensor, filled with padding num. | |
| | Computes the maximum along segments of a tensor. | |
| | Computes the minimum of a tensor along segments. | |
| | Computes the product of a tensor along segments. | |
| | Computes the sum of a tensor along segments. | |
| | Creates a tensor filled with value zeros. | |
| | Creates a new tensor. | |
Communication Operators
Note that the APIs in the following list need to preset communication environment variables. For Ascend devices, users need to prepare the rank table and set rank_id and device_id; please see the Ascend tutorial for more details. For GPU devices, users need to prepare the host file and MPI; please see the GPU tutorial for more details. A minimal usage sketch is given after the table below.
| API Name | Description | Supported Platforms |
|---|---|---|
| | Gathers tensors from the specified communication group. | |
| | Reduces the tensor data across all devices in such a way that all devices will get the same final result. | |
| | AlltoAll is a collective operation. | |
| | Broadcasts the tensor to the whole group. | |
| | NeighborExchange is a collective operation. | |
| | NeighborExchangeV2 is a collective operation. | |
| | Operation options for reducing tensors. | |
| | Reduces and scatters tensors from the specified communication group. | |
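As a minimal sketch (assuming the communication environment described above is already configured and the script is launched on every participating device; AllReduceSum is a hypothetical example class), AllReduce can be wrapped in a Cell as follows:

import mindspore.nn as nn
import mindspore.ops as ops
from mindspore.communication import init

init()  # requires the rank table / host file and MPI setup described above

class AllReduceSum(nn.Cell):
    def __init__(self):
        super(AllReduceSum, self).__init__()
        # Sum the input tensor across all devices in the default group.
        self.all_reduce = ops.AllReduce(ops.ReduceOp.SUM)

    def construct(self, x):
        return self.all_reduce(x)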
Debug Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Outputs the tensor to protocol buffer through histogram summary operator. | |
| | This operation is used as a tag to hook gradient in intermediate variables. | |
| | Outputs the image tensor to protocol buffer through image summary operator. | |
| | Attaches callback to the graph node that will be invoked on the node's gradient. | |
| | Outputs the tensor or string to stdout. | |
| | Outputs a scalar to a protocol buffer through a scalar summary operator. | |
| | Outputs a tensor to a protocol buffer through a tensor summary operator. | |
Random Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Produces random positive floating-point values x, distributed according to probability density function: | |
| | Generates random labels with a log-uniform distribution for sampled_candidates. | |
| | Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of tensor input. | |
| | Produces random non-negative integer values i, distributed according to discrete probability function: | |
| | Generates random samples from a given categorical distribution tensor. | |
| | Generates a random sample as index tensor with a mask tensor from a given tensor. | |
| | Generates random numbers according to the Laplace random number distribution (mean=0, lambda=1). | |
| | Generates random numbers according to the standard Normal (or Gaussian) random number distribution. | |
| | Uniform candidate sampler. | |
| | Produces random integer values i, uniformly distributed on the interval [minval, maxval), that is, distributed according to the discrete probability function: | |
| | Produces random floating-point values i, uniformly distributed in the interval [0, 1). | |
Image Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Extracts crops from the input image tensor and resizes them. | |
Sparse Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Converts a sparse representation into a dense tensor. | |
| | Multiplies sparse matrix A by dense matrix B. | |
Custom Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Custom primitive is used for user-defined operators and is designed to enhance the expressive ability of built-in primitives. | |
Other Operators
| API Name | Description | Supported Platforms |
|---|---|---|
| | Assigns Parameter with a value. | |
| | Decodes bounding box locations. | |
| | Encodes bounding box locations. | |
| | Checks whether the data type and the shape of corresponding elements from tuples x and y are the same. | |
| | Checks bounding box. | |
| | Depend is used for processing dependency operations. | |
| | Determines whether the targets are in the top k predictions. | |
| | Calculates intersection over union for boxes. | |
| | Updates log_probs with repeat n-grams. | |
| | Makes a partial function instance. | |
| | Calculates population count. | |
composite
The composite operators are pre-defined combinations of operators. A usage sketch of one of them is given after the table below.
| API Name | Description | Supported Platforms |
|---|---|---|
| | Computation of batch dot product between samples in two tensors containing batch dims. | |
| | Clips tensor values by the ratio of the sum of their norms. | |
| | Clips tensor values to a specified min and max. | |
| | A decorator that adds a flag to the function. | |
| | Counts the number of nonzero elements across the axis of the input tensor. | |
| | Computation of the cumulative minimum of elements of 'x' in the dimension axis, and the index location of each minimum value found in the dimension 'axis'. | |
| | Computation of a dot product between samples in two tensors. | |
| | Generates random numbers according to the Gamma random number distribution. | |
| | A higher-order function which is used to generate the gradient function for the input function. | |
| | HyperMap will apply the set operation to input sequences. | |
| | Generates random numbers according to the Laplace random number distribution. | |
| | Map will apply the set operation on input sequences. | |
| | Returns the matrix product of two arrays. | |
| | Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor. | |
| | Generates overloaded functions. | |
| | Generates random numbers according to the Normal (or Gaussian) random number distribution. | |
| | Generates random numbers according to the Poisson random number distribution. | |
| | Repeats elements of a tensor along an axis, like np.repeat. | |
| | Returns a mask tensor representing the first N positions of each cell. | |
| | Computation of Tensor contraction on arbitrary axes between tensors a and b. | |
| | Generates random numbers according to the Uniform random number distribution. | |
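For example, the higher-order function ops.GradOperation listed above generates a gradient function from an input function; a minimal sketch (MulNet is a hypothetical example class):

import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor

class MulNet(nn.Cell):
    def construct(self, x, y):
        return x * y

net = MulNet()
grad_fn = ops.GradOperation(get_all=True)(net)  # gradients w.r.t. all inputs
x = Tensor(np.array(2.0, np.float32))
y = Tensor(np.array(3.0, np.float32))
print(grad_fn(x, y))
# gradients of x * y are (y, x), i.e. (3.0, 2.0)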
functional
The functional operators are the pre-instantiated Primitive operators, which can be used directly as functions. The use cases of some functional operators are as follows:
from mindspore import Tensor, ops
from mindspore import dtype as mstype
input_x = Tensor(-1, mstype.int32)
input_dict = {'x':1, 'y':2}
result_abs = ops.absolute(input_x)
print(result_abs)
result_in_dict = ops.in_dict('x', input_dict)
print(result_in_dict)
result_not_in_dict = ops.not_in_dict('x', input_dict)
print(result_not_in_dict)
result_isconstant = ops.isconstant(input_x)
print(result_isconstant)
result_typeof = ops.typeof(input_x)
print(result_typeof)
# outputs:
# 1
# True
# False
# True
# Tensor[Int32]
| functional | Description | 
|---|---|
| mindspore.ops.absolute | Refer to  | 
| mindspore.ops.acos | Refer to  | 
| mindspore.ops.acosh | Refer to  | 
| mindspore.ops.add | Refer to  | 
| mindspore.ops.addn | Refer to  | 
| mindspore.ops.asin | Refer to  | 
| mindspore.ops.asinh | Refer to  | 
| mindspore.ops.assign | Refer to  | 
| mindspore.ops.assign_add | Refer to  | 
| mindspore.ops.assign_sub | Refer to  | 
| mindspore.ops.atan | Refer to  | 
| mindspore.ops.atan2 | Refer to  | 
| mindspore.ops.atanh | Refer to  | 
| mindspore.ops.bitwise_and | Refer to  | 
| mindspore.ops.bitwise_or | Refer to  | 
| mindspore.ops.bitwise_xor | Refer to  | 
| mindspore.ops.bool_and | Calculate the result of logical AND operation. (Usage is the same as “and” in Python) | 
| mindspore.ops.bool_eq | Determine whether the Boolean values are equal. (Usage is the same as “==” in Python) | 
| mindspore.ops.bool_not | Calculate the result of logical NOT operation. (Usage is the same as “not” in Python) | 
| mindspore.ops.bool_or | Calculate the result of logical OR operation. (Usage is the same as “or” in Python) | 
| mindspore.ops.cast | Refer to  | 
| mindspore.ops.cos | Refer to  | 
| mindspore.ops.cosh | Refer to  | 
| mindspore.ops.cumprod | Refer to  | 
| mindspore.ops.cumsum | Refer to  | 
| mindspore.ops.div | Refer to  | 
| mindspore.ops.depend | Refer to  | 
| mindspore.ops.dtype | Refer to  | 
| mindspore.ops.erf | Refer to  | 
| mindspore.ops.erfc | Refer to  | 
| mindspore.ops.eye | Refer to  | 
| mindspore.ops.equal | Refer to  | 
| mindspore.ops.expand_dims | Refer to  | 
| mindspore.ops.exp | Refer to  | 
| mindspore.ops.fill | Refer to  | 
| mindspore.ops.floor | Refer to  | 
| mindspore.ops.floordiv | Refer to  | 
| mindspore.ops.floormod | Refer to  | 
| mindspore.ops.gather | Refer to  | 
| mindspore.ops.gather_d | Refer to  | 
| mindspore.ops.gather_nd | Refer to  | 
| mindspore.ops.ge | Refer to  | 
| mindspore.ops.gt | Refer to  | 
| mindspore.ops.invert | Refer to  | 
| mindspore.ops.in_dict | Determine if a str in dict. | 
| mindspore.ops.is_not | Determine whether the input is not the same as the other one. (Usage is the same as “is not” in Python) | 
| mindspore.ops.is_ | Determine whether the input is the same as the other one. (Usage is the same as “is” in Python) | 
| mindspore.ops.isconstant | Determine whether the object is constant. | 
| mindspore.ops.isfinite | Refer to  | 
| mindspore.ops.isinstance_ | Refer to  | 
| mindspore.ops.isnan | Refer to  | 
| mindspore.ops.issubclass_ | Refer to  | 
| mindspore.ops.log | Refer to  | 
| mindspore.ops.logical_and | Refer to  | 
| mindspore.ops.le | Refer to  | 
| mindspore.ops.less | Refer to  | 
| mindspore.ops.logical_not | Refer to  | 
| mindspore.ops.logical_or | Refer to  | 
| mindspore.ops.maximum | Refer to  | 
| mindspore.ops.minimum | Refer to  | 
| mindspore.ops.mul | Refer to  | 
| mindspore.ops.neg_tensor | Refer to  | 
| mindspore.ops.not_equal | Refer to  | 
| mindspore.ops.not_in_dict | Determine whether the object is not in the dict. | 
| mindspore.ops.ones_like | Refer to  | 
| mindspore.ops.partial | Refer to  | 
| mindspore.ops.pows | Refer to  | 
| mindspore.ops.print_ | Refer to  | 
| mindspore.ops.rank | Refer to  | 
| mindspore.ops.reduce_max | Refer to  | 
| mindspore.ops.reduce_mean | Refer to  | 
| mindspore.ops.reduce_min | Refer to  | 
| mindspore.ops.reduce_prod | Refer to  | 
| mindspore.ops.reduce_sum | Refer to  | 
| mindspore.ops.reshape | Refer to  | 
| mindspore.ops.same_type_shape | Refer to  | 
| mindspore.ops.scalar_add | Get the sum of two numbers. (Usage is the same as “+” in Python) | 
| mindspore.ops.scalar_cast | Refer to  | 
| mindspore.ops.check_bprop | Refer to  | 
| mindspore.ops.scalar_div | Get the quotient of dividing the first input number by the second input number. (Usage is the same as “/” in Python) | 
| mindspore.ops.scalar_eq | Determine whether two numbers are equal. (Usage is the same as “==” in Python) | 
| mindspore.ops.scalar_floordiv | Divide the first input number by the second input number and round down to the closest integer. (Usage is the same as “//” in Python) | 
| mindspore.ops.scalar_ge | Determine whether the number is greater than or equal to another number. (Usage is the same as “>=” in Python) | 
| mindspore.ops.scalar_gt | Determine whether the number is greater than another number. (Usage is the same as “>” in Python) | 
| mindspore.ops.scalar_le | Determine whether the number is less than or equal to another number. (Usage is the same as “<=” in Python) | 
| mindspore.ops.scalar_log | Get the natural logarithm of the input number. | 
| mindspore.ops.scalar_lt | Determine whether the number is less than another number. (Usage is the same as “<” in Python) | 
| mindspore.ops.scalar_mod | Get the remainder of dividing the first input number by the second input number. (Usage is the same as “%” in Python) | 
| mindspore.ops.scalar_mul | Get the product of the input two numbers. (Usage is the same as “*” in Python) | 
| mindspore.ops.scalar_ne | Determine whether two numbers are not equal. (Usage is the same as “!=” in Python) | 
| mindspore.ops.scalar_pow | Compute a number to the power of the second input number. | 
| mindspore.ops.scalar_sub | Subtract the second input number from the first input number. (Usage is the same as “-” in Python) | 
| mindspore.ops.scalar_to_array | Refer to  | 
| mindspore.ops.scalar_to_tensor | Refer to  | 
| mindspore.ops.scalar_uadd | Get the positive value of the input number. | 
| mindspore.ops.scalar_usub | Get the negative value of the input number. | 
| mindspore.ops.scatter_nd | Refer to  | 
| mindspore.ops.scatter_nd_update | Refer to  | 
| mindspore.ops.scatter_update | Refer to  | 
| mindspore.ops.shape | Refer to  | 
| mindspore.ops.shape_mul | The input of shape_mul must be a shape (a tuple of int); it multiplies all elements in the tuple together. | 
| mindspore.ops.sin | Refer to  | 
| mindspore.ops.sinh | Refer to  | 
| mindspore.ops.size | Refer to  | 
| mindspore.ops.sort | Refer to  | 
| mindspore.ops.sqrt | Refer to  | 
| mindspore.ops.square | Refer to  | 
| mindspore.ops.squeeze | Refer to  | 
| mindspore.ops.stack | Refer to  | 
| mindspore.ops.stop_gradient | Disable update during back propagation. (stop_gradient) | 
| mindspore.ops.strided_slice | Refer to  | 
| mindspore.ops.string_concat | Concatenate two strings. | 
| mindspore.ops.string_eq | Determine if two strings are equal. | 
| mindspore.ops.sub | Refer to  | 
| mindspore.ops.tan | Refer to  | 
| mindspore.ops.tanh | Refer to  | 
| mindspore.ops.tensor_add | Refer to  | 
| mindspore.ops.tensor_div | Refer to  | 
| mindspore.ops.tensor_exp | Refer to  | 
| mindspore.ops.tensor_expm1 | Refer to  | 
| mindspore.ops.tensor_floordiv | Refer to  | 
| mindspore.ops.tensor_ge | Refer to  | 
| mindspore.ops.tensor_gt | Refer to  | 
| mindspore.ops.tensor_le | Refer to  | 
| mindspore.ops.tensor_lt | Refer to  | 
| mindspore.ops.tensor_mod | Refer to  | 
| mindspore.ops.tensor_mul | Refer to  | 
| mindspore.ops.tensor_pow | Refer to  | 
| mindspore.ops.tensor_scatter_add | Refer to  | 
| mindspore.ops.tensor_scatter_update | Refer to  | 
| mindspore.ops.tensor_slice | Refer to  | 
| mindspore.ops.tensor_sub | Refer to  | 
| mindspore.ops.tile | Refer to  | 
| mindspore.ops.transpose | Refer to  | 
| mindspore.ops.tuple_to_array | Refer to  | 
| mindspore.ops.typeof | Get type of object. | 
| mindspore.ops.zeros_like | Refer to  | 

| API Name | Description | Supported Platforms |
|---|---|---|
| | A wrapper function to generate the gradient function for the input function. | |
| | Computes the Jacobian-vector product of the given network. | |
| | Returns a narrowed tensor from the input tensor. | |
| | Returns the selected elements, either from input \(x\) or input \(y\), depending on the condition cond. | |
| | Computes the vector-Jacobian product of the given network. | |
primitive
| API Name | Description |
|---|---|
| | Creates a PrimitiveWithInfer operator that can infer the value at compile time. |
| | Primitive attributes register. |
| | Primitive is the base class of operator primitives in Python. |
| | PrimitiveWithCheck is the base class of primitives in Python. It defines functions for checking operator input arguments but uses the infer method registered in C++ source code. |
| | PrimitiveWithInfer is the base class of primitives in Python and defines functions for tracking inference in Python. |
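As a brief illustration of the compile-time value inference mentioned above, the constexpr decorator in mindspore.ops wraps a plain Python function into a PrimitiveWithInfer whose value is computed when the graph is compiled; a minimal sketch, assuming graph mode (the default) and a hypothetical FlattenTo2D example class:

import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore.ops import constexpr

@constexpr
def flat_shape(n):
    # Ordinary Python, evaluated at graph compile time and folded
    # into the constant shape tuple consumed by Reshape.
    return (1, n)

class FlattenTo2D(nn.Cell):
    def __init__(self):
        super(FlattenTo2D, self).__init__()
        self.reshape = ops.Reshape()

    def construct(self, x):
        return self.reshape(x, flat_shape(6))

x = Tensor(np.arange(6, dtype=np.float32).reshape(2, 3))
print(FlattenTo2D()(x).shape)
# (1, 6)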
vm_impl_registry
| API Name | Description |
|---|---|
| | Gets the virtual implementation function by a primitive object or primitive name. |
op_info_register
| API Name | Description |
|---|---|
| | Class for AiCPU operator information register. |
| | A decorator which is used to bind the registration information to the func parameter of |
| | Class used for generating the registration information for the func parameter of |
| | Various combinations of dtype and format of Ascend ops. |
| | A decorator which is used to register an operator. |
| | Class for TBE operator information register. |