mindspore.mint
mindspore.mint provides a large number of functional, nn, and optimizer interfaces. Their usage and behavior are consistent with mainstream industry conventions, for easy reference. The mint interfaces are currently experimental and perform better than ops in graph mode at the O0 level and in PyNative mode. The Ascend GE backend and the CPU/GPU backends are currently not supported; support will be extended gradually.
The module import method is as follows:
from mindspore import mint
For the operators added to or removed from mindspore.mint since the previous version, and for changes to their supported platforms, please refer to mindspore.mint API Interface Change.
Tensor
Creation Operations
API Name |
Description |
Supported Platforms |
Creates a sequence of numbers that begins at start and extends by increments of step up to but not including end. |
|
|
Creates a tensor with uninitialized data, whose shape, dtype and device are described by the arguments size, dtype and device respectively. |
|
|
Returns an uninitialized Tensor with the same shape as the input. |
|
|
Returns a tensor with ones on the diagonal and zeros in the rest. |
|
|
Create a Tensor of the specified shape and fill it with the specified value. |
|
|
Return a Tensor of the same shape as input and filled with fill_value. |
|
|
Generate a one-dimensional tensor with steps elements, evenly distributed in the interval [start, end]. |
|
|
Creates a tensor filled with value ones. |
|
|
Creates a tensor filled with 1, with the same shape as input, and its data type is determined by the given dtype. |
|
|
Converts polar coordinates to Cartesian coordinates. |
|
|
Creates a tensor filled with 0, with shape described by size and data type dtype. |
|
|
Creates a tensor filled with 0, with the same size as input. |
|
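The creation interfaces above follow familiar numeric conventions. For instance, the linspace behavior described (steps elements evenly distributed over the interval [start, end]) can be sketched in plain Python; this is an illustrative helper, not the mint implementation:

```python
def linspace(start, end, steps):
    """Return `steps` values evenly spaced over [start, end], endpoints included."""
    if steps == 1:
        return [float(start)]
    step = (end - start) / (steps - 1)
    return [start + i * step for i in range(steps)]
```

For example, linspace(0, 1, 5) yields five values from 0.0 to 1.0 in increments of 0.25.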
Indexing, Slicing, Joining, Mutating Operations
API Name |
Description |
Supported Platforms |
Connect input tensors along the given dimension. |
|
|
Cut the input Tensor into chunks of sub-tensors along the specified axis. |
|
|
Alias for |
|
|
Gather data from a tensor by indices. |
|
|
Accumulate the elements of alpha times source into the input by adding to the index in the order given in index. |
|
|
Generates a new tensor that accesses the values of input along the specified dim dimension using the indices specified in index. |
|
|
Return a new 1-D tensor that indexes the input tensor according to the boolean mask. |
|
|
Obtains a tensor of a specified length at a specified start position along a specified axis. |
|
|
Return the positions of all non-zero values. |
|
|
Permutes the dimensions of the input tensor according to input dims . |
|
|
Reshape the input tensor based on the given shape. |
|
|
Update the value in src to input according to the specified index. |
|
|
Add all elements in src to the index specified by index to input along dimension specified by dim. |
|
|
Slices the input tensor along the selected dimension at the given index. |
|
|
Splits the Tensor into chunks along the given dim. |
|
|
Return the Tensor after deleting the dimension of size 1 in the specified dim. |
|
|
Stacks a list of tensors in specified dim. |
|
|
Alias for |
|
|
Creates a new tensor by repeating the elements in the input tensor dims times. |
|
|
Interchange two axes of a tensor. |
|
|
Unbind a tensor dimension in specified axis. |
|
|
Adds an additional dimension to the input tensor at the given dimension. |
|
|
Selects elements from input or other based on condition and returns a tensor. |
|
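The gather behavior described above (reading values from the input along a chosen dimension using an index tensor) can be illustrated on nested lists. This is a simplified sketch restricted to dim=1 on a 2-D input, not the mint API itself:

```python
def gather_dim1(matrix, index):
    # out[i][j] = matrix[i][index[i][j]] -- gather along dimension 1.
    return [[row[j] for j in idx_row] for row, idx_row in zip(matrix, index)]
```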
Random Sampling
API Name |
Description |
Supported Platforms |
Samples from the Bernoulli distribution element-wise. |
|
|
Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor. |
|
|
Generates random numbers according to the standard Normal (or Gaussian) random number distribution. |
|
|
Returns a new tensor that fills numbers from the uniform distribution over an interval \([0, 1)\) based on the given shape and dtype. |
|
|
Returns a new tensor that fills numbers from the uniform distribution over an interval \([0, 1)\) based on the given dtype and shape of the input tensor. |
|
|
Returns a new tensor filled with integer numbers from the uniform distribution over an interval \([low, high)\) based on the given shape and dtype. |
|
|
Returns a new tensor filled with integer numbers from the uniform distribution over an interval \([low, high)\) based on the given dtype and shape of the input tensor. |
|
|
Returns a new tensor filled with numbers from the standard normal distribution (mean 0, variance 1) based on the given shape and dtype. |
|
|
Returns a new tensor filled with numbers from the standard normal distribution (mean 0, variance 1) based on the given dtype and shape of the input tensor. |
|
|
Generates random permutation of integers from 0 to n-1. |
|
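The randperm contract above (a random permutation of the integers 0 to n-1) is conventionally implemented with a Fisher-Yates shuffle. The seeding interface below is an assumption for illustration; the real API's seed and dtype handling are not shown:

```python
import random

def randperm(n, seed=None):
    # Fisher-Yates shuffle over [0, n): every permutation is equally likely.
    rng = random.Random(seed)
    out = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randint(0, i)
        out[i], out[j] = out[j], out[i]
    return out
```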
Math Operations
Pointwise Operations
API Name |
Description |
Supported Platforms |
Compute the absolute value of a tensor element-wise. |
|
|
Computes arccosine of input tensors element-wise. |
|
|
Computes inverse hyperbolic cosine of the input element-wise. |
|
|
Scales the other value by alpha and adds it to input. |
|
|
Alias for |
|
|
Alias for |
|
|
Alias for |
|
|
Alias for |
|
|
Alias for |
|
|
Alias for |
|
|
Alias for |
|
|
Computes arcsine of input tensors element-wise. |
|
|
Computes inverse hyperbolic sine of the input element-wise. |
|
|
Computes the trigonometric inverse tangent of the input element-wise. |
|
|
Returns arctangent of input/other element-wise. |
|
|
Computes inverse hyperbolic tangent of the input element-wise. |
|
|
Returns bitwise and of two tensors element-wise. |
|
|
Returns bitwise or of two tensors element-wise. |
|
|
Returns bitwise xor of two tensors element-wise. |
|
|
Rounds a tensor up to the closest integer element-wise. |
|
|
Clamps tensor values between the specified minimum value and maximum value. |
|
|
Computes cosine of input element-wise. |
|
|
Computes hyperbolic cosine of input element-wise. |
|
|
Divides each element of the input by the corresponding element of the other . |
|
|
Alias for |
|
|
Compute the Gaussian error of the input tensor element-wise. |
|
|
Compute the complementary error function of input tensor element-wise. |
|
|
Compute the inverse error of input tensor element-wise. |
|
|
Compute exponential of the input tensor element-wise. |
|
|
Calculates the base-2 exponent of the tensor input element by element. |
|
|
Compute exponential of the input tensor, then minus 1, element-wise. |
|
|
Alias for |
|
|
Computes input to the power of exponent element-wise in double precision, and always returns a mindspore.float64 tensor. |
|
|
Rounds the elements of a tensor down to the closest integer element-wise. |
|
|
Divides the first input tensor by the second input tensor element-wise and rounds down to the closest integer. |
|
|
Computes the floating-point remainder of the division operation input/other. |
|
|
Calculates the fractional part of each element in the input. |
|
|
Return a new tensor containing the imaginary values of the input tensor. |
|
|
Perform a linear interpolation of two tensors input and end based on a float or tensor weight. |
|
|
Compute the natural logarithm of the input tensor element-wise. |
|
|
Returns the logarithm to the base 10 of a tensor element-wise. |
|
|
Compute the natural logarithm of (tensor + 1) element-wise. |
|
|
Returns the logarithm to the base 2 of a tensor element-wise. |
|
|
Computes the logarithm of the sum of exponentiations of the inputs. |
|
|
Logarithm of the sum of exponentiations of the inputs in base of 2. |
|
|
Compute the "logical AND" of two tensors element-wise. |
|
|
Compute the "logical NOT" of the input tensor element-wise. |
|
|
Compute the "logical OR" of two tensors element-wise. |
|
|
Compute the "logical XOR" of two tensors element-wise. |
|
|
Multiplies the other value by the input Tensor. |
|
|
Replace the NaN, positive infinity and negative infinity values in input with the specified values in nan, posinf and neginf respectively. |
|
|
Returns a tensor with negative values of the input tensor element-wise. |
|
|
Alias for |
|
|
Calculates the exponent power of each element in input. |
|
|
Return a new tensor containing the real values of the input tensor. |
|
|
Returns reciprocal of a tensor element-wise. |
|
|
Computes the remainder of input divided by other element-wise. |
|
|
Round elements of input to the nearest integer. |
|
|
Compute reciprocal of square root of input tensor element-wise. |
|
|
Computes Sigmoid of input element-wise. |
|
|
Return an element-wise indication of the sign of a number. |
|
|
Compute sine of the input tensor element-wise. |
|
|
Compute the normalized sinc of input. |
|
|
Compute hyperbolic sine of the input element-wise. |
|
|
Alias for |
|
|
Returns sqrt of a tensor element-wise. |
|
|
Return square of a tensor element-wise. |
|
|
Subtracts scaled other value from self Tensor. |
|
|
Transpose the input tensor. |
|
|
Computes tangent of input element-wise. |
|
|
Computes hyperbolic tangent of input tensor element-wise. |
|
|
Returns a tensor with the truncated integer values of the elements of the input tensor. |
|
|
Computes the first input multiplied by the logarithm of second input element-wise. |
|
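Several pointwise entries above combine logarithms and exponentials, e.g. the logaddexp entry computes log(exp(a) + exp(b)). The numerically stable form factors out the larger argument so that exp() never overflows; a sketch of that standard identity, not the mint kernel:

```python
import math

def logaddexp(a, b):
    # log(exp(a) + exp(b)), computed without overflowing exp().
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + math.log1p(math.exp(lo - hi))
```

The naive form math.log(math.exp(1000.0) + math.exp(1000.0)) overflows, while the factored form returns 1000 + log 2 exactly.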
Reduction Operations
API Name |
Description |
Supported Platforms |
Tests if all elements in input evaluate to True. |
|
|
Compute the maximum value of all elements along the specified dimension. |
|
|
Compute the minimum value of all elements along the specified dimension. |
|
|
Check if |
|
|
Return the indices of the maximum values of a tensor. |
|
|
Return the indices of the minimum values of a tensor across a dimension. |
|
|
Count the number of non-zero elements in the tensor input on a given dimension dim. |
|
|
Calculate the logarithm of the sum of exponentiations of all elements along the specified dim dimension of the input tensor. |
|
|
Return the maximum value of the input tensor. |
|
|
Compute the mean of the tensor. |
|
|
Return the median(s) and indice(s) of the tensor along the specified dimension. |
|
|
Return the minimum value of the input tensor. |
|
|
Computes sum of input over a given dimension, treating NaNs as zero. |
|
|
Compute the matrix norm or vector norm of the tensor along a specified dimension. |
|
|
Multiply all elements of input. |
|
|
Calculate the standard deviation over specified dimension(s). |
|
|
Compute the standard deviation and mean of the tensor along a specified dimension. |
|
|
Calculate sum of all elements in tensor. |
|
|
Return the unique elements of input tensor. |
|
|
Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor. |
|
|
Calculate the variance over the dimensions specified by dim. |
|
|
Compute the variance and the mean of the tensor along a specified dimension. |
|
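The logsumexp reduction listed above (logarithm of the sum of exponentiations along a dimension) is conventionally computed with a max-shift for numerical stability. A 1-D sketch under that standard definition:

```python
import math

def logsumexp(values):
    # log(sum(exp(v))) over a 1-D sequence, shifted by the max for stability.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))
```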
Comparison Operations
API Name |
Description |
Supported Platforms |
Return whether each element of input is "close" to the corresponding element of other. |
|
|
Return the indices that sort the tensor along the specified dimension. |
|
|
Compute the equivalence of the two inputs element-wise. |
|
|
Check if two inputs are equal element-wise. |
|
|
Compute the value of \(input > other\) element-wise. |
|
|
Compute whether \(input >= other\) element-wise. |
|
|
Compute the value of \(input > other\) element-wise. |
|
|
Return a boolean tensor where two tensors are element-wise equal within a tolerance. |
|
|
Return whether each element in the input is a finite number. |
|
|
Return a boolean tensor indicating which elements are +/- infinity. |
|
|
Return whether each element in the input is a negative infinity number. |
|
|
Compute the value of \(input <= other\) element-wise. |
|
|
Compute the value of \(input < other\) element-wise. |
|
|
Compute the value of \(input <= other\) element-wise. |
|
|
Alias for |
|
|
Compute the maximum of the two input tensors element-wise. |
|
|
Compute the minimum of the two input tensors element-wise. |
|
|
Compute the non-equivalence of two inputs element-wise. |
|
|
Alias for |
|
|
Sort the elements of the input tensor along the given dimension. |
|
|
Find the k largest or smallest entries along a given dimension, and return the values and indices. |
|
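The tolerance test behind the isclose/allclose entries above is conventionally |input - other| <= atol + rtol * |other|. A scalar sketch; the default tolerances mirror common practice and are assumptions, not taken from the mint source:

```python
def isclose(a, b, rtol=1e-5, atol=1e-8):
    # Element-wise closeness test: |a - b| <= atol + rtol * |b|.
    return abs(a - b) <= atol + rtol * abs(b)
```

Note the asymmetry: the relative term is scaled by |b|, so isclose(a, b) and isclose(b, a) can differ for values near the tolerance boundary.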
BLAS and LAPACK Operations
API Name |
Description |
Supported Platforms |
Apply batch matrix multiplication to batch1 and batch2, with a reduced add step and add input to the result. |
|
|
Multiply matrix mat1 and matrix mat2. |
|
|
Performs a matrix-vector product of mat and vec, and add the input vector input to the final result. |
|
|
Perform a batch matrix-matrix product of matrices in batch1 and batch2 , input is added to the final result. |
|
|
Perform a batch matrix-matrix multiplication of two three-dimensional tensors. |
|
|
Compute the dot product of two 1D tensors. |
|
|
Compute the inverse of the input matrix. |
|
|
Return the matrix product of two tensors. |
|
|
Return the matrix product of two arrays. |
|
|
Multiply matrix input and vector vec. |
|
|
Compute outer product of two tensors. |
|
|
Solve a system of equations with a square upper or lower triangular invertible matrix A and multiple right-hand sides b. |
|
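For reference, the core product behind the mm/matmul/bmm entries above is the familiar triple loop, out[i][j] = sum over k of a[i][k] * b[k][j]. A minimal list-of-lists sketch of that definition:

```python
def mm(a, b):
    # Matrix product: out[i][j] = sum_k a[i][k] * b[k][j].
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]
```

bmm extends the same computation independently over a leading batch dimension; addmm/addbmm additionally scale and add an input tensor to the result.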
Other Operations
API Name |
Description |
Supported Platforms |
Count the occurrences of each value in the input. |
|
|
Broadcasts input tensor to a given shape. |
|
|
Compute p-norm distance between each pair of row vectors of two input tensors. |
|
|
Returns a copy of the input tensor. |
|
|
Compute the cross product of two input tensors along the specified dimension. |
|
|
Return the cumulative maximum values and their indices along the given dimension of the tensor. |
|
|
Return the cumulative minimum values and their indices along the given dimension of the tensor. |
|
|
Return the cumulative product along the given dimension of the tensor. |
|
|
Return the cumulative sum along the given dimension of the tensor. |
|
|
If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal. |
|
|
Computes the n-th forward difference along the given dimension. |
|
|
According to the Einstein summation convention (einsum), the product of the input tensor elements is summed along the specified dimension. |
|
|
Flatten a tensor along dimensions from start_dim to end_dim. |
|
|
Reverses elements in a tensor along the given dimensions. |
|
|
Compute the histogram of a tensor. |
|
|
Generate coordinate matrices from given coordinate tensors. |
|
|
Flatten the multidimensional Tensor into 1D along axis 0. |
|
|
Repeat elements of a tensor along an axis, like |
|
|
Roll the elements of a tensor along a dimension. |
|
|
Return the position indices where the elements can be inserted into the input tensor to maintain the increasing order of the input tensor. |
|
|
Return the sum of the elements along the diagonal of the input tensor. |
|
|
Zero the input tensor above the diagonal specified. |
|
|
Zero the input tensor below the diagonal specified. |
|
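The searchsorted entry above maps to the classic binary-search insertion point, which Python's bisect module expresses directly. A sketch of the contract for a 1-D sorted sequence (the right flag name mirrors common convention and is an assumption):

```python
import bisect

def searchsorted(sorted_seq, value, right=False):
    # Index at which `value` can be inserted while keeping `sorted_seq` sorted.
    if right:
        return bisect.bisect_right(sorted_seq, value)
    return bisect.bisect_left(sorted_seq, value)
```

With duplicates, the left variant returns the first valid position and the right variant the last, which is the usual distinction between the two modes.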
mindspore.mint.nn
Convolution Layers
API Name |
Description |
Supported Platforms |
1D convolution layer. |
|
|
2D convolution layer. |
|
|
3D convolution layer. |
|
|
Applies a 2D transposed convolution operator over an input image composed of several input planes. |
|
|
Combines an array of sliding local blocks into a large containing tensor. |
|
|
Extracts sliding local blocks from a batched input tensor into a column matrix. |
|
Pooling Layers
API Name |
Description |
Supported Platforms |
Apply a 1-D adaptive average pooling over an input signal composed of several input planes. |
|
|
Apply a 2-D adaptive average pooling over an input signal composed of several input planes. |
|
|
Apply a 3-D adaptive average pooling to an input signal composed of multiple input planes. |
|
|
Apply a 1-D adaptive max pooling over an input signal composed of several input planes. |
|
|
This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes. |
|
|
Apply a 2-D average pooling over an input tensor which can be regarded as a composition of 2-D input planes. |
|
|
Apply a 3-D average pooling over an input tensor which can be regarded as a composition of 3-D input planes. |
|
|
Compute the inverse of MaxPool2d. |
|
Padding Layers
API Name |
Description |
Supported Platforms |
Apply a 1-D constant padding to the last dimension of the input tensor using padding and value. |
|
|
Pad the last 2-D dimensions of input tensor using padding and value. |
|
|
Pad the last 3-D dimensions of input tensor using padding and value. |
|
|
Pad the last dimension of input tensor using the reflection of the input boundary. |
|
|
Pad the last 2-D dimensions of input tensor using the reflection of the input boundary. |
|
|
Pad the last 3-D dimensions of input tensor using the reflection of the input boundary. |
|
|
Pad the last dimension of input tensor using the replication of the input boundary. |
|
|
Pad the last 2-D dimensions of input tensor using the replication of the input boundary. |
|
|
Pad the last 3-D dimensions of input tensor using the replication of the input boundary. |
|
|
Pad the last dimension of input tensor with 0 using padding. |
|
|
Pad the last 2 dimensions of input tensor with 0 using padding. |
|
|
Pad the last 3 dimensions of input tensor with 0 using padding. |
|
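The constant padding layers above all follow the same shape rule: a (left, right) padding pair extends the last dimension on each side with a fill value. A 1-D sketch of that rule on a plain sequence:

```python
def constant_pad_1d(seq, padding, value=0):
    # (left, right) padding of the last dimension with a constant value.
    left, right = padding
    return [value] * left + list(seq) + [value] * right
```

The reflection and replication variants differ only in how the new border elements are chosen: mirrored from the interior, or copied from the edge, respectively.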
Non-linear Activations (weighted sum, nonlinearity)
API Name |
Description |
Supported Platforms |
Exponential Linear Unit activation function. |
|
|
Activation function GELU (Gaussian Error Linear Unit). |
|
|
Computes GLU (Gated Linear Unit activation function) of the input tensor. |
|
|
Applies Hard Shrink activation function element-wise. |
|
|
Applies the Hard Sigmoid activation function element-wise. |
|
|
Applies the Hard Swish activation function element-wise. |
|
|
Applies the LogSigmoid activation function element-wise. |
|
|
Applies the Log Softmax function to the input tensor on the specified axis. |
|
|
Compute MISH (A Self Regularized Non-Monotonic Neural Activation Function) activation function element-wise. |
|
|
Apply the element-wise PReLU function. |
|
|
Apply ReLU (rectified linear unit activation function) element-wise. |
|
|
Apply ReLU6 (rectified linear unit capped at 6) element-wise. |
|
|
Apply SELU (scaled exponential linear unit) element-wise. |
|
|
Apply sigmoid activation function element-wise. |
|
|
Calculate the SiLU activation function element-wise. |
|
|
Apply the Softmax function to an n-dimensional input tensor. |
|
|
Apply the Softshrink function element-wise. |
|
|
Apply the Tanh function element-wise. |
|
|
Compute the Threshold activation function element-wise. |
|
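Several of the activations above are simple closed-form maps. For example, ReLU6 and Hardswish are commonly defined as below; these are the standard formulas, not lifted from the mint source:

```python
def relu6(x):
    # min(max(x, 0), 6): ReLU capped at 6.
    return min(max(x, 0.0), 6.0)

def hardswish(x):
    # x * relu6(x + 3) / 6: a piecewise-polynomial approximation of SiLU.
    return x * relu6(x + 3.0) / 6.0
```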
Normalization Layers
API Name |
Description |
Supported Platforms |
Applies Batch Normalization over a 2D or 3D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift . |
|
|
Applies Batch Normalization over a 4D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift . |
|
|
Applies Batch Normalization over a 5D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift . |
|
|
Group normalization of mini-batch inputs. |
|
|
Layer normalization of the input mini-batch. |
|
|
Sync Batch Normalization layer over a N-dimension input. |
|
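The normalization layers above share one core step: shift to zero mean and scale to unit variance, with a small epsilon guarding the division. A 1-D sketch of that step (omitting the learnable affine parameters the real layers also apply):

```python
import math

def layer_norm(values, eps=1e-5):
    # Normalize to zero mean and unit variance over the last dimension.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / math.sqrt(var + eps) for v in values]
```

BatchNorm, GroupNorm, and LayerNorm differ mainly in which axes the mean and variance are computed over, not in the normalization formula itself.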
Linear Layers
API Name |
Description |
Supported Platforms |
A placeholder identity operator that returns the same as input. |
|
|
The linear connected layer. |
|
Dropout Layers
API Name |
Description |
Supported Platforms |
Dropout layer for the input. |
|
|
During training, randomly zeros some channels of the input tensor with probability p (For a 4-D tensor with a shape of \((N, C, H, W)\), the channel feature map refers to a 2-D feature map with the shape of \((H, W)\)). |
|
Sparse Layers
API Name |
Description |
Supported Platforms |
The value in input is used as the index, and the corresponding embedding vector is queried from weight . |
|
Loss Functions
API Name |
Description |
Supported Platforms |
Compute the binary cross entropy between the true labels and predicted labels. |
|
|
Add sigmoid activation function to input as logits, and use these logits to compute binary cross entropy between the logits and the target. |
|
|
CosineEmbeddingLoss creates a criterion to measure the similarity between two tensors using cosine distance. |
|
|
The cross entropy loss between input and target. |
|
|
Computes the Kullback-Leibler divergence between the input and the target. |
|
|
L1Loss is used to calculate the mean absolute error between the predicted value and the target value. |
|
|
Calculates the mean squared error between the predicted value and the label value. |
|
|
Gets the negative log likelihood loss between inputs and target. |
|
|
Computes smooth L1 loss, a robust L1 loss. |
|
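The binary cross entropy loss listed above follows the standard definition -(y*log(p) + (1-y)*log(1-p)), averaged over elements under the default mean reduction. A scalar sketch of that formula (without the weight and reduction options of the real layer):

```python
import math

def binary_cross_entropy(preds, targets):
    # Mean of -(y*log(p) + (1-y)*log(1-p)) over paired predictions and labels.
    total = sum(-(y * math.log(p) + (1 - y) * math.log(1 - p))
                for p, y in zip(preds, targets))
    return total / len(preds)
```

The WithLogits variant first applies sigmoid to raw scores, which is numerically safer than computing sigmoid and the log terms separately.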
Vision Layers
API Name |
Description |
Supported Platforms |
Rearrange elements in a tensor according to an upscaling factor. |
|
|
For details, please refer to |
|
mindspore.mint.nn.functional
Convolution functions
API Name |
Description |
Supported Platforms |
Applies a 1D convolution over an input tensor. |
|
|
Applies a 2D convolution over an input tensor. |
|
|
Applies a 3D convolution over an input tensor. |
|
|
Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called deconvolution (although it is not an actual deconvolution). |
|
|
Combines an array of sliding local blocks into a large containing tensor. |
|
|
Extracts sliding local blocks from a batched input tensor. |
|
Pooling functions
API Name |
Description |
Supported Platforms |
Performs 1D adaptive average pooling on a multi-plane input signal. |
|
|
Performs 2D adaptive average pooling on a multi-plane input signal. |
|
|
Performs 3D adaptive average pooling on a multi-plane input signal. |
|
|
Performs 1D adaptive max pooling on a multi-plane input signal. |
|
|
This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes. |
|
|
Applies a 1D average pooling over an input Tensor which can be regarded as a composition of 1D input planes. |
|
|
Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. |
|
|
Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes. |
|
|
Performs a 2D max pooling on the input Tensor. |
|
|
Computes the inverse of max_pool2d. |
|
Non-linear activation functions
API Name |
Description |
Supported Platforms |
Batch Normalization for input data and updated parameters. |
|
|
Exponential Linear Unit activation function. |
|
|
Exponential Linear Unit activation function. |
|
|
Gaussian Error Linear Units activation function. |
|
|
Computes GLU (Gated Linear Unit activation function) of the input tensor. |
|
|
Group Normalization over a mini-batch of inputs. |
|
|
HardShrink activation function. |
|
|
HardSigmoid activation function. |
|
|
HardSwish activation function. |
|
|
Applies the Layer Normalization to the mini-batch input. |
|
|
leaky_relu activation function. |
|
|
Applies the Log Softmax function to the input tensor on the specified axis. |
|
|
Applies logsigmoid activation element-wise. |
|
|
Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise. |
|
|
Perform normalization of inputs over the specified dimension. |
|
|
Parametric Rectified Linear Unit activation function. |
|
|
Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise. |
|
|
Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise, in-place. |
|
|
Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise. |
|
|
Activation function SELU (Scaled exponential Linear Unit). |
|
|
Computes Sigmoid of input element-wise. |
|
|
Computes Sigmoid Linear Unit of input element-wise. |
|
|
Applies the Softmax operation to the input tensor on the specified axis. |
|
|
Applies softplus function to input element-wise. |
|
|
Soft Shrink activation function. |
|
|
Computes hyperbolic tangent of input tensor element-wise. |
|
|
Compute the Threshold activation function element-wise. |
|
|
Update the input tensor in-place by computing the Threshold activation function element-wise. |
|
Linear functions
API Name |
Description |
Supported Platforms |
Applies the dense connected operation to the input. |
|
Dropout functions
API Name |
Description |
Supported Platforms |
During training, randomly zeroes some of the elements of the input tensor with probability p from a Bernoulli distribution. |
|
|
During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (For a 4-dimensional tensor with a shape of \((N, C, H, W)\), the channel feature map refers to a 2-dimensional feature map with the shape of \((H, W)\)). |
|
Sparse functions
API Name |
Description |
Supported Platforms |
Retrieve the word embeddings in weight using indices specified in input. |
|
|
Computes a one-hot tensor. |
|
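The one_hot entry above can be sketched directly: each index becomes a row with a single 1 at that position. An illustrative helper, not the mint signature:

```python
def one_hot(indices, num_classes):
    # Each index i becomes a row with 1 at position i and 0 elsewhere.
    return [[1 if j == i else 0 for j in range(num_classes)] for i in indices]
```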
Loss Functions
API Name |
Description |
Supported Platforms |
Computes the binary cross entropy (Measure the difference information between two probability distributions) between predictive value input and target value target. |
|
|
Adds sigmoid activation function to input as logits, and uses these logits to compute binary cross entropy between the logits and the target. |
|
|
Creates a criterion to measure the similarity between two tensors using cosine distance. |
|
|
The cross entropy loss between input and target. |
|
|
Computes the Kullback-Leibler divergence between the input and the target. |
|
|
Calculate the mean absolute error between the input value and the target value. |
|
|
Calculates the mean squared error between the predicted value and the label value. |
|
|
Gets the negative log likelihood loss between input and target. |
|
|
Computes smooth L1 loss, a robust L1 loss. |
|
Vision functions
API Name |
Description |
Supported Platforms |
Given an input and a flow-field grid, computes the output using input values and pixel locations from grid. |
|
|
Samples the input Tensor to the given size or scale_factor by using one of the interpolate algorithms. |
|
|
Pads the input tensor according to the pad. |
|
|
Rearrange elements in a tensor according to an upscaling factor. |
|
mindspore.mint.optim
Algorithms
API Name |
Description |
Supported Platforms |
Implements Adaptive Moment Estimation (Adam) algorithm. |
|
|
Implements Adam Weight Decay algorithm. |
|
|
Implements Adam Weight Decay algorithm. |
|
|
Stochastic Gradient Descent optimizer. |
|
mindspore.mint.linalg
Matrix Properties
API Name |
Description |
Supported Platforms |
Returns the matrix norm of a given tensor on the specified dimensions. |
|
|
Returns the matrix norm or vector norm of a given tensor. |
|
|
Returns the vector norm of the given tensor on the specified dimensions. |
|
Decompositions
API Name |
Description |
Supported Platforms |
Orthogonal decomposition of the input \(A = QR\). |
|
Inverses
API Name |
Description |
Supported Platforms |
Compute the inverse of the input matrix. |
|
mindspore.mint.special
Pointwise Operations
API Name |
Description |
Supported Platforms |
Compute the complementary error function of input tensor element-wise. |
|
|
Calculates the base-2 exponent of the tensor input element by element. |
|
|
Compute exponential of the input tensor, then minus 1, element-wise. |
|
|
Compute the natural logarithm of (tensor + 1) element-wise. |
|
|
Applies the Log Softmax function to the input tensor on the specified axis. |
|
|
Returns half to even of a tensor element-wise. |
|
|
Compute the normalized sinc of input. |
|
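The normalized sinc listed above is sin(pi*x)/(pi*x), with the removable singularity at x = 0 defined as 1. A sketch of that standard definition:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) defined as 1.
    if x == 0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)
```

By this definition sinc vanishes at every nonzero integer, which is what makes it useful as an interpolation kernel.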
mindspore.mint.distributed
API Name |
Description |
Supported Platforms |
Gathers tensors from the specified communication group and returns the tensor list which is all gathered. |
|
|
Gathers tensors from the specified communication group and returns the tensor which is all gathered. |
|
|
Gathers and concatenates tensors across devices with uneven first dimensions. |
|
|
Aggregates Python objects in a specified communication group. |
|
|
Reduce tensors across all devices in such a way that all devices will get the same final result; returns the tensor which is all reduced. |
|
|
Scatter and gather lists of tensors to/from all ranks according to the input/output tensor lists. |
|
|
Scatter and gather the input with split sizes to/from all ranks, and return the result in a single tensor. |
|
|
Synchronizes all processes in the specified group. |
|
|
Batch send and receive tensors asynchronously. |
|
|
Broadcasts the tensor to the whole group. |
|
|
Broadcasts the entire group of input Python objects. |
|
|
Destroy the specified collective communication group. |
|
|
Gathers tensors from the specified communication group. |
|
|
Gathers python objects from the whole group in a single process. |
|
|
Get the backend of communication process groups. |
|
|
Return the global (world) rank ID that corresponds to the given rank in the specified communication group. |
|
|
Get the rank ID in the specified communication group that corresponds to the given global (world) rank. |
|
|
Gets the ranks of the specific group and returns the process ranks in the communication group as a list. |
|
|
Get the rank ID for the current device in the specified collective communication group. |
|
|
Get the rank size of the specified collective communication group. |
|
|
Initialize the collective communication library and create a default collective communication group. |
|
|
Receive the tensor from the specified source rank asynchronously. |
|
|
Send the tensor to the specified destination rank asynchronously. |
|
|
Check whether the distributed module is available. |
|
|
Check whether the default process group has been initialized. |
|
|
Create a new distributed group. |
|
|
Object for batch_isend_irecv input, to store information of |
|
|
Receive the tensor from the specified source rank. |
|
|
Receive picklable objects from the specified source rank synchronously. |
|
|
Reduces tensors across the processes in the specified communication group, sends the result to the target dst(global rank), and returns the tensor which is sent to the target process. |
|
|
Reduces and scatters tensors from the specified communication group and returns the tensor which is reduced and scattered. |
|
|
Reduces and scatters tensors from the specified communication group and returns the tensor which is reduced and scattered. |
|
|
Reduce tensors from the specified communication group and scatter to the output tensor according to input_split_sizes. |
|
|
Scatter a tensor evenly across the processes in the specified communication group. |
|
|
Scatters picklable objects in scatter_object_input_list to the whole group. |
|
|
Send the tensor to the specified destination rank. |
|
|
Send picklable objects to the specified destination rank synchronously. |
|
|
A TCP-based distributed key-value store implementation. |
|