mindspore.nn

Neural network cells.

Pre-defined building blocks or computing units to construct neural networks.

For information about the mindspore.nn operators added or deleted compared with the previous version, and about changes to their supported platforms, please refer to https://gitee.com/mindspore/docs/blob/r1.6/resource/api_updates/nn_api_updates.md.

Cell

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Cell | Base class for all neural networks. | Ascend GPU CPU |
| mindspore.nn.GraphCell | Base class for running the graph loaded from a MindIR. | Ascend GPU CPU |
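
A network is typically built by subclassing Cell and defining the forward computation in its construct method. A minimal sketch (the layer sizes are illustrative, not part of the API):

import mindspore.nn as nn

class MyNet(nn.Cell):
    def __init__(self):
        super(MyNet, self).__init__()
        self.dense = nn.Dense(16, 8)   # fully connected layer
        self.relu = nn.ReLU()

    def construct(self, x):           # defines the forward computation
        return self.relu(self.dense(x))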

Containers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.CellList | Holds Cells in a list. | Ascend GPU CPU |
| mindspore.nn.SequentialCell | Sequential cell container. | Ascend GPU CPU |
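
SequentialCell chains cells so the output of each feeds the next, while CellList only stores cells and leaves the call order to construct. A minimal sketch with arbitrary layer sizes:

import mindspore.nn as nn

# Cells are executed in order when the container is called.
seq = nn.SequentialCell([nn.Dense(16, 8), nn.ReLU(), nn.Dense(8, 2)])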

Convolution Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Conv1d | 1D convolution layer. | Ascend GPU CPU |
| mindspore.nn.Conv1dTranspose | 1D transposed convolution layer. | Ascend GPU CPU |
| mindspore.nn.Conv2d | 2D convolution layer. | Ascend GPU CPU |
| mindspore.nn.Conv2dTranspose | 2D transposed convolution layer. | Ascend GPU CPU |
| mindspore.nn.Conv3d | 3D convolution layer. | Ascend GPU |
| mindspore.nn.Conv3dTranspose | Computes a 3D transposed convolution, also known as a deconvolution (although it is not an actual deconvolution). | Ascend GPU |
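
Convolution layers expect NCHW inputs. A short sketch with arbitrary channel counts:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
x = Tensor(np.ones([1, 3, 32, 32]), ms.float32)   # NCHW input
y = conv(x)   # the default pad_mode='same' keeps the 32x32 spatial size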

Gradient

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Jvp | Computes the Jacobian-vector product of the given fn. | Ascend GPU CPU |
| mindspore.nn.Vjp | Computes the dot product between a vector v and the Jacobian of the given fn at the point given by the inputs. | Ascend GPU CPU |
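
A sketch of Jvp, assuming the calling convention in which nn.Jvp wraps a cell and is called with the inputs followed by the tangent vector, returning the forward value together with the Jacobian-vector product; check the Jvp API page for the exact signature:

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

class Cube(nn.Cell):
    def construct(self, x):
        return x ** 3

x = Tensor(np.array([1.0, 2.0], np.float32))
v = Tensor(np.array([1.0, 1.0], np.float32))   # tangent vector
out, jvp = nn.Jvp(Cube())(x, v)                # forward value and Jacobian-vector product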

Recurrent Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.GRUCell | A GRU (Gated Recurrent Unit) cell. | Ascend GPU CPU |
| mindspore.nn.GRU | Stacked GRU (Gated Recurrent Unit) layers. | Ascend GPU CPU |
| mindspore.nn.LSTMCell | An LSTM (Long Short-Term Memory) cell. | Ascend GPU CPU |
| mindspore.nn.LSTM | Stacked LSTM (Long Short-Term Memory) layers. | Ascend GPU CPU |
| mindspore.nn.RNNCell | An Elman RNN cell with tanh or ReLU non-linearity. | Ascend GPU CPU |
| mindspore.nn.RNN | Stacked Elman RNN layers. | Ascend GPU CPU |
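
The layer variants (GRU, LSTM, RNN) consume a whole sequence and an initial state. A minimal LSTM sketch with illustrative sizes:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

lstm = nn.LSTM(input_size=10, hidden_size=16, num_layers=1, batch_first=True)
x = Tensor(np.ones([4, 5, 10]), ms.float32)    # (batch, seq_len, input_size)
h0 = Tensor(np.zeros([1, 4, 16]), ms.float32)  # (num_layers, batch, hidden_size)
c0 = Tensor(np.zeros([1, 4, 16]), ms.float32)
output, (hn, cn) = lstm(x, (h0, c0))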

Sparse Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Embedding | A simple lookup table that stores embeddings of a fixed dictionary and size. | Ascend GPU CPU |
| mindspore.nn.EmbeddingLookup | Returns a slice of the input tensor based on the specified indices. | Ascend GPU CPU |
| mindspore.nn.MultiFieldEmbeddingLookup | Returns a slice of the input tensor based on the specified indices and field IDs. | Ascend GPU |
| mindspore.nn.SparseToDense | Converts a sparse tensor into a dense tensor. | CPU |
| mindspore.nn.SparseTensorDenseMatmul | Multiplies sparse matrix a by dense matrix b. | CPU |
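
A minimal Embedding sketch (vocabulary and embedding sizes are illustrative); the indices must be an integer tensor:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

embed = nn.Embedding(vocab_size=2000, embedding_size=16)
ids = Tensor(np.array([[1, 5, 7], [2, 4, 6]]), ms.int32)  # token indices
vectors = embed(ids)   # shape (2, 3, 16)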

Non-linear Activations

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.CELU | Continuously differentiable exponential linear units activation function. | Ascend |
| mindspore.nn.ELU | Exponential Linear Unit activation function. | Ascend GPU CPU |
| mindspore.nn.FastGelu | Fast Gaussian error linear unit activation function. | Ascend |
| mindspore.nn.GELU | Gaussian error linear unit activation function. | Ascend GPU CPU |
| mindspore.nn.get_activation | Gets the activation function. | Ascend GPU CPU |
| mindspore.nn.HShrink | Applies the hard shrinkage function element-wise. | Ascend |
| mindspore.nn.HSigmoid | Hard sigmoid activation function. | Ascend GPU CPU |
| mindspore.nn.HSwish | Hard swish activation function. | GPU CPU |
| mindspore.nn.LeakyReLU | Leaky ReLU activation function. | Ascend GPU CPU |
| mindspore.nn.LogSigmoid | Log-sigmoid activation function. | Ascend GPU |
| mindspore.nn.LogSoftmax | LogSoftmax activation function. | Ascend GPU CPU |
| mindspore.nn.PReLU | PReLU activation function. | Ascend GPU |
| mindspore.nn.ReLU | Rectified Linear Unit activation function. | Ascend GPU CPU |
| mindspore.nn.ReLU6 | ReLU6 activation function. | Ascend GPU CPU |
| mindspore.nn.Sigmoid | Sigmoid activation function. | Ascend GPU CPU |
| mindspore.nn.Softmax | Softmax activation function. | Ascend GPU CPU |
| mindspore.nn.SoftShrink | Applies the SoftShrink function element-wise. | Ascend |
| mindspore.nn.Tanh | Tanh activation function. | Ascend GPU CPU |
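
Activations are ordinary cells; they can be instantiated directly or looked up by name with get_activation. A short sketch:

import mindspore.nn as nn

relu = nn.ReLU()                       # instantiate directly
same_relu = nn.get_activation('relu')  # or look the cell up by name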

Utilities

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.ClipByNorm | Clips tensor values to a maximum \(L_2\)-norm. | Ascend GPU CPU |
| mindspore.nn.Dense | The fully connected (dense) layer. | Ascend GPU CPU |
| mindspore.nn.Dropout | Dropout layer for the input. | Ascend GPU CPU |
| mindspore.nn.Flatten | Flatten layer for the input. | Ascend GPU CPU |
| mindspore.nn.L1Regularizer | Applies L1 regularization to weights. | Ascend GPU CPU |
| mindspore.nn.Norm | Computes the norm of vectors, currently including the Euclidean norm, i.e., the \(L_2\)-norm. | Ascend GPU CPU |
| mindspore.nn.OneHot | Returns a one-hot tensor. | Ascend GPU CPU |
| mindspore.nn.Pad | Pads the input tensor according to the paddings and mode. | Ascend GPU CPU |
| mindspore.nn.Range | Creates a sequence of numbers in the range [start, limit) with step size delta. | Ascend GPU CPU |
| mindspore.nn.ResizeBilinear | Resamples the input tensor to the given size or scale_factor using bilinear interpolation. | Ascend GPU CPU |
| mindspore.nn.Roll | Rolls the elements of a tensor along an axis. | Ascend |
| mindspore.nn.Tril | Returns a tensor with elements above the kth diagonal zeroed. | Ascend GPU CPU |
| mindspore.nn.Triu | Returns a tensor with elements below the kth diagonal zeroed. | Ascend GPU CPU |
| mindspore.nn.Unfold | Extracts patches from images. | Ascend GPU |
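
A short sketch combining Flatten, Dense, and Dropout (shapes are illustrative; note that Dropout takes a keep probability, not a drop probability):

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

flatten = nn.Flatten()
dense = nn.Dense(3 * 32 * 32, 10)
dropout = nn.Dropout(keep_prob=0.8)   # randomly zeroes inputs in training; identity in eval mode

x = Tensor(np.ones([4, 3, 32, 32]), ms.float32)
y = dense(flatten(x))   # flatten to (4, 3072), then project to (4, 10)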

Image Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.CentralCrop | Crops the central region of the images with the central_fraction. | Ascend GPU CPU |
| mindspore.nn.ImageGradients | Computes the image gradients, returning two tensors: one along the height dimension and one along the width dimension. | Ascend GPU CPU |
| mindspore.nn.MSSSIM | Returns the MS-SSIM index between two images. | Ascend GPU |
| mindspore.nn.PSNR | Returns the Peak Signal-to-Noise Ratio of two image batches. | Ascend GPU CPU |
| mindspore.nn.SSIM | Returns the SSIM index between two images. | Ascend GPU CPU |
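
A PSNR sketch; the two inputs are image batches in NCHW layout, and max_val is the dynamic range of the pixel values:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

psnr = nn.PSNR(max_val=1.0)
img1 = Tensor(np.random.rand(1, 3, 16, 16), ms.float32)
img2 = Tensor(np.random.rand(1, 3, 16, 16), ms.float32)
score = psnr(img1, img2)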

Normalization Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.BatchNorm1d | Batch Normalization layer over a 2D input. | Ascend GPU CPU |
| mindspore.nn.BatchNorm2d | Batch Normalization layer over a 4D input. | Ascend GPU CPU |
| mindspore.nn.BatchNorm3d | Batch Normalization layer over a 5D input. | Ascend GPU CPU |
| mindspore.nn.GlobalBatchNorm | The GlobalBatchNorm interface is deprecated; please use mindspore.nn.SyncBatchNorm instead. | deprecated |
| mindspore.nn.GroupNorm | Group Normalization over a mini-batch of inputs. | Ascend GPU CPU |
| mindspore.nn.InstanceNorm2d | Instance Normalization layer over a 4D input. | GPU |
| mindspore.nn.LayerNorm | Applies Layer Normalization over a mini-batch of inputs. | Ascend GPU CPU |
| mindspore.nn.MatrixDiag | Returns a batched diagonal tensor with given batched diagonal values. | Ascend |
| mindspore.nn.MatrixDiagPart | Returns the batched diagonal part of a batched tensor. | Ascend |
| mindspore.nn.MatrixSetDiag | Modifies the batched diagonal part of a batched tensor. | Ascend |
| mindspore.nn.SyncBatchNorm | Sync Batch Normalization layer over an N-dimensional input. | Ascend |
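
A BatchNorm2d sketch; num_features must match the channel dimension of the NCHW input:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

bn = nn.BatchNorm2d(num_features=16)   # one gamma/beta pair per channel
x = Tensor(np.ones([4, 16, 8, 8]), ms.float32)
y = bn(x)   # running statistics are updated only in training mode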

Pooling Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.AvgPool1d | 1D average pooling for temporal data. | Ascend GPU CPU |
| mindspore.nn.AvgPool2d | 2D average pooling for spatial data. | Ascend GPU CPU |
| mindspore.nn.MaxPool1d | 1D max pooling operation for temporal data. | Ascend GPU CPU |
| mindspore.nn.MaxPool2d | 2D max pooling operation for spatial data. | Ascend GPU CPU |
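
A MaxPool2d sketch on an NCHW input:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

pool = nn.MaxPool2d(kernel_size=2, stride=2)
x = Tensor(np.ones([1, 3, 32, 32]), ms.float32)
y = pool(x)   # spatial size halves to 16x16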

Quantized Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.ActQuant | Quantization-aware training activation function. | Ascend GPU |
| mindspore.nn.Conv2dBnAct | A combination of convolution, batch normalization, and activation layers. | Ascend GPU CPU |
| mindspore.nn.Conv2dBnFoldQuant | 2D convolution with the Batch Normalization operation folded into the construct. | Ascend GPU |
| mindspore.nn.Conv2dBnFoldQuantOneConv | 2D convolution which uses the convolution layer statistics once to calculate the folded Batch Normalization construct. | Ascend GPU |
| mindspore.nn.Conv2dBnWithoutFoldQuant | 2D convolution and batch normalization without folding, with a fake-quantized construct. | Ascend GPU |
| mindspore.nn.Conv2dQuant | 2D convolution with a fake-quantized operation layer. | Ascend GPU |
| mindspore.nn.DenseBnAct | A combination of Dense, batch normalization, and activation layers. | Ascend GPU CPU |
| mindspore.nn.DenseQuant | The fully connected layer with fake-quantized operation. | Ascend GPU |
| mindspore.nn.FakeQuantWithMinMaxObserver | Quantization-aware operation which provides the fake quantization observer function on data with min and max. | Ascend GPU |
| mindspore.nn.MulQuant | Adds a fake-quantized operation after the Mul operation. | Ascend GPU |
| mindspore.nn.TensorAddQuant | Adds a fake-quantized operation after the TensorAdd operation. | Ascend GPU |
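
A sketch of the combined (non-quantized) building block Conv2dBnAct, assuming the has_bn and activation keyword parameters; check the API page before relying on the exact signature:

import mindspore.nn as nn

# Convolution, batch normalization, and activation fused into one cell.
block = nn.Conv2dBnAct(3, 16, kernel_size=3, has_bn=True, activation='relu')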

Loss Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.BCELoss | BCELoss creates a criterion to measure the binary cross entropy between the true labels and predicted labels. | Ascend GPU CPU |
| mindspore.nn.BCEWithLogitsLoss | Applies a sigmoid activation to the input logits, then computes binary cross entropy between the logits and the labels. | Ascend GPU |
| mindspore.nn.CosineEmbeddingLoss | CosineEmbeddingLoss creates a criterion to measure the similarity between two tensors using cosine distance. | Ascend GPU CPU |
| mindspore.nn.DiceLoss | The Dice coefficient is a set similarity loss. | Ascend GPU CPU |
| mindspore.nn.FocalLoss | The focal loss function, proposed by Kaiming He's team in the paper Focal Loss for Dense Object Detection, improves the effect of image object detection. | Ascend |
| mindspore.nn.L1Loss | L1Loss creates a criterion to measure the mean absolute error (MAE) between \(x\) and \(y\) element-wise, where \(x\) is the input Tensor and \(y\) is the labels Tensor. | Ascend GPU CPU |
| mindspore.nn.LossBase | Base class for other losses. | Ascend GPU CPU |
| mindspore.nn.MSELoss | MSELoss creates a criterion to measure the mean squared error (squared \(L_2\)-norm) between \(x\) and \(y\) element-wise, where \(x\) is the input and \(y\) is the labels. | Ascend GPU CPU |
| mindspore.nn.MultiClassDiceLoss | When there are multiple classes, the label is transformed into multiple binary classifications by one-hot encoding. | Ascend GPU CPU |
| mindspore.nn.RMSELoss | RMSELoss creates a criterion to measure the root mean square error between \(x\) and \(y\) element-wise, where \(x\) is the input and \(y\) is the labels. | Ascend GPU CPU |
| mindspore.nn.SampledSoftmaxLoss | Computes the sampled softmax training loss. | GPU |
| mindspore.nn.SmoothL1Loss | A loss class for learning region proposals. | Ascend GPU CPU |
| mindspore.nn.SoftMarginLoss | A loss class for two-class classification problems. | Ascend |
| mindspore.nn.SoftmaxCrossEntropyWithLogits | Computes softmax cross entropy between logits and labels. | Ascend GPU CPU |
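
A SoftmaxCrossEntropyWithLogits sketch; with sparse=True the labels are class indices rather than one-hot vectors:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
logits = Tensor(np.random.rand(4, 10), ms.float32)   # (batch, num_classes)
labels = Tensor(np.array([1, 0, 3, 9]), ms.int32)    # class indices
loss = loss_fn(logits, labels)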

Optimizer Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Adagrad | Implements the Adagrad algorithm with the ApplyAdagrad operator. | Ascend GPU CPU |
| mindspore.nn.Adam | Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. | Ascend GPU CPU |
| mindspore.nn.AdamOffload | Offloads the Adam optimizer to the host CPU while keeping the parameters updated on the device, to minimize memory cost. | Ascend GPU CPU |
| mindspore.nn.AdamWeightDecay | Implements the Adam algorithm with weight decay. | Ascend GPU CPU |
| mindspore.nn.ASGD | Implements Average Stochastic Gradient Descent. | Ascend GPU CPU |
| mindspore.nn.FTRL | Implements the FTRL algorithm with the ApplyFtrl operator. | Ascend GPU |
| mindspore.nn.Lamb | An optimizer that implements the Lamb (Layer-wise Adaptive Moments optimizer for Batch training) algorithm. | Ascend GPU |
| mindspore.nn.LARS | Implements the LARS algorithm with the LARSUpdate operator. | Ascend |
| mindspore.nn.LazyAdam | Updates gradients by the Adaptive Moment Estimation (Adam) algorithm, applying a sparse strategy when the gradients are sparse. | Ascend GPU CPU |
| mindspore.nn.Momentum | An optimizer that implements the Momentum algorithm. | Ascend GPU CPU |
| mindspore.nn.Optimizer | Base class for updating parameters. | Ascend GPU CPU |
| mindspore.nn.ProximalAdagrad | Implements the ProximalAdagrad algorithm with the ApplyProximalAdagrad operator. | Ascend |
| mindspore.nn.RMSProp | Implements the Root Mean Squared Propagation (RMSProp) algorithm. | Ascend GPU CPU |
| mindspore.nn.Rprop | Implements Resilient Backpropagation (Rprop). | Ascend GPU CPU |
| mindspore.nn.SGD | Implements stochastic gradient descent. | Ascend GPU CPU |
| mindspore.nn.thor | Updates gradients by the second-order algorithm THOR. | Ascend GPU |
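
A typical optimizer construction, reusing the illustrative MyNet from the Cell section (the learning rate and momentum values are arbitrary):

import mindspore.nn as nn

net = MyNet()   # the illustrative network defined in the Cell section
optim = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)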

Wrapper Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.DistributedGradReducer | A distributed gradient reducer for data-parallel training. | Ascend GPU |
| mindspore.nn.DynamicLossScaleUpdateCell | Dynamic loss scale update cell. | Ascend GPU |
| mindspore.nn.FixedLossScaleUpdateCell | Update cell with a fixed loss scaling value. | Ascend GPU |
| mindspore.nn.ForwardValueAndGrad | Encapsulates the training network and computes the forward value together with the gradients. | Ascend GPU CPU |
| mindspore.nn.GetNextSingleOp | Cell to run for getting the next operation. | Ascend GPU |
| mindspore.nn.MicroBatchInterleaved | Wraps the network so that the input batch is split into interleaved micro-batches. | Ascend GPU |
| mindspore.nn.ParameterUpdate | Cell that updates a parameter. | Ascend GPU CPU |
| mindspore.nn.PipelineCell | Wraps the network with micro-batches for pipeline parallelism. | Ascend GPU |
| mindspore.nn.TimeDistributed | The time distributed layer. | Ascend GPU CPU |
| mindspore.nn.TrainOneStepCell | Network training package class. | Ascend GPU CPU |
| mindspore.nn.TrainOneStepWithLossScaleCell | Network training with loss scaling. | Ascend GPU |
| mindspore.nn.WithEvalCell | Wraps the forward network with the loss function. | Ascend GPU CPU |
| mindspore.nn.WithGradCell | Cell that returns the gradients. | Ascend GPU CPU |
| mindspore.nn.WithLossCell | Cell with a loss function. | Ascend GPU CPU |
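
A sketch of the standard training wrapper stack, reusing the illustrative MyNet (16 inputs, 8 outputs) from the Cell section:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

net = MyNet()
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_with_loss = nn.WithLossCell(net, loss_fn)
optim = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
train_step = nn.TrainOneStepCell(net_with_loss, optim)
train_step.set_train()

data = Tensor(np.random.rand(4, 16), ms.float32)
label = Tensor(np.array([0, 1, 2, 3]), ms.int32)
loss = train_step(data, label)   # one forward/backward/update step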

Math Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.MatMul | The nn.MatMul interface is deprecated; please use mindspore.ops.matmul instead. | deprecated |
| mindspore.nn.Moments | Calculates the mean and variance of x. | Ascend GPU CPU |
| mindspore.nn.ReduceLogSumExp | Reduces a dimension of a tensor by calculating the exponential of all elements in the dimension and then the logarithm of the sum. | Ascend GPU CPU |
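
A Moments sketch; the cell returns the mean and variance along the given axis:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

moments = nn.Moments(axis=0, keep_dims=False)
x = Tensor(np.array([[1., 2.], [3., 4.]]), ms.float32)
mean, variance = moments(x)   # per-column mean and variance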

Metrics

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Accuracy | Calculates the accuracy for classification and multilabel data. | Ascend GPU CPU |
| mindspore.nn.auc | Computes the AUC (Area Under the Curve) using the trapezoidal rule. | Ascend GPU CPU |
| mindspore.nn.BleuScore | Calculates the BLEU score. | Ascend GPU CPU |
| mindspore.nn.ConfusionMatrix | Computes the confusion matrix, which is commonly used to evaluate the performance of classification models, including binary and multi-class classification. | Ascend GPU CPU |
| mindspore.nn.ConfusionMatrixMetric | Computes metrics related to the confusion matrix. | Ascend GPU CPU |
| mindspore.nn.CosineSimilarity | Computes representation similarity. | Ascend GPU CPU |
| mindspore.nn.Dice | The Dice coefficient is a set similarity metric. | Ascend GPU CPU |
| mindspore.nn.F1 | Calculates the F1 score. | To Be Developed |
| mindspore.nn.Fbeta | Calculates the F-beta score. | Ascend GPU CPU |
| mindspore.nn.HausdorffDistance | Calculates the Hausdorff distance. | Ascend GPU CPU |
| mindspore.nn.get_metric_fn | Gets the metric method based on the input name. | To Be Developed |
| mindspore.nn.Loss | Calculates the average of the loss. | Ascend GPU CPU |
| mindspore.nn.MAE | Calculates the mean absolute error (MAE). | Ascend GPU CPU |
| mindspore.nn.MeanSurfaceDistance | Computes the Average Surface Distance from y_pred to y under the default setting. | Ascend GPU CPU |
| mindspore.nn.Metric | Base class of metrics. | To Be Developed |
| mindspore.nn.MSE | Measures the mean squared error (MSE). | Ascend GPU CPU |
| mindspore.nn.names | Gets all names of the metric methods. | To Be Developed |
| mindspore.nn.OcclusionSensitivity | Calculates the occlusion sensitivity of the model for a given image. | Ascend GPU CPU |
| mindspore.nn.Perplexity | Computes perplexity. | Ascend GPU CPU |
| mindspore.nn.Precision | Calculates precision for classification and multilabel data. | Ascend GPU CPU |
| mindspore.nn.Recall | Calculates recall for classification and multilabel data. | Ascend GPU CPU |
| mindspore.nn.ROC | Calculates the ROC curve. | Ascend GPU CPU |
| mindspore.nn.RootMeanSquareDistance | Computes the Root Mean Square Surface Distance from y_pred to y under the default setting. | Ascend GPU CPU |
| mindspore.nn.rearrange_inputs | This decorator rearranges the inputs according to the indexes attribute of the class. | To Be Developed |
| mindspore.nn.Top1CategoricalAccuracy | Calculates the top-1 categorical accuracy. | Ascend GPU CPU |
| mindspore.nn.Top5CategoricalAccuracy | Calculates the top-5 categorical accuracy. | Ascend GPU CPU |
| mindspore.nn.TopKCategoricalAccuracy | Calculates the top-k categorical accuracy. | Ascend GPU CPU |
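
Metrics share a clear/update/eval protocol. A minimal Accuracy sketch with illustrative predictions:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore import Tensor

metric = nn.Accuracy('classification')
metric.clear()
y_pred = Tensor(np.array([[0.2, 0.5, 0.3], [0.1, 0.1, 0.8]]), ms.float32)
y = Tensor(np.array([1, 2]), ms.int32)
metric.update(y_pred, y)
acc = metric.eval()   # 1.0: both samples predicted correctly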

Dynamic Learning Rate

LearningRateSchedule

The dynamic learning rates in this module are all subclasses of LearningRateSchedule. Pass an instance of LearningRateSchedule to an optimizer; during training, the optimizer calls the instance with the current step as input to get the current learning rate.

import mindspore.nn as nn

# A minimal user-defined network; any nn.Cell subclass works here.
class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.dense = nn.Dense(3, 4)

    def construct(self, x):
        return self.dense(x)

min_lr = 0.01
max_lr = 0.1
decay_steps = 4
cosine_decay_lr = nn.CosineDecayLR(min_lr, max_lr, decay_steps)

net = Net()
optim = nn.Momentum(net.trainable_params(), learning_rate=cosine_decay_lr, momentum=0.9)

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.CosineDecayLR | Calculates the learning rate based on the cosine decay function. | Ascend GPU |
| mindspore.nn.ExponentialDecayLR | Calculates the learning rate based on the exponential decay function. | Ascend GPU CPU |
| mindspore.nn.InverseDecayLR | Calculates the learning rate based on the inverse-time decay function. | Ascend GPU CPU |
| mindspore.nn.NaturalExpDecayLR | Calculates the learning rate based on the natural exponential decay function. | Ascend GPU CPU |
| mindspore.nn.PolynomialDecayLR | Calculates the learning rate based on the polynomial decay function. | Ascend GPU |
| mindspore.nn.WarmUpLR | Gets the warm-up learning rate. | Ascend GPU |

Dynamic LR

The dynamic learning rates in this module are all functions. Call the function and pass the resulting list to an optimizer; during training, the optimizer takes result[current_step] as the current learning rate.

import mindspore.nn as nn

min_lr = 0.01
max_lr = 0.1
total_step = 6
step_per_epoch = 1
decay_epoch = 4

# cosine_decay_lr returns a list with one learning rate per step.
lr = nn.cosine_decay_lr(min_lr, max_lr, total_step, step_per_epoch, decay_epoch)

net = Net()  # Net is the user-defined network from the example above
optim = nn.Momentum(net.trainable_params(), learning_rate=lr, momentum=0.9)

| API Name | Description |
| --- | --- |
| mindspore.nn.cosine_decay_lr | Calculates the learning rate based on the cosine decay function. |
| mindspore.nn.exponential_decay_lr | Calculates the learning rate based on the exponential decay function. |
| mindspore.nn.inverse_decay_lr | Calculates the learning rate based on the inverse-time decay function. |
| mindspore.nn.natural_exp_decay_lr | Calculates the learning rate based on the natural exponential decay function. |
| mindspore.nn.piecewise_constant_lr | Gets the piecewise constant learning rate. |
| mindspore.nn.polynomial_decay_lr | Calculates the learning rate based on the polynomial decay function. |
| mindspore.nn.warmup_lr | Gets the warm-up learning rate. |