List of ONNX Operators Supported by MindSpore Lite


- None of the following operators support int64 type input.

- Currently, you can set the environment variable `export KEEP_ORIGIN_DTYPE=1` to preserve the int64 data type. Consider this option when overflow occurs with the int32 data type. Note that this is an experimental option and will be removed in future updates.
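As a minimal sketch of the note above, the variable must be exported in the same shell session before the conversion tool runs; the model file names below are placeholders, assuming the MindSpore Lite `converter_lite` binary is in the current directory:

```shell
# Experimental: keep int64 inputs instead of narrowing them to int32.
# Must be set in the environment of the converter process.
export KEEP_ORIGIN_DTYPE=1

# Then run the conversion as usual, e.g. (placeholder paths):
# ./converter_lite --fmk=ONNX --modelFile=model.onnx --outputFile=model
```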

| MindSpore Lite Operator Names | Operator Functions | Corresponding ONNX Operators | Operator Specifications |
| --- | --- | --- | --- |
| Abs | Element-wise absolute value | Abs | Does not support the uint8 type. Does not support empty input tensor quantization parameters. |
| Activation | Activation functions | Relu, LeakyRelu, PRelu, Elu, Tanh, Sigmoid, HardSigmoid, Softplus, Gelu | - |
| AddFusion | Element-wise addition | Add, Int8Add | - |
| AdderFusion | Addition-based convolution operation | adder_f | - |
| ArgmaxFusion | Find the index of the maximum value along a given dimension | ArgMax | Does not support the uint8 type. Does not support empty input tensor quantization parameters. |
| ArgminFusion | Find the index of the minimum value along a given dimension | ArgMin | - |
| AvgPoolFusion | Average pooling | AveragePool, GlobalAveragePool, Int8AveragePool | - |
| BatchNorm | Batch normalization | BatchNormalization | - |
| BiasAdd | Add a bias vector to the input tensor | BiasAdd | - |
| BroadcastTo | Broadcast the input tensor to a larger shape | Expand | - |
| Cast | Data type conversion | Cast | The following conversions are not supported: fp32 to int8, fp32 to uint32, int32 to int8, int32 to uint32, int32 to uint8, int8 to bool, and int8 to uint8. |
| Ceil | Round up to the nearest integer | Ceil | - |
| Clip | Restrict elements to a given range | Clip | Only supports converting clip(0, 6) to Relu6. |
| Concat | Concatenate tensors | Concat | - |
| ConstantOfShape | Generate a tensor with the shape given by the input, filled with a specified constant | ConstantOfShape | - |
| Conv2DFusion | 2D convolution | Conv, Int8Conv, ConvRelu, Int8ConvRelu | - |
| Conv2dTransposeFusion | Transposed convolution | ConvTranspose | - |
| Cos | Element-wise cosine | Cos | - |
| CumSum | Cumulative sum of elements | CumSum | - |
| DepthToSpace | Rearrange depth data into spatial dimensions | DepthToSpace | Does not support the uint8 type. Does not support inputs with unknown dimensions. |
| DivFusion | Element-wise division | Div | Does not support division by zero. |
| Dropout | Randomly set some elements of the input tensor to zero | Dropout | - |
| DynamicQuant | Dynamically quantize floating-point tensors to the uint8 type | DynamicQuantizeLinear | - |
| Eltwise | Element-wise operations | Sum, Max | Only supports two inputs. |
| Elu | Activation function that applies an exponential correction to negative inputs | Elu, NonMaxSuppression | - |
| Equal | Element-wise equality comparison | Equal | Does not support uint8 input; int8 input does not support bool output. |
| Erf | Error function | Erf | - |
| ExpFusion | Element-wise exponential | Exp | - |
| Flatten | Flatten the input tensor into a 2D tensor | Flatten | Does not support the uint8 type. |
| Floor | Round down to the nearest integer | Floor | - |
| FusedBatchNorm | Normalize the input | BatchNormalization | - |
| Gather | Collect elements at specified indices along a single dimension | Gather | Does not support the uint8 type. Does not support the QuantType_QUANT_NONE quantization type. |
| GatherD | Collect elements from the input tensor according to an index tensor | GatherElements | - |
| GatherNd | Gather slices of the input tensor into a new tensor with dimensions specified by indices | GatherND | - |
| Greater | Element-wise comparison of two tensors, returning a logical result (True/False) indicating whether A > B | Greater | - |
| GreaterEqual | Element-wise comparison of two tensors, returning a logical result indicating whether A >= B | GreaterOrEqual | - |
| InstanceNorm | Instance normalization | InstanceNormalization | - |
| LeakyReLU | Leaky ReLU activation function, which assigns a small slope to negative inputs | LeakyRelu | - |
| Less | Element-wise comparison of two tensors, returning a logical result indicating whether A < B | Less | - |
| Log | Element-wise natural logarithm | Log | Does not accept negative inputs. |
| LogicalAnd | Element-wise logical AND | And | - |
| LogicalNot | Element-wise logical NOT | Not | - |
| LogicalOr | Element-wise logical OR | Or | - |
| LogSoftmax | Apply softmax to the input, then take the logarithm of the result | LogSoftmax | Does not support inf input. |
| LRN | Local response normalization, used to prevent overfitting | LRN | - |
| LSTM | Long short-term memory network unit | LSTM | - |
| MatMulFusion | Matrix multiplication of two input tensors, with optional bias addition | MatMul, Gemm | - |
| Maximum | Element-wise maximum | Max | - |
| MaxPoolFusion | Max pooling | MaxPool, GlobalMaxPool | - |
| Minimum | Element-wise minimum | Min | - |
| Mod | Remainder of element-wise division | Mod | - |
| MulFusion | Element-wise multiplication | Mul | - |
| Neg | Element-wise negation | Neg | - |
| NonMaxSuppression | Non-maximum suppression | NonMaxSuppression | - |
| NonZero | Return the indices of all non-zero elements in the input tensor | NonZero | - |
| OneHot | Convert integer index tensors into one-hot encoded representations | OneHot | - |
| PadFusion | Pad the input tensor to the desired size | Pad | Does not support the int32 type. |
| PowFusion | Element-wise exponentiation | Pow | Only supports a single constant as the exponent. |
| PReLUFusion | PRelu activation function | PRelu | - |
| RandomNormal | Generate a tensor of values randomly sampled from a normal (Gaussian) distribution | RandomNormal | - |
| Range | Generate a sequence of elements within a specified range | Range | - |
| Reciprocal | Element-wise reciprocal | Reciprocal | - |
| ReduceFusion | Reduction operations | ReduceMean, ReduceMax, ReduceMin, ReduceProd, ReduceSum, ReduceSumSquare, ReduceL2, ReduceL1, ReduceLogSum | - |
| Reshape | Change the shape of a tensor while keeping the total number of elements unchanged | Reshape, Flatten | - |
| Resize | Upsample or resize the input tensor | Resize, Upsample | - |
| ReverseSequence | Reverse variable-length slices of the input tensor | ReverseSequence | - |
| Round | Round to the nearest integer | Round | - |
| ScatterNd | Scatter values from the input tensor to specified positions in the output tensor according to indices | ScatterND | - |
| ScatterNdUpdate | Update values of the input data using the given values and indices | ScatterNdUpdate | - |
| Shape | Obtain the shape of a tensor | Shape | - |
| Sin | Element-wise sine | Sin | - |
| Size | Obtain the number of elements in a tensor | Size | - |
| SliceFusion | Tensor slicing | Slice | - |
| Softmax | Softmax normalization | Softmax | - |
| SpaceToDepth | Move values from the height and width dimensions into the depth dimension | SpaceToDepth | - |
| Splice | Splice multiple slices or ranges of the input tensor along the specified axis | Splice | - |
| Split | Split the input tensor into multiple smaller tensors along the specified axis | Split | - |
| Sqrt | Element-wise square root | Sqrt | - |
| Squeeze | Remove dimensions of size 1 | Squeeze | - |
| StridedSlice | Tensor slicing with strides | Slice, DynamicSlice | - |
| SubFusion | Element-wise subtraction | Sub | - |
| TileFusion | Tile the input tensor by repeating it along each dimension | Tile | Does not support the int8 type. |
| TopKFusion | Return the top K elements of the input tensor | TopK | - |
| Transpose | Tensor transpose | Transpose, Int8Transpose | - |
| Tril | Lower triangular part of a matrix | Trilu (attribute upper=0) | - |
| Triu | Upper triangular part of a matrix | Trilu (attribute upper=1) | - |
| Unsqueeze | Insert a new dimension of size 1 into the input tensor | Unsqueeze | - |
| Where | Element selection | NonZero, Where | - |
| Other operators supported by the conversion tool | - | Constant, Atan, Asin, Tan, Loop, Dropout, If, Identity, Int8GivenIntTensorFill, Int8GivenTensorFill, Int8Quantize, Int8Dequantize, LpNormalization | Operators that the conversion tool supports but that need no dedicated runtime implementation are typically optimized away during conversion: they are either fused with or replaced by other operators. |
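Before converting a model, it can be useful to screen its node types against the table above. A minimal sketch in plain Python: `SUPPORTED` is a small excerpt of the ONNX column (not the full list), and `unsupported_ops` is a hypothetical helper; with a real model you would collect the op types from `onnx.load(path).graph.node` via each node's `op_type` field.

```python
# Small excerpt of the supported ONNX operator names from the table above.
SUPPORTED = {
    "Abs", "Add", "AveragePool", "Cast", "Concat", "Conv", "Gemm",
    "MatMul", "MaxPool", "Relu", "Reshape", "Sigmoid", "Softmax",
    "Transpose",
}

def unsupported_ops(op_types, supported=SUPPORTED):
    """Return op types not in the supported set, deduplicated, in first-seen order."""
    seen = set()
    missing = []
    for op in op_types:
        if op not in supported and op not in seen:
            seen.add(op)
            missing.append(op)
    return missing

# Example: op types as they might be gathered from an ONNX graph.
ops = ["Conv", "Relu", "CustomOpX", "MatMul", "CustomOpX"]
print(unsupported_ops(ops))  # → ['CustomOpX']
```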