Acronyms and Abbreviations



Ascend

Name of the Huawei Ascend series of AI chips.


CCE

Cube-based Computing Engine, an operator development tool oriented to hardware architecture programming.


CCE-C

Cube-based Computing Engine C, C code developed using CCE.


CheckPoint

MindSpore model training checkpoint, used to save model parameters for inference or retraining.


CIFAR-10

An open-source image data set that contains 60000 32 x 32 color images in 10 categories, with 6000 images per category. There are 50000 training images and 10000 test images.


CIFAR-100

An open-source image data set that contains 100 categories, each with 600 images: 500 training images and 100 test images per category.


Da Vinci

Da Vinci architecture, a new chip architecture developed by Huawei.


EulerOS

Euler operating system, developed by Huawei based on the standard Linux kernel.

FC Layer

Fully connected layer, which acts as a classifier in the entire convolutional neural network.
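As an illustration of what a fully connected layer computes, here is a minimal NumPy sketch (plain NumPy, not MindSpore's API; the shapes and names are illustrative):

```python
import numpy as np

def fc_layer(x, w, b):
    """Fully connected layer: every input feature is connected to
    every output unit through the weight matrix w."""
    return x @ w + b

# Toy classifier head: 4 input features mapped to 3 class scores.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))   # batch of 2 samples
w = rng.standard_normal((4, 3))   # weight matrix
b = np.zeros(3)                   # bias vector

logits = fc_layer(x, w, b)
print(logits.shape)               # (2, 3): one score per class per sample
```

In a convolutional network, `x` would be the flattened feature map produced by the convolutional layers, and the scores would typically be fed to a softmax.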


FE

Fusion Engine, which connects to GE and TBE operators and is responsible for loading and managing the operator information library and managing fusion rules.


FP16

16-bit floating point, a half-precision floating-point format that consumes less memory.


FP32

32-bit floating point, a single-precision floating-point format.
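The trade-off between the two formats, half the memory for roughly half the significant digits, can be seen directly in NumPy (illustrative and framework-independent):

```python
import numpy as np

pi = 3.14159265358979

# FP16 (half precision): 2 bytes, about 3 significant decimal digits.
# FP32 (single precision): 4 bytes, about 7 significant decimal digits.
half = np.float16(pi)
single = np.float32(pi)

print(np.dtype(np.float16).itemsize)   # 2 bytes per value
print(np.dtype(np.float32).itemsize)   # 4 bytes per value
print(abs(float(half) - pi))           # rounding error around 1e-3
print(abs(float(single) - pi))         # rounding error around 1e-7
```

This is why FP16 is attractive for training and inference on memory-bound hardware, usually combined with FP32 accumulation to limit the precision loss.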


GE

Graph Engine, the MindSpore computational graph execution engine, which performs hardware-based optimizations (such as operator fusion and memory overcommitment) on the front-end computational graph and launches tasks on the device side.


GHLO

Graph High Level Optimization. GHLO includes hardware-independent optimizations (such as dead code elimination), auto parallel, and auto differentiation.


GLLO

Graph Low Level Optimization. GLLO includes hardware-related optimizations and in-depth optimizations that combine hardware and software, such as operator fusion and buffer fusion.

Graph Mode

MindSpore static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution, featuring high performance.


HCCL

Huawei Collective Communication Library, which implements multi-device, multi-card communication based on Da Vinci architecture chips.


ImageNet

An image database organized according to the WordNet hierarchy (currently nouns only).


LeNet

A classical convolutional neural network architecture proposed by Yann LeCun and others.


Loss

The difference between the predicted value and the actual value, used as a metric for evaluating model quality in deep learning.
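For example, mean squared error, one common loss function, averages the squared differences between predictions and labels; a plain NumPy sketch (not a specific MindSpore API):

```python
import numpy as np

def mse_loss(predicted, actual):
    """Mean squared error: the average squared difference between
    the model's predictions and the ground-truth values."""
    return np.mean((predicted - actual) ** 2)

predicted = np.array([2.5, 0.0, 2.0])
actual    = np.array([3.0, -0.5, 2.0])
print(mse_loss(predicted, actual))   # 0.1666...; lower means a better fit
```

Training minimizes such a loss by adjusting model parameters, typically via gradient descent on the loss value.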


LSTM

Long short-term memory, an artificial recurrent neural network (RNN) architecture used for processing and predicting events with long intervals and delays in time series.


Manifest

A data format file adopted by Huawei ModelArts.


ME

Mind Expression, the MindSpore front end, which compiles user source code into computational graphs, controls execution during training, maintains contexts (in non-sink mode), and dynamically generates graphs (in PyNative mode).


MindArmour

The MindSpore security component, used for AI adversarial example management, AI model attack defense and enhancement, and AI model robustness evaluation.


MindData

The MindSpore data framework, which provides data loading, data augmentation, dataset management, and visualization.


MindInsight

The MindSpore visualization component, which visualizes information such as scalars, images, computational graphs, and model hyperparameters.


MindSpore

A Huawei-led open-source deep learning framework.

MindSpore Predict

A lightweight deep neural network inference engine that provides on-device inference for models trained by MindSpore.

MNIST database

Modified National Institute of Standards and Technology database, a large database of handwritten digits commonly used to train image processing systems.

PyNative Mode

MindSpore dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model.


ResNet-50

Residual Neural Network 50, a 50-layer residual neural network proposed by Kaiming He and others at Microsoft Research.


Schema

Data set structure definition file, which defines the fields contained in a data set and their types.


Summary

An operator that monitors the values of tensors in the network. It is a peripheral operation in the graph and does not affect the data flow.


TBE

Tensor Boost Engine, an operator development tool extended from the Tensor Virtual Machine (TVM) framework.


TFRecord

A data format defined by TensorFlow.