Release Notes

MindSpore Lite 2.0.0-rc1 Release Notes

Major Features and Improvements

MindSpore Lite Cloud Inference

The original MindSpore Lite is mainly used for edge devices such as mobile phones and head units. Cloud inference is added to support scenarios with multiple backend hardware resources on the cloud. It supports Ascend and NVIDIA GPU inference cards and efficiently utilizes multi-core resources on the cloud.

Cloud inference that was originally integrated through the MindSpore training framework can be migrated to MindSpore Lite. For details, see Quick Start to Cloud-side Inference. To retain the original integration method, see Inference.

  • [STABLE] Supports MindIR model files.

  • [STABLE] Third-party ONNX, TensorFlow, and Caffe models can be converted to MindIR model files using the MindSpore Lite conversion tool.

  • [STABLE] A single release package supports multiple hardware backends: Ascend 310/310P/910, NVIDIA GPU, and CPU.

  • [STABLE] Supports the Model interface and the ModelParallelRunner concurrent inference interface (see the Python sketch after this list).

  • [STABLE] Supports C++, Python, and Java inference interfaces.
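
As an illustration of the Model interface above, here is a minimal Python sketch of cloud-side MindIR inference. The model file name mobilenetv2.mindir and the random input data are placeholders, and the calls follow the 2.0 mindspore_lite Python package; treat the exact signatures as assumptions to be verified against the Python API documentation.

    import numpy as np
    import mindspore_lite as mslite

    # Select the backend; "cpu" can be replaced with "ascend" or "gpu"
    # on a machine equipped with the corresponding inference cards.
    context = mslite.Context()
    context.target = ["cpu"]

    # Load a MindIR model file exported from training or produced by the
    # MindSpore Lite conversion tool.
    model = mslite.Model()
    model.build_from_file("mobilenetv2.mindir", mslite.ModelType.MINDIR, context)

    # Fill the input tensors (random data here, for illustration only).
    inputs = model.get_inputs()
    for tensor in inputs:
        tensor.set_data_from_numpy(
            np.random.rand(*tensor.shape).astype(np.float32))

    # Run inference; predict returns the list of output tensors.
    outputs = model.predict(inputs)
    print(outputs[0].get_data_to_numpy())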

API

  • The original Python API had too many configuration parameters and was complex to use, so its usability is optimized in version 2.0. The optimizations include adjustments to class construction methods and class attributes. In addition, the Python APIs in version 2.0 and later will be integrated into the cloud-side inference scenario and are incompatible with the Python APIs of earlier versions. For details, see Python API.
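
To make the nature of the adjustment concrete, the following minimal sketch contrasts the two styles. The 1.x calls shown in the comments and the specific attribute names (context.cpu.thread_num, context.cpu.enable_fp16) are assumptions about the attribute-style design; consult the Python API documentation for the authoritative forms.

    import mindspore_lite as mslite

    # Earlier versions (1.x style, shown for contrast): per-device options
    # were wrapped in device-info objects appended to the context, e.g.
    #   context = mslite.Context(thread_num=2)
    #   context.append_device_info(mslite.CPUDeviceInfo(enable_fp16=False))

    # Version 2.0 style: the backend is chosen through the `target`
    # attribute, and per-device options are plain class attributes.
    context = mslite.Context()
    context.target = ["cpu"]
    context.cpu.thread_num = 2
    context.cpu.enable_fp16 = False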