MindSpore Lite
MindSpore Lite provides lightweight AI inference acceleration for diverse hardware devices, enabling intelligent applications. It offers algorithm engineers and data scientists an end-to-end solution with a developer-friendly, efficient, and flexible deployment experience, empowering a flourishing AI software and hardware application ecosystem.

Advantages of MindSpore Lite
Ultimate Performance
With efficient kernel algorithms and assembly-level optimization, MindSpore Lite supports heterogeneous scheduling of CPUs, GPUs, and NPUs, fully unleashing hardware computing power and minimizing inference latency and power consumption.
Lightweight
Provides an ultra-lightweight solution. Model quantization and compression make models smaller and faster, enabling AI deployment even in highly resource-constrained environments.
All-scenario Support
Supports mobile operating systems (iOS, Android, and LiteOS) and AI applications on a range of intelligent devices, including mobile phones, large screens, tablets, and IoT devices.
Efficient Deployment
With model compression, data processing, and unified intermediate representations (IRs) for training and inference, MindSpore Lite is compatible with MindSpore, TensorFlow Lite, Caffe, and ONNX models, facilitating quick deployment.
MindSpore Lite Workflow
Model Selection
Select a new model or retrain an existing model.
Model Conversion
Use a tool to convert a model into an on-device model that is easy to deploy.
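As a sketch of this step, MindSpore Lite ships a command-line converter (converter_lite) in its release package; the model file name below is an illustrative assumption:

```shell
# Convert a TensorFlow Lite model into the MindSpore Lite on-device format.
# --fmk selects the source framework (TFLITE, CAFFE, ONNX, or MINDIR);
# the output file name is given without an extension.
./converter_lite \
    --fmk=TFLITE \
    --modelFile=mobilenet_v2.tflite \
    --outputFile=mobilenet_v2
```

The resulting on-device model can then be bundled with the application and loaded by the MindSpore Lite runtime.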
Application Deployment
Introduce a model to an application and load the model to a mobile or embedded device.
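A minimal sketch of loading and running a converted model with the MindSpore Lite Python API (API names follow recent MindSpore Lite releases and should be verified against your installed version; the model path and device target are assumptions):

```python
import numpy as np
import mindspore_lite as mslite  # MindSpore Lite runtime package

# Configure the execution context; "gpu" or "ascend" may also be available,
# depending on the device and the build.
context = mslite.Context()
context.target = ["cpu"]

# Load the converted on-device model (the file name is an assumption).
model = mslite.Model()
model.build_from_file("mobilenet_v2.ms", mslite.ModelType.MINDIR, context)

# Fill the input tensor (zeros here as a placeholder) and run inference.
inputs = model.get_inputs()
inputs[0].set_data_from_numpy(np.zeros(inputs[0].shape, dtype=np.float32))
outputs = model.predict(inputs)
print(outputs[0].get_data_to_numpy().shape)
```

On Android or in embedded C/C++ applications, the equivalent Java and C++ runtime APIs follow the same build-then-predict pattern.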
Common Scenarios
Image Classification
Classify image content (e.g., animals, goods, or natural landscapes) using a classification model.
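Before an image reaches a classification model, it is typically resized and normalized into a fixed-shape float tensor. A minimal NumPy sketch of that common preprocessing (the 224x224 input size, ImageNet mean/std constants, and NCHW layout are illustrative assumptions, not MindSpore Lite requirements):

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 uint8 RGB image into a normalized 1x3xHxW float tensor.

    Assumes the image is already resized to the model's input size.
    The mean/std values are the widely used ImageNet constants.
    """
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = image.astype(np.float32) / 255.0   # scale pixels to [0, 1]
    x = (x - mean) / std                   # per-channel normalization
    # HWC -> CHW, then add a leading batch dimension (a common input layout)
    return np.expand_dims(x.transpose(2, 0, 1), axis=0)

# A dummy 224x224 RGB image standing in for a real photo
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
batch = preprocess(img)
print(batch.shape)  # (1, 3, 224, 224)
```

The resulting tensor is what gets copied into the model's input buffer; the class with the highest output score is the predicted label.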
Downstream Community
Huawei Developers
MindSpore Lite Kit is a lightweight AI engine built into HarmonyOS NEXT to help you build full-scenario intelligent applications.