mindspore_lite
Context
| Class | Description |
| --- | --- |
| Context | Context is used to store environment variables during execution. |
| DeviceInfo | DeviceInfo base class. |
| CPUDeviceInfo | Helper class to set CPU device info; it inherits from the DeviceInfo base class. |
| GPUDeviceInfo | Helper class to set GPU device info; it inherits from the DeviceInfo base class. |
| AscendDeviceInfo | Helper class to set Ascend device info; it inherits from the DeviceInfo base class. |
Converter
- class mindspore_lite.FmkType
When converting a third-party or MindSpore model to a MindSpore Lite model, FmkType defines the framework type of the input model.
For details, see FmkType. Run the following command to import the package:
from mindspore_lite import FmkType
Type
Currently, the following third-party model framework types are supported:
FmkType.TF, FmkType.CAFFE, FmkType.ONNX, FmkType.MINDIR, FmkType.TFLITE and FmkType.PYTORCH. The following table lists the details.

| Definition | Description |
| --- | --- |
| FmkType.TF | TensorFlow model's framework type; the model uses .pb as its suffix |
| FmkType.CAFFE | Caffe model's framework type; the model uses .prototxt as its suffix |
| FmkType.ONNX | ONNX model's framework type; the model uses .onnx as its suffix |
| FmkType.MINDIR | MindSpore model's framework type; the model uses .mindir as its suffix |
| FmkType.TFLITE | TensorFlow Lite model's framework type; the model uses .tflite as its suffix |
| FmkType.PYTORCH | PyTorch model's framework type; the model uses .pt or .pth as its suffix |
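As an illustration of the suffix table above, the helper below (hypothetical, not part of mindspore_lite) picks the FmkType member name that matches a model file's suffix:

```python
import os

# Suffix-to-FmkType mapping taken from the table above.
SUFFIX_TO_FMK = {
    ".pb": "TF",
    ".prototxt": "CAFFE",
    ".onnx": "ONNX",
    ".mindir": "MINDIR",
    ".tflite": "TFLITE",
    ".pt": "PYTORCH",
    ".pth": "PYTORCH",
}

def guess_fmk_type(model_path):
    """Return the FmkType member name matching the file suffix, or None."""
    ext = os.path.splitext(model_path)[1].lower()
    return SUFFIX_TO_FMK.get(ext)

print(guess_fmk_type("resnet50.tflite"))  # TFLITE
```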
Converter is used to convert third-party models.
Model
- class mindspore_lite.ModelType
When loading or building a model from file, ModelType defines the type of input model file.
For details, see ModelType. Run the following command to import the package:
from mindspore_lite import ModelType
Type
Currently, the following types of input model file are supported:
ModelType.MINDIR and ModelType.MINDIR_LITE. The following table lists the details.

| Definition | Description |
| --- | --- |
| ModelType.MINDIR | MindSpore model's type; the model uses .mindir as its suffix |
| ModelType.MINDIR_LITE | MindSpore Lite model's type; the model uses .ms as its suffix |
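To make the suffix convention concrete, the sketch below (a hypothetical helper, not part of mindspore_lite) checks that a model file's suffix matches the ModelType it will be loaded as:

```python
# ModelType-to-suffix mapping taken from the table above.
MODEL_TYPE_SUFFIX = {
    "MINDIR": ".mindir",
    "MINDIR_LITE": ".ms",
}

def check_model_file(path, model_type):
    """Raise ValueError if the file suffix does not match the ModelType."""
    expected = MODEL_TYPE_SUFFIX[model_type]
    if not path.endswith(expected):
        raise ValueError(f"{model_type} models must use the {expected} suffix")
    return True

print(check_model_file("mobilenet.ms", "MINDIR_LITE"))  # True
```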
| Class | Description |
| --- | --- |
| Model | The Model class is used to define a MindSpore model, facilitating computational graph management. |
| RunnerConfig | RunnerConfig defines the runner configuration of one or more servables. |
| ModelParallelRunner | The ModelParallelRunner class is used to define a MindSpore ModelParallelRunner, facilitating Model management. |
Tensor
- class mindspore_lite.DataType
Create a data type object of MindSpore Lite.
For details, see DataType. Run the following command to import the package:
from mindspore_lite import DataType
Type
Currently, MindSpore Lite supports Int type, Uint type and Float type. The following table lists the details.

| Definition | Description |
| --- | --- |
| DataType.UNKNOWN | Does not match any of the following known types |
| DataType.BOOL | Boolean True or False |
| DataType.INT8 | 8-bit integer |
| DataType.INT16 | 16-bit integer |
| DataType.INT32 | 32-bit integer |
| DataType.INT64 | 64-bit integer |
| DataType.UINT8 | Unsigned 8-bit integer |
| DataType.UINT16 | Unsigned 16-bit integer |
| DataType.UINT32 | Unsigned 32-bit integer |
| DataType.UINT64 | Unsigned 64-bit integer |
| DataType.FLOAT16 | 16-bit floating-point number |
| DataType.FLOAT32 | 32-bit floating-point number |
| DataType.FLOAT64 | 64-bit floating-point number |
| DataType.INVALID | The maximum threshold value of DataType, used to prevent invalid types; corresponds to INT32_MAX in C++ |
Usage
Since mindspore_lite.Tensor in the Python API directly wraps the C++ API with pybind11, DataType has a one-to-one correspondence between the Python API and the C++ API. DataType is modified through the set and get methods of the Tensor class:

- set_data_type: looks up data_type_py_cxx_map with the Python API DataType as the key, obtains the corresponding C++ API DataType, and passes it to the set_data_type method of the C++ API.
- get_data_type: obtains the C++ API DataType through the get_data_type method of the C++ API, looks up data_type_cxx_py_map with the C++ API DataType as the key, and returns the Python API DataType.
Here is an example:
```python
from mindspore_lite import DataType
from mindspore_lite import Tensor

tensor = Tensor()
tensor.set_data_type(DataType.FLOAT32)
data_type = tensor.get_data_type()
print(data_type)
```
The result is as follows:
DataType.FLOAT32
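The two-way mapping described above can be sketched in pure Python. The enums and dictionaries below are simplified stand-ins for the real pybind11-bound types, not the actual mindspore_lite internals:

```python
from enum import Enum

# Simplified stand-ins for the Python-API and C++-API DataType enums.
class PyDataType(Enum):
    FLOAT32 = "FLOAT32"

class CxxDataType(Enum):
    kNumberTypeFloat32 = 43  # illustrative value only

# Counterparts of data_type_py_cxx_map and data_type_cxx_py_map.
data_type_py_cxx_map = {PyDataType.FLOAT32: CxxDataType.kNumberTypeFloat32}
data_type_cxx_py_map = {CxxDataType.kNumberTypeFloat32: PyDataType.FLOAT32}

class Tensor:
    def set_data_type(self, py_type):
        # Translate the Python-API enum to the C++-API enum before storing.
        self._cxx_type = data_type_py_cxx_map[py_type]

    def get_data_type(self):
        # Translate back from the C++-API enum to the Python-API enum.
        return data_type_cxx_py_map[self._cxx_type]

tensor = Tensor()
tensor.set_data_type(PyDataType.FLOAT32)
print(tensor.get_data_type())  # PyDataType.FLOAT32
```

The same pattern applies to Format below, with format_py_cxx_map and format_cxx_py_map in place of the DataType dictionaries.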
- class mindspore_lite.Format
MindSpore Lite's tensor type. For example: Format.NCHW.
For details, see Format. Run the following command to import the package:
from mindspore_lite import Format
Type
See the following table for supported formats:
| Definition | Description |
| --- | --- |
| Format.DEFAULT | Default format |
| Format.NCHW | Store tensor data in the order of batch N, channel C, height H and width W |
| Format.NHWC | Store tensor data in the order of batch N, height H, width W and channel C |
| Format.NHWC4 | C-axis 4-byte aligned Format.NHWC |
| Format.HWKC | Store tensor data in the order of height H, width W, kernel num K and channel C |
| Format.HWCK | Store tensor data in the order of height H, width W, channel C and kernel num K |
| Format.KCHW | Store tensor data in the order of kernel num K, channel C, height H and width W |
| Format.CKHW | Store tensor data in the order of channel C, kernel num K, height H and width W |
| Format.KHWC | Store tensor data in the order of kernel num K, height H, width W and channel C |
| Format.CHWK | Store tensor data in the order of channel C, height H, width W and kernel num K |
| Format.HW | Store tensor data in the order of height H and width W |
| Format.HW4 | W-axis 4-byte aligned Format.HW |
| Format.NC | Store tensor data in the order of batch N and channel C |
| Format.NC4 | C-axis 4-byte aligned Format.NC |
| Format.NC4HW4 | C-axis 4-byte aligned and W-axis 4-byte aligned Format.NCHW |
| Format.NCDHW | Store tensor data in the order of batch N, channel C, depth D, height H and width W |
| Format.NWC | Store tensor data in the order of batch N, width W and channel C |
| Format.NCW | Store tensor data in the order of batch N, channel C and width W |
| Format.NDHWC | Store tensor data in the order of batch N, depth D, height H, width W and channel C |
| Format.NC8HW8 | C-axis 8-byte aligned and W-axis 8-byte aligned Format.NCHW |
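To illustrate what the layout names above mean, the pure-Python sketch below (not part of mindspore_lite) reorders a flat NCHW buffer into NHWC element order:

```python
def nchw_to_nhwc(data, n, c, h, w):
    """Reorder a flat NCHW buffer into NHWC element order."""
    out = []
    for ni in range(n):
        for hi in range(h):
            for wi in range(w):
                for ci in range(c):
                    # Index of element (ni, ci, hi, wi) in the NCHW buffer.
                    out.append(data[((ni * c + ci) * h + hi) * w + wi])
    return out

# A 1x2x2x2 tensor: channel 0 holds 0..3, channel 1 holds 4..7.
flat_nchw = list(range(8))
print(nchw_to_nhwc(flat_nchw, 1, 2, 2, 2))  # [0, 4, 1, 5, 2, 6, 3, 7]
```

In NHWC order, the two channel values for each spatial position sit next to each other, which is why the interleaved result alternates between the two channels.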
Usage
Since mindspore_lite.Tensor in the Python API directly wraps the C++ API with pybind11, Format has a one-to-one correspondence between the Python API and the C++ API. Format is modified through the set and get methods of the Tensor class:

- set_format: looks up format_py_cxx_map with the Python API Format as the key, obtains the corresponding C++ API Format, and passes it to the set_format method of the C++ API.
- get_format: obtains the C++ API Format through the get_format method of the C++ API, looks up format_cxx_py_map with the C++ API Format as the key, and returns the Python API Format.
Here is an example:
```python
from mindspore_lite import Format
from mindspore_lite import Tensor

tensor = Tensor()
tensor.set_format(Format.NHWC)
tensor_format = tensor.get_format()
print(tensor_format)
```
The result is as follows:
Format.NHWC
The Tensor class defines a tensor in MindSpore Lite.