Class Serialization
- Defined in File serialization.h 
Class Documentation
- 
class Serialization
- The Serialization class provides methods for reading and writing model files. 
- Public Static Functions 
- 
static inline Status Load(const void *model_data, size_t data_size, ModelType model_type, Graph *graph, const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm)
- Loads a model from a memory buffer (see the usage sketch below). - Parameters
- model_data – [in] A buffer containing the model file data. 
- data_size – [in] The size of the buffer. 
- model_type – [in] The type of the model file, options are ModelType::kMindIR and ModelType::kOM. 
- graph – [out] The output parameter, an object that holds the graph data. 
- dec_key – [in] The decryption key, with a key length of 16, 24, or 32. Not supported on MindSpore Lite. 
- dec_mode – [in] The decryption mode, options are AES-GCM and AES-CBC. Not supported on MindSpore Lite. 
 
- Returns
- Status. 
 
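A minimal usage sketch of the buffer overload above. The include path and the ReadFileToBuffer helper are assumptions, not part of this reference; only the Serialization::Load call itself is documented here.

```cpp
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

#include "include/api/serialization.h"  // assumed header location

// Helper (not part of the API): reads the whole model file into a byte vector.
static std::vector<char> ReadFileToBuffer(const std::string &path) {
  std::ifstream ifs(path, std::ios::binary);
  return std::vector<char>((std::istreambuf_iterator<char>(ifs)),
                           std::istreambuf_iterator<char>());
}

mindspore::Status LoadFromBuffer(const std::string &path, mindspore::Graph *graph) {
  std::vector<char> buf = ReadFileToBuffer(path);
  // The default dec_key/dec_mode arguments are used, i.e. the model is not encrypted.
  return mindspore::Serialization::Load(buf.data(), buf.size(),
                                        mindspore::ModelType::kMindIR, graph);
}
```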
 - 
static inline Status Load(const std::string &file, ModelType model_type, Graph *graph, const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm)
- Loads a model file from a path (see the usage sketch below). - Parameters
- file – [in] The path of the model file. 
- model_type – [in] The type of the model file, options are ModelType::kMindIR and ModelType::kOM. 
- graph – [out] The output parameter, an object that holds the graph data. 
- dec_key – [in] The decryption key, with a key length of 16, 24, or 32. Not supported on MindSpore Lite. 
- dec_mode – [in] The decryption mode, options are AES-GCM and AES-CBC. Not supported on MindSpore Lite. 
 
- Returns
- Status. 
 
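A sketch of the file-path overload, assuming the same header location and the mindspore::kSuccess status code used in typical MindSpore C++ samples.

```cpp
#include <string>

#include "include/api/serialization.h"  // assumed header location

bool LoadGraphFromFile(const std::string &model_path, mindspore::Graph *graph) {
  // kMindIR for a MindIR file; use ModelType::kOM for an OM file.
  auto status = mindspore::Serialization::Load(model_path,
                                               mindspore::ModelType::kMindIR, graph);
  return status == mindspore::kSuccess;  // kSuccess comparison assumed from the Status API
}
```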
 - 
static inline Status Load(const std::vector<std::string> &files, ModelType model_type, std::vector<Graph> *graphs, const Key &dec_key = {}, const std::string &dec_mode = kDecModeAesGcm)
- Loads multiple models from multiple files (see the usage sketch below). MindSpore Lite does not provide this feature. - Parameters
- files – [in] The paths of the model files. 
- model_type – [in] The type of the model files, options are ModelType::kMindIR and ModelType::kOM. 
- graphs – [out] The output parameter, objects that hold the graph data. 
- dec_key – [in] The decryption key, with a key length of 16, 24, or 32. 
- dec_mode – [in] The decryption mode, options are AES-GCM and AES-CBC. 
 
- Returns
- Status. 
 
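A sketch of the multi-file overload (not available on MindSpore Lite), assuming the same header location as above.

```cpp
#include <string>
#include <vector>

#include "include/api/serialization.h"  // assumed header location

// Loads several MindIR files in a single call.
mindspore::Status LoadAll(const std::vector<std::string> &files,
                          std::vector<mindspore::Graph> *graphs) {
  return mindspore::Serialization::Load(files, mindspore::ModelType::kMindIR, graphs);
}
```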
 - 
static inline Status SetParameters(const std::map<std::string, Buffer> &parameters, Model *model)
- Configures model parameters (see the usage sketch below). MindSpore Lite does not provide this feature. - Parameters
- parameters – [in] The parameters. 
- model – [in] The model. 
 
- Returns
- Status. 
 
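A sketch of handing a prepared name-to-Buffer map to a model (not available on MindSpore Lite); how the Buffer contents are produced is outside the scope of this reference and left out of the sketch.

```cpp
#include <map>
#include <string>

#include "include/api/serialization.h"  // assumed header location

// `parameters` maps parameter names to their serialized data.
mindspore::Status ApplyParameters(const std::map<std::string, mindspore::Buffer> &parameters,
                                  mindspore::Model *model) {
  return mindspore::Serialization::SetParameters(parameters, model);
}
```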
 - 
static inline Status ExportModel(const Model &model, ModelType model_type, Buffer *model_data, QuantizationType quantization_type = kNoQuant, bool export_inference_only = true, const std::vector<std::string> &output_tensor_name = {})
- Exports a training model to a memory buffer (see the usage sketch below). MindSpore Lite does not provide this feature. - Parameters
- model – [in] The model data. 
- model_type – [in] The model file type. 
- model_data – [out] The model buffer. 
- quantization_type – [in] The quantization type. 
- export_inference_only – [in] Whether to export an inference-only model. 
- output_tensor_name – [in] The names of the output tensors of the exported inference model; empty by default, which exports the complete inference model. 
 
- Returns
- Status. 
 
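A sketch of exporting an inference-only model into a Buffer (not available on MindSpore Lite), assuming the kNoQuant default and the header location shown earlier.

```cpp
#include "include/api/serialization.h"  // assumed header location

// Exports an inference-only copy of a trained model into `model_data`.
mindspore::Status ExportToBuffer(const mindspore::Model &model,
                                 mindspore::Buffer *model_data) {
  return mindspore::Serialization::ExportModel(model, mindspore::ModelType::kMindIR,
                                               model_data, mindspore::kNoQuant,
                                               /*export_inference_only=*/true);
}
```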
 - 
static inline Status ExportModel(const Model &model, ModelType model_type, const std::string &model_file, QuantizationType quantization_type = kNoQuant, bool export_inference_only = true, std::vector<std::string> output_tensor_name = {})
- Exports a training model to a file (see the usage sketch below). - Parameters
- model – [in] The model data. 
- model_type – [in] The model file type. 
- model_file – [in] The path of the exported model file. 
- quantization_type – [in] The quantization type. 
- export_inference_only – [in] Whether to export an inference-only model. 
- output_tensor_name – [in] The names of the output tensors of the exported inference model; empty by default, which exports the complete inference model. 
 
- Returns
- Status. 
 
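A sketch of exporting to a file while restricting the exported graph to one named output tensor; the tensor name "logits" is a placeholder, not an API requirement.

```cpp
#include <string>
#include <vector>

#include "include/api/serialization.h"  // assumed header location

// Exports the trained model as an inference-only MindIR file at `model_file`,
// keeping only the graph needed to produce the named output tensor.
mindspore::Status ExportTrimmed(const mindspore::Model &model,
                                const std::string &model_file) {
  std::vector<std::string> output_tensor_name = {"logits"};  // hypothetical tensor name
  return mindspore::Serialization::ExportModel(model, mindspore::ModelType::kMindIR,
                                               model_file, mindspore::kNoQuant,
                                               /*export_inference_only=*/true,
                                               output_tensor_name);
}
```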
 - 
static inline Status ExportWeightsCollaborateWithMicro(const Model &model, ModelType model_type, const std::string &weight_file, bool is_inference = true, bool enable_fp16 = false, const std::vector<std::string> &changeable_weights_name = {})
- Experimental feature. Exports the model's weights, which can be used in micro only (see the usage sketch below). - Parameters
- model – [in] The model data. 
- model_type – [in] The model file type. 
- weight_file – [in] The path of the exported weight file. 
- is_inference – [in] Whether to export weights from an inference model. Currently, only true is supported. 
- enable_fp16 – [in] Whether floating-point weights are saved in float16 format. 
- changeable_weights_name – [in] The names of the weight tensors whose shapes are changeable. 
 
- Returns
- Status. 
 
 
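A sketch of the experimental weight export for micro; the weight file name and the changeable-weight tensor name are placeholders chosen for illustration.

```cpp
#include <string>
#include <vector>

#include "include/api/serialization.h"  // assumed header location

// Exports the weights of an inference model for use with micro code generation.
mindspore::Status ExportMicroWeights(const mindspore::Model &model) {
  std::vector<std::string> changeable_weights_name = {"embedding.weight"};  // hypothetical
  return mindspore::Serialization::ExportWeightsCollaborateWithMicro(
      model, mindspore::ModelType::kMindIR, "net.bin",
      /*is_inference=*/true, /*enable_fp16=*/false, changeable_weights_name);
}
```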