Class ModelParallelRunner
Defined in File model_parallel_runner.h
Class Documentation
-
class ModelParallelRunner
The ModelParallelRunner class defines a MindSpore ModelParallelRunner, which facilitates Model management.
Public Functions
-
Status Init(const std::string &model_path, const std::shared_ptr<RunnerConfig> &runner_config = nullptr)
Build a model parallel runner from a model path so that it can run on a device. Supports importing the ms model (exported by the converter_lite tool) and the mindir model (exported by MindSpore or by the converter_lite tool). Support for the ms model will be removed in future iterations, and it is recommended to use the mindir model for inference. When using the ms model for inference, keep the model file suffix as .ms, otherwise the model will not be recognized.
- Parameters
model_path – [in] Define the model path.
runner_config – [in] Define the config used to store options during model pool init.
- Returns
Status.
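Example (a minimal sketch, assuming the standard MindSpore Lite headers and a local model file named model.mindir; the device info and worker count are illustrative, not prescribed by this API):
#include <memory>
#include <iostream>
#include "include/api/context.h"
#include "include/api/model_parallel_runner.h"

int main() {
  // Configure a CPU context for the runner workers.
  auto context = std::make_shared<mindspore::Context>();
  context->MutableDeviceInfo().push_back(std::make_shared<mindspore::CPUDeviceInfo>());

  // RunnerConfig stores options used during model pool init.
  auto runner_config = std::make_shared<mindspore::RunnerConfig>();
  runner_config->SetContext(context);
  runner_config->SetWorkersNum(2);  // illustrative worker count

  mindspore::ModelParallelRunner runner;
  auto status = runner.Init("model.mindir", runner_config);
  if (status != mindspore::kSuccess) {
    std::cerr << "ModelParallelRunner Init failed" << std::endl;
    return -1;
  }
  return 0;
}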
-
Status Init(const void *model_data, size_t data_size, const std::shared_ptr<RunnerConfig> &runner_config = nullptr)
Build a model parallel runner from a model buffer so that it can run on a device. This interface only supports passing in mindir model file data.
- Parameters
model_data – [in] Define the buffer read from a model file.
data_size – [in] Define bytes number of model buffer.
runner_config – [in] Define the config used to store options during model pool init.
- Returns
Status.
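Example (a minimal sketch; ReadFile is a hypothetical helper that loads the mindir file into memory, and the RunnerConfig is assumed to be set up as in the previous example):
#include <fstream>
#include <memory>
#include <string>
#include <vector>
#include "include/api/model_parallel_runner.h"

// Hypothetical helper: read a whole file into a byte buffer.
static std::vector<char> ReadFile(const std::string &path) {
  std::ifstream ifs(path, std::ios::binary | std::ios::ate);
  std::vector<char> buffer(static_cast<size_t>(ifs.tellg()));
  ifs.seekg(0);
  ifs.read(buffer.data(), buffer.size());
  return buffer;
}

int main() {
  auto model_buf = ReadFile("model.mindir");
  auto runner_config = std::make_shared<mindspore::RunnerConfig>();  // configure as needed

  mindspore::ModelParallelRunner runner;
  auto status = runner.Init(model_buf.data(), model_buf.size(), runner_config);
  return status == mindspore::kSuccess ? 0 : -1;
}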
-
std::vector<MSTensor> GetInputs()
Obtains information about all input tensors of the model.
- Returns
The vector that includes all input tensors.
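Example (a minimal sketch, assuming runner has been initialized as in the Init examples above):
auto inputs = runner.GetInputs();
for (auto &tensor : inputs) {
  // Print each input tensor's name and element count.
  std::cout << tensor.Name() << ": " << tensor.ElementNum() << " elements" << std::endl;
}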
-
std::vector<MSTensor> GetOutputs()
Obtains information about all output tensors of the model.
- Returns
The vector that includes all output tensors.
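Example (a minimal sketch, assuming runner has been initialized as above):
auto output_tensors = runner.GetOutputs();
for (auto &tensor : output_tensors) {
  // Shape() returns the dimensions of each output tensor.
  std::cout << tensor.Name() << " rank: " << tensor.Shape().size() << std::endl;
}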
-
Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs, const MSKernelCallBack &before = nullptr, const MSKernelCallBack &after = nullptr)
Runs inference with the ModelParallelRunner.
- Parameters
inputs – [in] A vector where model inputs are arranged in sequence.
outputs – [out] A pointer to a vector; the model outputs are filled into the container in sequence.
before – [in] CallBack before predict.
after – [in] CallBack after predict.
- Returns
Status.
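Example (a minimal sketch of one inference call; it assumes runner has been initialized as above and that input data has already been copied into the input tensors, for instance via MSTensor::MutableData):
auto inputs = runner.GetInputs();
// Fill input data here, e.g. memcpy host data into inputs[i].MutableData().
std::vector<mindspore::MSTensor> outputs;
auto status = runner.Predict(inputs, &outputs);
if (status != mindspore::kSuccess) {
  std::cerr << "Predict failed" << std::endl;
} else {
  std::cout << "Got " << outputs.size() << " output tensors" << std::endl;
}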