Class Context

Class Documentation

class Context

Context is used to store the environment configuration during execution.

Public Functions

void SetThreadNum(int32_t thread_num)

Set the number of threads at runtime.

Parameters

thread_num[in] the number of threads at runtime.

int32_t GetThreadNum() const

Get the current thread number setting.

Returns

The current thread number setting.
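
A minimal usage sketch of creating a context and configuring the thread count; the header path include/api/context.h follows the MindSpore Lite convention, and the value 2 is illustrative:

    #include <memory>
    #include "include/api/context.h"

    auto context = std::make_shared<mindspore::Context>();
    context->SetThreadNum(2);                   // run with two worker threads
    int32_t threads = context->GetThreadNum();  // returns 2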

void SetGroupInfoFile(std::string group_info_file)

Set the communication group info file path.

Parameters

group_info_file[in] path of the communication group info file used for distributed inference.

std::string GetGroupInfoFile() const

Get the communication group info file path.

Returns

The communication group info file path setting.
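
Continuing with the same context; the file path below is a placeholder for your own group info file:

    context->SetGroupInfoFile("group_info.pb");      // placeholder path
    std::string path = context->GetGroupInfoFile();  // "group_info.pb"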

void SetInterOpParallelNum(int32_t parallel_num)

Set the number of operators that may run in parallel at runtime.

Parameters

parallel_num[in] the number of operators that may run in parallel at runtime.

int32_t GetInterOpParallelNum() const

Get the current operator parallelism setting.

Returns

The current operator parallelism setting.
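
Continuing with the same context; the value 2 is illustrative, and in practice it is typically kept no larger than the thread count set via SetThreadNum:

    context->SetInterOpParallelNum(2);                    // up to two operators run concurrently
    int32_t parallel = context->GetInterOpParallelNum();  // returns 2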

void SetThreadAffinity(int mode)

Set the thread affinity mode for binding runtime threads to CPU cores.

Parameters

mode[in] 0: no affinity; 1: big cores first; 2: little cores first.

int GetThreadAffinityMode() const

Get the thread affinity mode.

Returns

The thread affinity mode. 0: no affinity; 1: big cores first; 2: little cores first.
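
Continuing with the same context, a sketch of the mode-based overload:

    context->SetThreadAffinity(1);                // mode 1: prefer big cores
    int mode = context->GetThreadAffinityMode();  // returns 1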

void SetThreadAffinity(const std::vector<int> &core_list)

Bind runtime threads to the specified list of CPU cores.

Note

If both core_list and mode are set through SetThreadAffinity, core_list takes effect and mode is ignored.

Parameters

core_list[in] a vector of CPU core IDs to bind threads to.

std::vector<int32_t> GetThreadAffinityCoreList() const

Get the list of CPU cores that threads are bound to.

Returns

core_list: a vector of CPU core IDs.
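
A sketch of the precedence rule from the note above; core IDs 0 and 1 are illustrative:

    context->SetThreadAffinity(2);                       // mode 2: little cores first
    context->SetThreadAffinity(std::vector<int>{0, 1});  // bind threads to cores 0 and 1
    // the core list takes effect; the mode set above is ignored
    auto cores = context->GetThreadAffinityCoreList();   // {0, 1}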

void SetEnableParallel(bool is_parallel)

Set whether to perform model inference or training in parallel.

Parameters

is_parallel[in] true: run in parallel; false: do not run in parallel.

bool GetEnableParallel() const

Get whether model inference or training is performed in parallel.

Returns

Bool value that indicates whether inference or training runs in parallel.
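
Continuing with the same context:

    context->SetEnableParallel(true);                 // enable parallel inference/training
    bool is_parallel = context->GetEnableParallel();  // returns true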

void SetBuiltInDelegate(DelegateMode mode)

Set the built-in delegate mode for accessing a third-party AI framework.

Parameters

mode[in] the built-in delegate mode.

DelegateMode GetBuiltInDelegate() const

Get the built-in delegate mode of the third-party AI framework.

Returns

The built-in delegate mode.
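
A sketch assuming DelegateMode exposes an enumerator such as kCoreML; check context.h for the values available in your build:

    context->SetBuiltInDelegate(mindspore::kCoreML);  // kCoreML is an assumed enumerator
    mindspore::DelegateMode delegate_mode = context->GetBuiltInDelegate();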

void set_delegate(const std::shared_ptr<AbstractDelegate> &delegate)

Set a custom delegate for accessing a third-party AI framework.

Parameters

delegate[in] the custom delegate.

std::shared_ptr<AbstractDelegate> get_delegate() const

Get the delegate of the third-party AI framework.

Returns

Pointer to the custom delegate.
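
A sketch that only exercises the setter and getter; a real application would pass a concrete implementation of the AbstractDelegate interface rather than nullptr:

    std::shared_ptr<mindspore::AbstractDelegate> delegate = nullptr;  // stand-in for a real delegate
    context->set_delegate(delegate);
    auto current = context->get_delegate();  // nullptr until a real delegate is set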

void SetMultiModalHW(bool float_mode)

Set whether a quantized model runs as a float model on multiple devices.

Parameters

float_mode[in] true: run as a float model; false: do not run as a float model.

bool GetMultiModalHW() const

Get whether the quantized model runs as a float model.

Returns

Bool value that indicates whether the model runs as a float model.
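
Continuing with the same context:

    context->SetMultiModalHW(true);                // run the quantized model as a float model
    bool float_mode = context->GetMultiModalHW();  // returns true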

std::vector<std::shared_ptr<DeviceInfoContext>> &MutableDeviceInfo()

Get a mutable reference of DeviceInfoContext vector in this context. Only MindSpore Lite supports heterogeneous scenarios with multiple members in the vector.

Returns

Mutable reference of DeviceInfoContext vector in this context.
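
A sketch of the typical setup: push a device description into the mutable vector before building a model. CPUDeviceInfo is one of the DeviceInfoContext subclasses provided by MindSpore Lite; SetEnableFP16(false) here is illustrative:

    auto cpu_info = std::make_shared<mindspore::CPUDeviceInfo>();
    cpu_info->SetEnableFP16(false);                    // keep FP32 kernels
    context->MutableDeviceInfo().push_back(cpu_info);  // the context now targets the CPU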