Class Context

Class Documentation

class Context

Context is used to store environment variables during execution.

Public Functions

void SetThreadNum(int32_t thread_num)

Set the number of threads at runtime. Only valid for Lite.

Parameters

thread_num[in] the number of threads at runtime.

int32_t GetThreadNum() const

Get the current thread number setting. Only valid for Lite.

Returns

The current thread number setting.

void SetInterOpParallelNum(int32_t parallel_num)

Set the number of operators that run in parallel at runtime. Only valid for Lite.

Parameters

parallel_num[in] the number of operators that run in parallel at runtime.

int32_t GetInterOpParallelNum() const

Get the current setting for the number of operators that run in parallel. Only valid for Lite.

Returns

The current setting for the number of operators that run in parallel.
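The two setters above are plain configuration knobs that the runtime reads later. The following sketch uses a hypothetical MockContext (not the real MindSpore class) to illustrate the documented set/get behavior:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical mock of the two documented parallelism knobs; the real
// Context class is provided by the MindSpore Lite headers.
class MockContext {
 public:
  // Number of threads the runtime may use.
  void SetThreadNum(int32_t thread_num) { thread_num_ = thread_num; }
  int32_t GetThreadNum() const { return thread_num_; }

  // Number of operators that may run in parallel.
  void SetInterOpParallelNum(int32_t parallel_num) { parallel_num_ = parallel_num; }
  int32_t GetInterOpParallelNum() const { return parallel_num_; }

 private:
  int32_t thread_num_ = 0;
  int32_t parallel_num_ = 0;
};
```

In the real API these values are consumed when the model is built, so they should be set on the context before it is passed to the model.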

void SetThreadAffinity(int mode)

Set the thread affinity to CPU cores. Only valid for Lite.

Parameters

mode[in] 0: no affinity, 1: big cores first, 2: little cores first.

int GetThreadAffinityMode() const

Get the thread affinity of CPU cores. Only valid for Lite.

Returns

Thread affinity to CPU cores. 0: no affinity, 1: big cores first, 2: little cores first.

void SetThreadAffinity(const std::vector<int> &core_list)

Bind threads to a specified list of CPU cores. Only valid for Lite.

Note

If both core_list and mode are set via SetThreadAffinity, core_list takes effect and mode is ignored.

Parameters

core_list[in] a vector of CPU core indices to bind threads to.

std::vector<int32_t> GetThreadAffinityCoreList() const

Get the list of CPU cores that threads are bound to. Only valid for Lite.

Returns

core_list: a vector of CPU core indices.
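SetThreadAffinity is overloaded on mode and core_list, and the note above says the core list wins when both are set. The sketch below models that precedence with a hypothetical MockContext (not the real MindSpore class):

```cpp
#include <cassert>
#include <vector>

// Hypothetical mock of the documented affinity interface; the real
// Context class is provided by the MindSpore Lite headers.
class MockContext {
 public:
  // Overload 1: affinity by mode.
  // 0: no affinity, 1: big cores first, 2: little cores first.
  void SetThreadAffinity(int mode) { mode_ = mode; }

  // Overload 2: affinity by an explicit list of CPU core indices.
  void SetThreadAffinity(const std::vector<int>& core_list) { core_list_ = core_list; }

  int GetThreadAffinityMode() const { return mode_; }
  std::vector<int> GetThreadAffinityCoreList() const { return core_list_; }

  // Per the documented note: when both are set, the core list takes effect.
  bool CoreListTakesEffect() const { return !core_list_.empty(); }

 private:
  int mode_ = 0;
  std::vector<int> core_list_;
};
```

Passing an int selects the mode overload and passing a std::vector<int> selects the core-list overload, so a caller that sets both ends up with the core list in effect.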

void SetEnableParallel(bool is_parallel)

Set whether to perform model inference or training in parallel. Only valid for Lite.

Parameters

is_parallel[in] true: run in parallel; false: do not run in parallel.

bool GetEnableParallel() const

Get whether model inference or training is performed in parallel. Only valid for Lite.

Returns

Boolean value indicating whether inference or training runs in parallel.

void SetDelegate(const std::shared_ptr<Delegate> &delegate)

Set a Delegate to access a third-party AI framework. Only valid for Lite.

Parameters

delegate[in] the custom delegate.

std::shared_ptr<Delegate> GetDelegate() const

Get the delegate of the third-party AI framework. Only valid for Lite.

Returns

Pointer to the custom delegate.

void SetMultiModalHW(bool float_mode)

Set whether a quantized model runs as a float model in a multi-device scenario.

Parameters

float_mode[in] true: run as a float model; false: do not run as a float model.

bool GetMultiModalHW() const

Get whether the quantized model runs as a float model.

Returns

Boolean value indicating whether the model runs as a float model.

std::vector<std::shared_ptr<DeviceInfoContext>> &MutableDeviceInfo()

Get a mutable reference of DeviceInfoContext vector in this context. Only MindSpore Lite supports heterogeneous scenarios with multiple members in the vector.

Returns

Mutable reference of DeviceInfoContext vector in this context.
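Because MutableDeviceInfo returns a mutable reference, callers configure devices by appending to the returned vector rather than through a setter. The sketch below uses hypothetical stand-ins for the DeviceInfoContext hierarchy (the real classes live in the MindSpore Lite headers):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical stand-ins for the MindSpore device-info hierarchy.
struct DeviceInfoContext {
  virtual ~DeviceInfoContext() = default;
};
struct CPUDeviceInfo : DeviceInfoContext {};
struct GPUDeviceInfo : DeviceInfoContext {};

// Hypothetical mock of the documented Context accessor.
class MockContext {
 public:
  // Returning a mutable reference lets callers append device descriptors
  // directly, mirroring the documented MutableDeviceInfo().
  std::vector<std::shared_ptr<DeviceInfoContext>>& MutableDeviceInfo() {
    return device_info_;
  }

 private:
  std::vector<std::shared_ptr<DeviceInfoContext>> device_info_;
};
```

In a heterogeneous Lite setup, appending more than one device descriptor expresses device priority by vector order; other backends use only the first member.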