Environment Variables

MindSpore environment variables are described in the following table. Brief, illustrative snippets showing how the most common groups of variables can be set are given after the table.

| Environment Variable | Module | Function | Type | Value Range | Configuration Relationship | Mandatory or Not |
| --- | --- | --- | --- | --- | --- | --- |
| MS_ENABLE_CACHE | MindData | Determines whether to enable the cache function for datasets during data processing, to accelerate dataset reading and augmentation processing. | String | TRUE: enables the cache function during data processing.<br>FALSE: disables the cache function during data processing. | Used together with MS_CACHE_HOST and MS_CACHE_PORT. | Optional |
| MS_CACHE_HOST | MindData | Specifies the IP address of the host where the cache server is located when the cache function is enabled. | String | IP address of the host where the cache server is located. | Used together with MS_ENABLE_CACHE=TRUE and MS_CACHE_PORT. | Optional |
| MS_CACHE_PORT | MindData | Specifies the port number of the cache server when the cache function is enabled. | String | Port number of the cache server on its host. | Used together with MS_ENABLE_CACHE=TRUE and MS_CACHE_HOST. | Optional |
| PROFILING_MODE | MindData | Determines whether to enable dataset profiling for performance analysis. Used together with MindInsight, which can display the time consumed in each phase. | String | true: enables the profiling function.<br>false: disables the profiling function. | Used together with MINDDATA_PROFILING_DIR. | Optional |
| MINDDATA_PROFILING_DIR | MindData | Specifies the system path for storing the dataset profiling result. | String | System path. A relative path is supported. | Used together with PROFILING_MODE=true. | Optional |
| DATASET_ENABLE_NUMA | MindData | Determines whether to enable the NUMA binding feature. In most distributed scenarios, this configuration improves performance. | String | True: enables the NUMA binding feature. | Requires libnuma.so to be available on the host. | Optional |
| OPTIMIZE | MindData | Determines whether to optimize the dataset pipeline tree during data processing, which can improve data processing efficiency in operator fusion scenarios. | String | true: enables pipeline tree optimization.<br>false: disables pipeline tree optimization. | None | Optional |
| ENABLE_MS_DEBUGGER | Debugger | Determines whether to enable the Debugger during training. | Boolean | 1: enables the Debugger.<br>0: disables the Debugger. | None | Optional |
| MS_DEBUGGER_PORT | Debugger | Specifies the port for connecting to the MindInsight Debugger server. | Integer | Port number in the range 1 to 65535. | None | Optional |
| MS_DEBUGGER_PARTIAL_MEM | Debugger | Determines whether to enable partial memory overcommitment. (Memory overcommitment is disabled only for nodes selected on the Debugger.) | Boolean | 1: enables memory overcommitment for nodes selected on the Debugger.<br>0: disables memory overcommitment for nodes selected on the Debugger. | None | Optional |
| MS_BUILD_PROCESS_NUM | MindSpore | Specifies the number of parallel operator build processes during Ascend backend compilation. | Integer | The number of parallel operator build processes ranges from 1 to 24. | None | Optional |
| RANK_TABLE_FILE | MindSpore | Specifies the path of the rank table file, which lists the DEVICE_IPs corresponding to the DEVICE_IDs of the Ascend AI Processors to be used. | String | File path, which can be a relative path or an absolute path. | Used together with RANK_SIZE. | Mandatory (when the Ascend AI Processor is used) |
| RANK_SIZE | MindSpore | Specifies the number of Ascend AI Processors to be called during deep learning. | Integer | The number of Ascend AI Processors to be called ranges from 1 to 8. | Used together with RANK_TABLE_FILE. | Mandatory (when the Ascend AI Processor is used) |
| RANK_ID | MindSpore | Specifies the logical ID of the Ascend AI Processor called during deep learning. | Integer | The value ranges from 0 to 7. When multiple servers run concurrently, DEVICE_IDs on different servers may be the same; RANK_ID disambiguates them (RANK_ID = SERVER_ID * DEVICE_NUM + DEVICE_ID). | None | Optional |
| MS_SUBMODULE_LOG_v | MindSpore | Specifies the log level of each MindSpore submodule. For details about the function and usage, see the MS_SUBMODULE_LOG_v description in the MindSpore logging documentation. | Dict {String: Integer...} | LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR.<br>SubModule: COMMON, MD, DEBUG, DEVICE, IR... | None | Optional |
| OPTION_PROTO_LIB_PATH | MindSpore | Specifies the path of the Protobuf (PROTO) dependency library. | String | File path, which can be a relative path or an absolute path. | None | Optional |
| MS_RDR_ENABLE | MindSpore | Determines whether to enable the running data recorder (RDR). If a running exception occurs in MindSpore, the pre-recorded data is automatically exported to assist in locating the cause of the exception. | Integer | 1: enables RDR.<br>0: disables RDR. | Used together with MS_RDR_PATH. | Optional |
| MS_RDR_PATH | MindSpore | Specifies the system path for storing the data recorded by the running data recorder (RDR). | String | Directory path, which should be an absolute path. | Used together with MS_RDR_ENABLE=1. | Optional |
| GE_USE_STATIC_MEMORY | GraphEngine | When a network model has many layers, the intermediate computing data of feature maps may exceed 25 GB, for example on the BERT24 network. In the multi-device scenario, set this variable to 1 to use static memory allocation mode and ensure efficient memory collaboration between devices. For other networks, dynamic memory allocation mode is used by default.<br>In static memory allocation mode, 31 GB is allocated by default; the exact amount is determined by the sum of graph_memory_max_size and variable_memory_max_size. In dynamic memory allocation mode, memory is allocated within the sum of graph_memory_max_size and variable_memory_max_size. | Integer | 1: static memory allocation mode.<br>0: dynamic memory allocation mode. | None | Optional |
| DUMP_GE_GRAPH | GraphEngine | Outputs the graph description information of each phase in the entire process to files. This environment variable controls the content of the dumped graphs. | Integer | 1: full dump.<br>2: basic dump without data such as weights.<br>3: simplified dump with only node relationships displayed. | None | Optional |
| DUMP_GRAPH_LEVEL | GraphEngine | Outputs the graph description information of each phase in the entire process to files. This environment variable controls the number of dumped graphs. | Integer | 1: dumps all graphs.<br>2: dumps all graphs except subgraphs.<br>3: dumps the last generated graph. | None | Optional |