mindspore.set_context

mindspore.set_context(**kwargs)

Set context for the running environment. This interface will be deprecated in future versions; its parameter-related functionality will be provided through new APIs.

Parameters
  • mode (int) – GRAPH_MODE(0) or PYNATIVE_MODE(1). Default PYNATIVE_MODE .

  • device_id (int) – ID of the target device. Default 0 . This parameter will be deprecated and removed in future versions. Please use the api mindspore.set_device() instead.

  • device_target (str) – The target device to run, support "Ascend", "GPU", and "CPU". This parameter will be deprecated and removed in future versions. Please use the api mindspore.set_device() instead.

  • deterministic (str) – Deterministic computation of operators. Default "OFF" . This parameter will be deprecated and removed in future versions. Please use the api mindspore.set_deterministic() instead.

  • max_call_depth (int) – The maximum depth of function call. Default 1000 . This parameter will be deprecated and removed in a future version. Please use the api mindspore.set_recursion_limit() instead.

  • variable_memory_max_size (str) – This parameter will be deprecated and removed in future versions. Please use the api mindspore.runtime.set_memory() instead.

  • mempool_block_size (str) – Set the size of the memory pool block for devices. Default "1GB" . This parameter will be deprecated and removed in future versions. Please use the api mindspore.runtime.set_memory() instead.

  • memory_optimize_level (str) – The memory optimization level. Default "O0". This parameter will be deprecated and removed in future versions. Please use the api mindspore.runtime.set_memory() instead.

  • max_device_memory (str) – Set the maximum memory available for devices. Default "1024GB" . This parameter will be deprecated and removed in future versions. Please use the api mindspore.runtime.set_memory() instead.

  • pynative_synchronize (bool) – Whether to enable synchronous execution of the device in PyNative mode. Default False . This parameter will be deprecated and removed in future versions. Please use the api mindspore.runtime.launch_blocking() instead.

  • compile_cache_path (str) – Path to save the compile cache. Default ".". This parameter will be deprecated and removed in a future version. Please use the environment variable MS_COMPILER_CACHE_PATH instead.

  • inter_op_parallel_num (int) – The number of threads used for inter-operator parallelism. Default 0 . This parameter will be deprecated and removed in future versions. Please use the api mindspore.runtime.dispatch_threads_num() instead.

  • memory_offload (str) – Whether to enable the memory offload function. Default "OFF" . This parameter will be deprecated and removed in future versions. Please use the api mindspore.nn.Cell.offload() instead.

  • disable_format_transform (bool) – Whether to disable the automatic format transform function from NCHW to NHWC. Default False . This parameter will be deprecated and removed in future versions. Please use the related parameter of mindspore.jit() instead.

  • jit_syntax_level (int) – Set the JIT syntax support level. Default LAX . This parameter will be deprecated and removed in future versions. Please use the related parameter of mindspore.jit() instead.

  • jit_config (dict) – Set the global jit config for compilation. This parameter will be deprecated and removed in future versions. Please use the related parameter of mindspore.jit() instead.

  • exec_order (str) – The sorting method for operator execution. This parameter will be deprecated and removed in future versions. Please use the related parameter of mindspore.jit() instead.

  • op_timeout (int) – Set the maximum duration of executing an operator in seconds. Default 900 . This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_debug.execute_timeout() instead.

  • aoe_tune_mode (str) – AOE tuning mode. This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_tuning.aoe_tune_mode() instead.

  • aoe_config (dict) – AOE-specific parameters. This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_tuning.aoe_job_type() instead.

  • runtime_num_threads (int) – The number of threads in the CPU-kernel thread pool used at runtime. Default 30 . This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.cpu.op_tuning.threads_num() instead.

  • save_graphs (bool or int) – Whether to save intermediate compilation graphs. Default 0 . This parameter will be deprecated and removed in a future version. Please use the environment variable MS_DEV_SAVE_GRAPHS instead.

  • save_graphs_path (str) – Path to save graphs. Default ".". This parameter will be deprecated and removed in a future version. Please use the environment variable MS_DEV_SAVE_GRAPHS_PATH instead.

  • precompile_only (bool) – Whether to only precompile the network. Default False . This parameter will be deprecated and removed in a future version. Please use the environment variable MS_DEV_PRECOMPILE_ONLY instead.

  • enable_compile_cache (bool) – Whether to save or load the compiled cache of the graph. Default False . This is an experimental prototype that is subject to change and/or deletion. This parameter will be deprecated and removed in a future version. Please use the environment variable MS_COMPILER_CACHE_ENABLE instead.

  • ascend_config (dict) –

    Set the parameters specific to Ascend hardware platform.

    • precision_mode (str): Mixed precision mode setting. Default "force_fp16" . This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_precision.precision_mode() instead.

    • jit_compile (bool): Whether to select online compilation. This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_tuning.op_compile() instead.

    • matmul_allow_hf32 (bool): Whether to convert FP32 to HF32 for Matmul operators. Default False. This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_precision.matmul_allow_hf32() instead.

    • conv_allow_hf32 (bool): Whether to convert FP32 to HF32 for Conv operators. Default True. This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_precision.conv_allow_hf32() instead.

    • op_precision_mode (str): Path to config file of op precision mode. This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_precision.op_precision_mode() instead.

    • op_debug_option (str): Enable debugging options for Ascend operators. This parameter will be deprecated and removed in future versions. Please use the api mindspore.device_context.ascend.op_debug.debug_option() instead.

    • ge_options (dict): Set options for CANN. This parameter will be deprecated and removed in future versions. Please use the related parameter of mindspore.jit() instead.

    • atomic_clean_policy (int): The policy for cleaning memory occupied by atomic operators in the network. Default 1 . 0 means the memory is cleaned centrally; 1 means it is not. This parameter will be deprecated and removed in future versions. Please use the related parameter of mindspore.jit() instead.

    • exception_dump (str): Enable Ascend operator exception dump. Default "2" . This parameter has been deprecated and removed. Please use the api mindspore.device_context.ascend.op_debug.aclinit_config() instead.

    • host_scheduling_max_threshold(int): The max threshold to control whether the dynamic shape process is used when run the static graph. Default 0 . This parameter will be deprecated and removed in future versions. Please use the related parameter of mindspore.jit() instead.

    • parallel_speed_up_json_path(Union[str, None]): The path to the parallel speed up json file. This parameter will be deprecated and removed in future versions. Please use the api mindspore.parallel.auto_parallel.AutoParallel.transformer_opt() instead.

    • hccl_watchdog (bool): Enable a thread to monitor the failure of collective communication. Default True .

  • gpu_config (dict) –

    Set the parameters specific to the GPU hardware platform. It is not set by default.

  • print_file_path (str) – This parameter will be deprecated and removed in future versions.

  • env_config_path (str) – This parameter will be deprecated and removed in future versions.

  • debug_level (int) – This parameter will be deprecated and removed in future versions.

  • reserve_class_name_in_scope (bool) – This parameter will be deprecated and removed in future versions.

  • check_bprop (bool) – This parameter will be deprecated and removed in future versions.

  • enable_reduce_precision (bool) – This parameter will be deprecated and removed in future versions.

  • grad_for_scalar (bool) – This parameter will be deprecated and removed in future versions.

  • support_binary (bool) – Whether to support running .pyc or .so files in graph mode.
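Since almost every parameter above is deprecated in favor of a newer API or an environment variable, migration can be made mechanical with a lookup table. The sketch below is illustrative only: the helper name and dictionary are not part of MindSpore; the mapping entries are taken directly from the deprecation notes in the list above.

```python
# Illustrative only: a hypothetical lookup table mapping a few deprecated
# set_context keywords to the replacements documented above. This helper
# is not part of MindSpore.
DEPRECATED_TO_REPLACEMENT = {
    "device_id": "mindspore.set_device()",
    "device_target": "mindspore.set_device()",
    "deterministic": "mindspore.set_deterministic()",
    "max_call_depth": "mindspore.set_recursion_limit()",
    "max_device_memory": "mindspore.runtime.set_memory()",
    "mempool_block_size": "mindspore.runtime.set_memory()",
    "pynative_synchronize": "mindspore.runtime.launch_blocking()",
    "inter_op_parallel_num": "mindspore.runtime.dispatch_threads_num()",
    "save_graphs": "environment variable MS_DEV_SAVE_GRAPHS",
    "enable_compile_cache": "environment variable MS_COMPILER_CACHE_ENABLE",
}

def migration_hint(kwarg: str) -> str:
    """Return the documented replacement for a deprecated set_context keyword."""
    return DEPRECATED_TO_REPLACEMENT.get(kwarg, "no replacement listed on this page")

print(migration_hint("device_id"))  # -> mindspore.set_device()
```

A helper like this can be run over an existing `set_context(**kwargs)` call site to produce a per-keyword migration checklist.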

Examples

>>> import mindspore as ms
>>> ms.set_context(mode=ms.PYNATIVE_MODE)
>>> ms.set_context(precompile_only=True)
>>> ms.set_context(device_target="Ascend")
>>> ms.set_context(device_id=0)
>>> ms.set_context(save_graphs=True, save_graphs_path="./model.ms")
>>> ms.set_context(enable_reduce_precision=True)
>>> ms.set_context(reserve_class_name_in_scope=True)
>>> ms.set_context(variable_memory_max_size="6GB")
>>> ms.set_context(aoe_tune_mode="online")
>>> ms.set_context(aoe_config={"job_type": "2"})
>>> ms.set_context(check_bprop=True)
>>> ms.set_context(max_device_memory="3.5GB")
>>> ms.set_context(mempool_block_size="1GB")
>>> ms.set_context(print_file_path="print.pb")
>>> ms.set_context(max_call_depth=80)
>>> ms.set_context(env_config_path="./env_config.json")
>>> ms.set_context(grad_for_scalar=True)
>>> ms.set_context(enable_compile_cache=True, compile_cache_path="./cache.ms")
>>> ms.set_context(pynative_synchronize=True)
>>> ms.set_context(runtime_num_threads=10)
>>> ms.set_context(inter_op_parallel_num=4)
>>> ms.set_context(disable_format_transform=True)
>>> ms.set_context(memory_optimize_level='O0')
>>> ms.set_context(memory_offload='ON')
>>> ms.set_context(deterministic='ON')
>>> ms.set_context(ascend_config={"precision_mode": "force_fp16", "jit_compile": True,
...                "atomic_clean_policy": 1, "op_precision_mode": "./op_precision_config_file",
...                "op_debug_option": "oom",
...                "ge_options": {"global": {"ge.opSelectImplmode": "high_precision"},
...                               "session": {"ge.exec.atomicCleanPolicy": "0"}}})
>>> ms.set_context(jit_syntax_level=ms.STRICT)
>>> ms.set_context(debug_level=ms.context.DEBUG)
>>> ms.set_context(gpu_config={"conv_fprop_algo": "performance", "conv_allow_tf32": True,
...                "matmul_allow_tf32": True})
>>> ms.set_context(jit_config={"jit_level": "O0"})
>>> ms.set_context(exec_order="bfs")
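As a rough migration sketch, the replacement calls recommended throughout the parameter list can be combined as below. The import guard, the choice of "CPU" as the target, and the `max_size` keyword for mindspore.runtime.set_memory() are assumptions on my part; verify them against the current API reference before relying on this.

```python
# Rough migration sketch using the replacement APIs named on this page.
# mindspore may not be installed in every environment, so the import is
# guarded; the keyword names below are assumptions, not verified signatures.
try:
    import mindspore as ms
    ms.set_device("CPU", 0)                   # replaces device_target / device_id
    ms.set_deterministic(True)                # replaces deterministic="ON"
    ms.set_recursion_limit(1000)              # replaces max_call_depth
    ms.runtime.set_memory(max_size="3.5GB")   # replaces max_device_memory (keyword assumed)
    migrated = True
except ImportError:
    migrated = False
print("replacement APIs applied:", migrated)
```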