mindspore.communication.comm_func.broadcast

mindspore.communication.comm_func.broadcast(tensor, src=0, group=GlobalComm.WORLD_COMM_GROUP)[source]

Broadcasts the tensor to the whole group.

Note

  • The tensors must have the same shape and format on all processes of the collective.

  • Only PyNative mode is supported; Graph mode is not currently supported.

Parameters
  • tensor (Tensor) – The tensor to be broadcast. The shape of the tensor is \((x_1, x_2, ..., x_R)\).

  • src (int, optional) – Specifies the rank (global rank) of the process that broadcasts the tensor; only process src will broadcast the tensor. Default: 0.

  • group (str, optional) – The communication group to work on. Default: GlobalComm.WORLD_COMM_GROUP, which means "hccl_world_group" on Ascend and "nccl_world_group" on GPU. (A sketch of broadcasting within a custom group follows this list.)
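A minimal illustrative sketch of the src and group parameters (not part of the official reference): it assumes a 2-device launch and that creating the custom group "group_0" over ranks 0 and 1 with comm.create_group is supported by the chosen backend.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.communication as comm
>>>
>>> comm.init()
>>> # Illustrative custom group covering global ranks 0 and 1.
>>> comm.create_group("group_0", [0, 1])
>>> # Each rank fills its tensor with its own rank id.
>>> data = ms.Tensor(np.full([2, 4], comm.get_rank(), dtype=np.float32))
>>> # Global rank 1 broadcasts its tensor to every rank in "group_0".
>>> out = comm.comm_func.broadcast(tensor=data, src=1, group="group_0")
>>> print(out)
[[1. 1. 1. 1.]
 [1. 1. 1. 1.]]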

Returns

Tensor, with the same shape as the input tensor, \((x_1, x_2, ..., x_R)\).

Raises
  • TypeError – If src is not an integer or group is not a string.

  • RuntimeError – If device target is invalid, or backend is invalid, or distributed initialization fails.

Supported Platforms:

Ascend GPU

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration-file dependencies. Please see the msrun startup tutorial for more details.

This example should be run with 2 devices.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.communication as comm
>>>
>>> # Launch 2 processes.
>>>
>>> comm.init()
>>> data = ms.Tensor(np.arange(8).reshape([2, 4]).astype(np.float32))
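>>> # Rank 0's tensor is broadcast; every rank receives the same result.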
>>> out = comm.comm_func.broadcast(tensor=data, src=0)
>>> print(out)
[[0. 1. 2. 3.]
 [4. 5. 6. 7.]]
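As an additional illustrative sketch (continuing the setup above, assuming the same 2-device launch), the input on non-src processes only needs to match in shape; the returned tensor on every rank holds the src rank's data regardless of the non-src input's contents:

>>> # Rank 0 holds the payload; other ranks pass a same-shaped placeholder.
>>> if comm.get_rank() == 0:
...     data = ms.Tensor(np.arange(8).reshape([2, 4]).astype(np.float32))
... else:
...     data = ms.Tensor(np.zeros([2, 4]).astype(np.float32))
...
>>> out = comm.comm_func.broadcast(tensor=data, src=0)
>>> print(out)
[[0. 1. 2. 3.]
 [4. 5. 6. 7.]]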