mindspore.communication

Collective communication interface.

Note that the APIs in the following list require the communication environment variables to be set in advance.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration file dependencies. Please see msrun startup for more details.

mindspore.communication.GlobalComm

World communication information.

mindspore.communication.init

Initialize the distributed backends required by communication services, e.g. HCCL, NCCL, or MCCL.

mindspore.communication.release

Release distributed resources.
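
A minimal sketch of the typical initialize-and-release lifecycle, assuming the job is launched with msrun (or an equivalent launcher) so the required environment variables are already set; the device target value is illustrative:

    import mindspore as ms
    from mindspore.communication import init, get_rank, get_group_size, release

    # Select the device target before initializing the communication backend.
    ms.set_context(device_target="Ascend")  # or "GPU" / "CPU"

    # init() selects the backend that matches the device target (HCCL / NCCL / MCCL).
    init()
    print("rank:", get_rank(), "of", get_group_size())

    # Release communication resources once the job is finished.
    release()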

mindspore.communication.create_group

Create a user collective communication group.

mindspore.communication.destroy_group

Destroy the user collective communication group.
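
A hedged sketch of creating and destroying a user-defined group; the group name "sub_group_0" and the rank list are arbitrary examples, and the communication backend must already be initialized:

    from mindspore.communication import init, create_group, destroy_group, get_group_size

    init()
    # Build a user-defined group containing world ranks 0 and 1 (arbitrary choice).
    group = "sub_group_0"
    create_group(group, [0, 1])
    print("group size:", get_group_size(group))
    destroy_group(group)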

mindspore.communication.get_comm_name

Get the communicator name of the specified collective communication group.

mindspore.communication.get_group_size

Get the rank size of the specified collective communication group.

mindspore.communication.get_group_rank_from_world_rank

Get the rank ID in the specified user communication group corresponding to the rank ID in the world communication group.

mindspore.communication.get_local_rank

Get the local rank ID of the current device in the specified collective communication group.

mindspore.communication.get_local_rank_size

Get the local rank size of the specified collective communication group.

mindspore.communication.get_process_group_ranks

Get the ranks of the specified group and return the process ranks in the communication group as a list.

mindspore.communication.get_rank

Get the rank ID for the current device in the specified collective communication group.

mindspore.communication.get_world_rank_from_group_rank

Get the rank ID in the world communication group corresponding to the rank ID in the specified user communication group.
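
A short sketch of mapping between world ranks and group ranks, assuming a user group "sub_group_0" built from world ranks 2 and 3 and that this snippet runs on the ranks belonging to that group:

    from mindspore.communication import (init, create_group,
                                         get_group_rank_from_world_rank,
                                         get_world_rank_from_group_rank)

    init()
    group = "sub_group_0"
    create_group(group, [2, 3])

    # World rank 3 is the second member of the group, so its group rank is 1.
    print(get_group_rank_from_world_rank(3, group))
    # Conversely, group rank 1 maps back to world rank 3.
    print(get_world_rank_from_group_rank(group, 1))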

mindspore.communication.HCCL_WORLD_COMM_GROUP

The string constant used as the name of the HCCL world communication group.

mindspore.communication.NCCL_WORLD_COMM_GROUP

The string constant used as the name of the NCCL world communication group.

mindspore.communication.MCCL_WORLD_COMM_GROUP

The string constant used as the name of the MCCL world communication group.

mindspore.communication.comm_func

Collective communication functional interfaces.

mindspore.communication.comm_func.all_gather_into_tensor

Gathers tensors from the specified communication group and returns the all-gathered tensor.

mindspore.communication.comm_func.all_reduce

Reduces tensors across all devices in such a way that all devices get the same final result, and returns the all-reduced tensor.
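
A hedged sketch of a functional all-reduce; it assumes the job was launched with at least two devices and relies on the default sum reduction over the world group:

    import numpy as np
    from mindspore import Tensor
    from mindspore.communication import init
    from mindspore.communication.comm_func import all_reduce

    init()
    # Every rank contributes a tensor of ones; after the sum reduction each
    # rank holds a tensor whose elements equal the number of participating devices.
    x = Tensor(np.ones([2, 8]).astype(np.float32))
    out = all_reduce(x)
    print(out)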

mindspore.communication.comm_func.all_to_all_single_with_output_shape

Slices the input tensor based on the user-specified slice sizes, sends the slices to the other devices, receives the sliced chunks from the other devices, and merges them into an output tensor.

mindspore.communication.comm_func.all_to_all_with_output_shape

Scatters and gathers a list of tensors to/from all ranks according to the input/output tensor lists.

mindspore.communication.comm_func.barrier

Synchronizes all processes in the specified group.

mindspore.communication.comm_func.batch_isend_irecv

Sends and receives tensors asynchronously in a batch.

mindspore.communication.comm_func.broadcast

Broadcasts the tensor to the whole group.
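
A small sketch of broadcasting a tensor from rank 0 to the world group; it assumes the backend is initialized on every rank, and the tensor values are illustrative:

    import numpy as np
    from mindspore import Tensor
    from mindspore.communication import init, get_rank
    from mindspore.communication.comm_func import broadcast

    init()
    # Only the source rank's data matters; every other rank receives a copy of it.
    if get_rank() == 0:
        x = Tensor(np.arange(8).reshape([2, 4]).astype(np.float32))
    else:
        x = Tensor(np.zeros([2, 4]).astype(np.float32))
    out = broadcast(x, src=0)
    print(out)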

mindspore.communication.comm_func.gather_into_tensor

Gathers tensors from the specified communication group.

mindspore.communication.comm_func.irecv

Receive tensors from src asynchronously.

mindspore.communication.comm_func.isend

Send tensors to the specified dest_rank asynchronously.

mindspore.communication.comm_func.recv

Receive tensors from src.

mindspore.communication.comm_func.send

Send tensors to the specified dest_rank.

mindspore.communication.comm_func.P2POp

Object used as input to batch_isend_irecv, storing the information of an "isend" or "irecv" operation.
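
A hedged ring-exchange sketch in which each rank sends its buffer to the next rank and receives from the previous one via P2POp objects and batch_isend_irecv; the rank arithmetic and tensor values are illustrative only:

    import numpy as np
    from mindspore import Tensor
    from mindspore.communication import init, get_rank, get_group_size
    from mindspore.communication.comm_func import P2POp, batch_isend_irecv, isend, irecv

    init()
    rank = get_rank()
    size = get_group_size()
    next_rank = (rank + 1) % size
    prev_rank = (rank - 1 + size) % size

    send_buf = Tensor(np.full([2, 2], rank, dtype=np.float32))
    recv_buf = Tensor(np.zeros([2, 2], dtype=np.float32))

    # Queue one send and one receive, then launch them together.
    p2p_op_list = [P2POp(isend, send_buf, next_rank),
                   P2POp(irecv, recv_buf, prev_rank)]
    results = batch_isend_irecv(p2p_op_list)
    # The returned sequence corresponds one-to-one with the queued operations.
    print(results)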

mindspore.communication.comm_func.reduce

Reduces tensors across the processes in the specified communication group, sends the result to the target dst (global rank), and returns the tensor sent to the target process.

mindspore.communication.comm_func.reduce_scatter_tensor

Reduces and scatters tensors from the specified communication group and returns the tensor which is reduced and scattered.
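
A hedged sketch of reduce-scatter over the world group; with N devices and the default sum reduction, an input of shape [N * 4, 8] filled with ones yields, on every rank, an output of shape [4, 8] whose elements equal N (shapes are illustrative):

    import numpy as np
    from mindspore import Tensor
    from mindspore.communication import init, get_group_size
    from mindspore.communication.comm_func import reduce_scatter_tensor

    init()
    n = get_group_size()
    # The first dimension must be divisible by the group size; each rank
    # receives one reduced slice of shape [4, 8].
    x = Tensor(np.ones([n * 4, 8]).astype(np.float32))
    out = reduce_scatter_tensor(x)
    print(out.shape)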

mindspore.communication.comm_func.scatter_tensor

Scatters the tensor evenly across the processes in the specified communication group.