mindspore.communication
Collective communication interface.
Note that the APIs in the following list require the communication environment variables to be preset.
For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method, which has no third-party or configuration-file dependencies; please see the msrun startup documentation for more details. A minimal launch sketch follows this note, and usage sketches follow each interface list below.
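As a minimal, hedged sketch (not part of the original reference list), the script below initializes the communication service and queries the world group. The msrun flags shown in the comment are illustrative and may differ between MindSpore versions.

```python
# demo_init.py -- a minimal sketch, assuming a 4-process job launched with msrun:
#
#   msrun --worker_num=4 --local_worker_num=4 --master_port=8118 demo_init.py
#
# (Flag names follow the msrun startup documentation and may vary across releases.)
from mindspore.communication import init, release, get_rank, get_group_size

init()  # selects the backend (HCCL/NCCL/MCCL) for the configured device target
print(f"world rank {get_rank()} of {get_group_size()} processes")
release()  # release the distributed resources before exiting
```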
World communication information.
Initialize the distributed backends required by communication services, e.g. HCCL/NCCL/MCCL.
Release distributed resources.
Create a user collective communication group.
Destroy the user collective communication group.
Get the communicator name of the specified collective communication group.
Get the rank size of the specified collective communication group.
Get the rank ID in the specified user communication group corresponding to the rank ID in the world communication group.
Get the local rank ID for the current device in the specified collective communication group.
Get the local rank size of the specified collective communication group.
Get the ranks of the specified group, returning the process ranks in the communication group as a list.
Get the rank ID for the current device in the specified collective communication group.
Get the rank ID in the world communication group corresponding to the rank ID in the specified user communication group.
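The sketch below exercises the group-management and rank-translation interfaces listed above. The group name and member ranks are illustrative assumptions, and create_group support depends on the backend (for example, HCCL on Ascend), so treat this as a sketch rather than a drop-in script.

```python
# A sketch of group creation and rank translation, assuming a 4-process msrun job.
from mindspore.communication import (
    init, release, get_rank, get_group_size,
    create_group, destroy_group,
    get_group_rank_from_world_rank, get_world_rank_from_group_rank,
)

init()

group = "even_ranks"   # illustrative group name
members = [0, 2]       # illustrative member ranks

if get_rank() in members:
    create_group(group, members)

    # Translate between world-group and user-group rank numbering.
    group_rank = get_group_rank_from_world_rank(get_rank(), group)
    world_rank = get_world_rank_from_group_rank(group, group_rank)
    print(f"world rank {world_rank} is rank {group_rank} in '{group}' "
          f"(group size {get_group_size(group)})")

    destroy_group(group)

release()
```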
mindspore.communication.comm_func
Collective communication functional interface. A usage sketch follows the list below.
Gathers tensors from the specified communication group and returns the tensor which is all gathered.
Reduces tensors across all devices so that all devices get the same final result, and returns the tensor which is all reduced.
Based on the slice sizes given by the user, the input tensor is sliced and sent to the other devices, and the sliced chunks received from the other devices are merged into an output tensor.
Scatters and gathers a list of tensors to/from all ranks according to the input/output tensor lists.
Synchronizes all processes in the specified group.
Batch send and recv tensors asynchronously.
Broadcasts the tensor to the whole group.
Gathers tensors from the specified communication group.
Receive tensors from src asynchronously.
Send tensors to the specified dest_rank asynchronously.
Receive tensors from src.
Send tensors to the specified dest_rank.
Object for batch_isend_irecv input, to store the information of a single send or receive operation.
Reduces tensors across the processes in the specified communication group, sends the result to the target dst (global rank), and returns the tensor which is sent to the target process.
Reduces and scatters tensors from the specified communication group and returns the tensor which is reduced and scattered.
Scatter tensor evenly across the processes in the specified communication group.
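To make the functional interface concrete, here is a hedged sketch that runs an all_reduce and an all_gather_into_tensor over the default world group. The exact return types (a plain Tensor versus a tensor plus a handle) vary with the MindSpore version and the async_op argument, so check the reference pages for your release.

```python
# A sketch of the functional collectives, assumed to be launched with msrun.
import numpy as np
import mindspore as ms
from mindspore.communication import init, get_rank, get_group_size
from mindspore.communication.comm_func import all_reduce, all_gather_into_tensor
from mindspore.ops import ReduceOp

init()
rank, size = get_rank(), get_group_size()

# Each rank contributes a 2x2 block filled with its own rank ID.
x = ms.Tensor(np.full((2, 2), rank, dtype=np.float32))

# Every rank receives the element-wise sum over all ranks.
summed = all_reduce(x, op=ReduceOp.SUM)

# Every rank receives a (2 * size, 2) tensor concatenating all ranks' blocks.
gathered = all_gather_into_tensor(x)

print(f"rank {rank}: sum={summed.asnumpy()[0, 0]}, gathered shape={gathered.shape}")
```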