mindspore.ops.AlltoAllVC

class mindspore.ops.AlltoAllVC(group=GlobalComm.WORLD_COMM_GROUP, block_size=1, transpose=False)[source]

AlltoAllVC receives the send and receive counts of all ranks through the input parameter send_count_matrix. Unlike AlltoAllV, AlltoAllVC does not need to aggregate the send and receive parameters of all ranks first, and therefore offers better performance.

Note

Only one-dimensional input is supported; flatten the input data into a one-dimensional tensor before calling this operator (see the sketch below).
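
For instance, a multi-dimensional tensor can be flattened with Tensor.reshape before being passed in. A minimal sketch (the tensor x is a placeholder for illustration, not part of this operator's API):

>>> from mindspore import Tensor
>>> # Flatten a 2-D tensor into 1-D so it can be fed to AlltoAllVC.
>>> x = Tensor([[0., 1.], [2., 3.]])
>>> flat = x.reshape(-1)
>>> print(flat.shape)
(4,)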

Parameters
  • group (str, optional) – The communication group to work on. Default: GlobalComm.WORLD_COMM_GROUP, which means "hccl_world_group" in Ascend.

  • block_size (int, optional) – The basic unit of elements by which data is split and gathered according to send_count_matrix. Default: 1.

  • transpose (bool, optional) – Whether send_count_matrix should be transposed before use; this parameter is used in backward (gradient) computation scenarios. Default: False. A construction sketch using these parameters follows this list.
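
A minimal construction sketch with explicit arguments (the values are illustrative assumptions, not requirements; with block_size=2 each count in send_count_matrix is assumed to denote a block of 2 elements):

>>> from mindspore.ops import AlltoAllVC
>>> # Construct the operator with explicit, illustrative arguments.
>>> op = AlltoAllVC(group="hccl_world_group", block_size=2, transpose=False)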

Inputs:
  • input_x (Tensor) - The flattened tensor to scatter. The shape of the tensor is \((x_1)\).

  • send_count_matrix (Union[list[int], Tensor]) - The send and receive counts of all ranks: \(\text{send_count_matrix}[i \times \text{rank_size} + j]\) represents the amount of data rank i sends to rank j, where the basic unit is the byte size of the tensor's dtype. Here, rank_size is the size of the communication group. A worked decoding of this indexing follows the list.
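
For a 2-rank group, the flattened matrix [0, 3, 3, 0] therefore means rank 0 sends 3 units of data to rank 1 and rank 1 sends the same amount to rank 0. A minimal sketch of the indexing (plain Python, independent of the operator):

>>> # Decode a flattened send_count_matrix for a communication group of size 2.
>>> rank_size = 2
>>> send_count_matrix = [0, 3, 3, 0]
>>> for i in range(rank_size):
...     for j in range(rank_size):
...         print(f"rank {i} sends {send_count_matrix[i * rank_size + j]} unit(s) to rank {j}")
rank 0 sends 0 unit(s) to rank 0
rank 0 sends 3 unit(s) to rank 1
rank 1 sends 3 unit(s) to rank 0
rank 1 sends 0 unit(s) to rank 1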

Outputs:

Tensor. The flattened, concatenated tensor gathered from the remote ranks. If the gathered result is empty, a Tensor with shape \(()\) is returned, and its value has no actual meaning.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend/GPU/CPU devices, it is recommended to use the msrun startup method without any third-party or configuration file dependencies.

See the msrun startup documentation for more details.

This example should be run with 2 devices.

>>> from mindspore.ops import AlltoAllVC
>>> import mindspore.nn as nn
>>> from mindspore.communication import init, get_rank
>>> from mindspore import Tensor
>>>
>>> init()
>>> rank = get_rank()
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.all_to_all_v_c = AlltoAllVC()
...
...     def construct(self, x, send_count_matrix):
...         return self.all_to_all_v_c(x, send_count_matrix)
>>> send_count_matrix = Tensor([[0, 3], [3, 0]])
>>> send_tensor = Tensor([0, 1, 2.]) * rank
>>> net = Net()
>>> output = net(send_tensor, send_count_matrix)
>>> print(output)
rank 0:
[0. 1. 2.]
rank 1:
[0. 0. 0.]
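
In this example, send_count_matrix = [[0, 3], [3, 0]] means each rank sends all 3 of its elements to the other rank: rank 0 receives rank 1's tensor [0., 1., 2.] * 1, and rank 1 receives rank 0's tensor [0., 1., 2.] * 0.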