mindspore.mint.distributed.all_gather_into_tensor_uneven

mindspore.mint.distributed.all_gather_into_tensor_uneven(output, input, output_split_sizes=None, group=None, async_op=False)[source]

Gathers and concatenates tensors across devices with uneven first dimensions.

Note

  • Input tensors must have identical shapes except for the first dimension.

  • Output tensor's first dimension must equal the sum of the input tensors' first dimensions across all devices in the group.
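
As a quick illustration of the shape rule (the sizes below are made up for illustration), with three ranks contributing first dimensions 4, 2, and 3 and identical trailing dimensions, the output first dimension must be 4 + 2 + 3 = 9:

>>> input_first_dims = [4, 2, 3]                         # first dimension on each of 3 ranks
>>> trailing_shape = (8, 16)                             # identical on every rank
>>> output_shape = (sum(input_first_dims),) + trailing_shape
>>> print(output_shape)
(9, 8, 16)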

Parameters
  • output (Tensor) – Concatenated output tensor with shape \((\sum_{i=0}^{N-1} x_{i1}, x_2, ..., x_R)\), where N is the number of devices in the group.

  • input (Tensor) – Local input tensor with shape \((x_{k1}, x_2, ..., x_R)\), where k is the rank of the current device.

  • output_split_sizes (list[int], optional) – The first-dimension size contributed by each device. When provided, it must match the actual input first dimensions. If None, equal split sizes across devices are assumed (see the sketch after this parameter list for one way to collect uneven sizes at runtime). Default: None.

  • group (str, optional) – The communication group to work on. If None, the default group "hccl_world_group" is used on Ascend. Default: None.

  • async_op (bool, optional) – Whether this operator should be an async operator. Default: False.
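
When the per-rank first dimensions are not known ahead of time, one way to build output_split_sizes is to exchange the local sizes first. The sketch below is illustrative only and not part of this API: the helper name gather_split_sizes is hypothetical, and it assumes all_gather_into_tensor and get_world_size from mindspore.mint.distributed, with int32 tensors supported by the backend.

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.mint.distributed import all_gather_into_tensor, get_world_size
>>>
>>> def gather_split_sizes(local_first_dim):
...     # Hypothetical helper: collect every rank's first-dimension size into a Python list.
...     world_size = get_world_size()
...     sizes_out = Tensor(np.zeros([world_size], dtype=np.int32))
...     # Each rank contributes a length-1 tensor holding its own first dimension.
...     all_gather_into_tensor(sizes_out, Tensor(np.array([local_first_dim], dtype=np.int32)))
...     return [int(s) for s in sizes_out.asnumpy()]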

Returns

CommHandle, an async work handle, if async_op is True; None if async_op is False.

Raises
  • ValueError – If the shape of input does not match the constraints of output_split_sizes.

  • RuntimeError – If the device target or backend is invalid, or distributed initialization fails.

Supported Platforms:

Ascend

Examples

Note

Before running the following examples, you need to configure the communication environment variables.

For Ascend devices, it is recommended to use the msrun startup method, which has no third-party or configuration-file dependencies. Please see msrun startup for more details.

This example should be run with 2 devices.

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import mint
>>> from mindspore.mint.distributed import init_process_group, get_rank
>>> from mindspore.mint.distributed import all_gather_into_tensor_uneven
>>> from mindspore import Tensor
>>>
>>> ms.set_device(device_target="Ascend")
>>> init_process_group()
>>> if get_rank() == 0:
...     input_tensor = Tensor(np.ones([3, 4]).astype(np.float32))
... else:
...     input_tensor = Tensor(np.ones([2, 4]).astype(np.float32))
>>> out_tensor = Tensor(np.zeros([5, 4]).astype(np.float32))
>>> output_split_sizes = [3, 2]
>>> output = all_gather_into_tensor_uneven(out_tensor, input_tensor, output_split_sizes)
>>> print(out_tensor)
[[1. 1. 1. 1.]
 [1. 1. 1. 1.]
 [1. 1. 1. 1.]
 [1. 1. 1. 1.]
 [1. 1. 1. 1.]]
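
A variant of the example above with async_op=True. This is a hedged sketch: it assumes the returned CommHandle exposes a wait() method; the tensors reuse the definitions from the example above.

>>> handle = all_gather_into_tensor_uneven(out_tensor, input_tensor, output_split_sizes, async_op=True)
>>> if handle is not None:
...     handle.wait()   # block until the gathered result in out_tensor is valid
>>> print(out_tensor.shape)
(5, 4)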