mindspore.ops.ParallelConcat

class mindspore.ops.ParallelConcat[source]

Concatenates input tensors along the first dimension.

The difference between Concat and ParallelConcat is that Concat requires that all of the inputs be computed before the operation begins, but it does not require that the input shapes be known during graph construction. ParallelConcat copies pieces of the input into the output as they become available; in some situations this can provide a performance benefit.
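
When every input has size 1 in the first dimension, the result matches an ordinary concatenation along axis 0. A minimal sketch of this equivalence, assuming the standard mindspore.ops.Concat primitive and NumPy for the comparison:

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x1 = Tensor(np.array([[1, 2]]).astype(np.float32))
>>> x2 = Tensor(np.array([[3, 4]]).astype(np.float32))
>>> parallel_out = ops.ParallelConcat()((x1, x2))
>>> concat_out = ops.Concat(axis=0)((x1, x2))
>>> print(np.array_equal(parallel_out.asnumpy(), concat_out.asnumpy()))
True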

Note

The input tensors are all required to have size 1 in the first dimension.

Inputs:
  • values (tuple, list) - A tuple or a list of input tensors. The data type and shape of these tensors must be the same, and their rank must not be less than 1. On CPU the supported data type is Number; Ascend supports the same types except float64, complex64 and complex128.

Outputs:

Tensor, with the same data type as values.

Raises
  • TypeError – If the type of any input is not Tensor.

  • TypeError – If the data types of these tensors are not the same.

  • ValueError – If any tensor.shape[0] is not 1.

  • ValueError – If the rank of any tensor in values is less than 1.

  • ValueError – If the shapes of these tensors are not the same.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> data1 = Tensor(np.array([[0, 1]]).astype(np.int32))
>>> data2 = Tensor(np.array([[2, 1]]).astype(np.int32))
>>> op = ops.ParallelConcat()
>>> output = op((data1, data2))
>>> print(output)
[[0 1]
 [2 1]]
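
Because every input is required to have size 1 in the first dimension, a tensor without that leading axis can be reshaped first. A minimal sketch, assuming the standard mindspore.ops.ExpandDims primitive:

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> expand_dims = ops.ExpandDims()
>>> a = Tensor(np.array([4, 5]).astype(np.int32))  # shape (2,)
>>> b = Tensor(np.array([6, 7]).astype(np.int32))  # shape (2,)
>>> a = expand_dims(a, 0)  # shape (1, 2)
>>> b = expand_dims(b, 0)  # shape (1, 2)
>>> output = ops.ParallelConcat()((a, b))
>>> print(output)
[[4 5]
 [6 7]]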