mindspore.ops.pixel_unshuffle

mindspore.ops.pixel_unshuffle(input, downscale_factor)[source]

Applies the PixelUnshuffle operation to input, which is the inverse of PixelShuffle. For more details, refer to Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network.

Typically, the input is of shape \((*, C, H \times r, W \times r)\), and the output is of shape \((*, C \times r^2, H, W)\), where \(r\) is the downscale factor and \(*\) is zero or more batch dimensions.
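Conceptually, the rearrangement can be sketched with a plain NumPy reshape and transpose (an illustrative sketch only; the channel ordering is assumed to match the operator):

>>> import numpy as np
>>> N, C, H, W, r = 1, 1, 4, 4, 2
>>> x = np.arange(N * C * H * r * W * r).reshape(N, C, H * r, W * r)
>>> y = x.reshape(N, C, H, r, W, r).transpose(0, 1, 3, 5, 2, 4).reshape(N, C * r * r, H, W)
>>> print(y.shape)
(1, 4, 4, 4)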

Parameters
  • input (Tensor) – Tensor of shape \((*, C, H \times r, W \times r)\). The dimension of input must be greater than 2, and the lengths of the last two dimensions must both be divisible by downscale_factor.

  • downscale_factor (int) – the factor by which to unshuffle the input Tensor; must be a positive integer. downscale_factor is the \(r\) mentioned above.

Returns

  • output (Tensor) - Tensor of shape \((*, C \times r^2, H, W)\).

Raises
  • ValueError – If downscale_factor is not a positive integer.

  • ValueError – If the length of the second-to-last dimension or the last dimension is not divisible by downscale_factor, as illustrated below.

  • ValueError – If the dimension of input is less than 3.

  • TypeError – If input is not a Tensor.
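
For instance, a spatial size that downscale_factor does not divide is expected to raise ValueError (a minimal sketch; the exact error message may vary):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> bad = Tensor(np.zeros((1, 1, 7, 8)), mindspore.float32)
>>> try:
...     ops.pixel_unshuffle(bad, 2)
... except ValueError:
...     print("ValueError raised")
ValueError raised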

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> input_x = np.arange(8 * 8).reshape((1, 1, 8, 8))
>>> input_x = Tensor(input_x, mindspore.int32)
>>> output = ops.pixel_unshuffle(input_x, 2)
>>> print(output.shape)
(1, 4, 4, 4)
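
Because pixel_unshuffle inverts PixelShuffle, applying mindspore.ops.pixel_shuffle with the same factor should restore the original tensor. A minimal round-trip check continuing the example above (assuming ops.pixel_shuffle is available on the target backend):

>>> restored = ops.pixel_shuffle(output, 2)
>>> print(np.array_equal(restored.asnumpy(), input_x.asnumpy()))
True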