Function Differences with torch.nn.functional.fold

torch.nn.functional.fold

torch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)

For more information, see torch.nn.functional.fold.

mindspore.ops.fold

mindspore.ops.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)

For more information, see mindspore.ops.fold.

Differences

PyTorch: Combines an array of sliding local blocks into a large containing tensor.

MindSpore: MindSpore API implements basically the same function as PyTorch, but the expected shape of input and the type of output_size differ.

| Categories | Subcategories | PyTorch | MindSpore | Difference |
| ---------- | ------------- | ------- | --------- | ---------- |
| Parameters | Parameter 1 | input | input | PyTorch: shape is $(N, C \times \prod(\text{kernel\_size}), L)$; MindSpore: shape is $(N, C, \prod(\text{kernel\_size}), L)$. |
|            | Parameter 2 | output_size | output_size | PyTorch: int or tuple; MindSpore: 1D tensor with 2 elements of data type int. |
|            | Parameter 3 | kernel_size | kernel_size | - |
|            | Parameter 4 | dilation | dilation | - |
|            | Parameter 5 | padding | padding | - |
|            | Parameter 6 | stride | stride | - |
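
Because MindSpore's 4D input layout is the PyTorch 3D layout with the channel/kernel dimension split out, a PyTorch-style input can typically be adapted with a reshape, and output_size by wrapping it in an int32 Tensor. The sketch below illustrates this on shared data; the reshape and the np.allclose check are illustrative assumptions based on the shapes in the table above, not part of either API.

# Illustrative sketch: feeding the same data to both APIs
import numpy as np
import torch
import mindspore

x = np.random.randn(1, 3 * 2 * 2, 12).astype(np.float32)  # PyTorch layout: (N, C*prod(kernel_size), L)

# PyTorch: 3D input, output_size as a tuple
pt_out = torch.nn.functional.fold(torch.tensor(x), output_size=(4, 5), kernel_size=(2, 2))

# MindSpore: split the block dimension into (C, prod(kernel_size)) and
# pass output_size as a 1D int32 Tensor
ms_in = mindspore.Tensor(x.reshape(1, 3, 2 * 2, 12))
ms_out = mindspore.ops.fold(ms_in, mindspore.Tensor((4, 5), mindspore.int32), kernel_size=(2, 2))

# Expected to print True if the kernel-position ordering of the two layouts matches
print(np.allclose(pt_out.numpy(), ms_out.asnumpy()))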

Code Example 1

The two APIs achieve the same function, but the input shape and the type of output_size differ, as described in the table above.

# PyTorch
import torch
import numpy as np
x = np.random.randn(1, 3 * 2 * 2, 12)
input = torch.tensor(x, dtype=torch.float32)
output = torch.nn.functional.fold(input, output_size=(4, 5), kernel_size=(2, 2))
print(output.shape)
# torch.Size([1, 3, 4, 5])

# MindSpore
import mindspore
import numpy as np
x = np.random.randn(1, 3, 4, 12)
input = mindspore.Tensor(x, mindspore.float32)
output_size = mindspore.Tensor((4, 5), mindspore.int32)
output = mindspore.ops.fold(input, output_size, kernel_size=(2, 2))
print(output.shape)
# (1, 3, 4, 5)