mindspore.mint.nn.functional.conv1d

mindspore.mint.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor

Applies a 1D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, L_{in})\), where \(N\) is the batch size, \(C_{in}\) is the number of channels, and \(L_{in}\) is the sequence length.

The output is calculated based on formula:

\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]

where \(bias\) is the output channel bias, \(ccor\) is the cross-correlation, \(weight\) is the convolution kernel value and \(X\) represents the input feature map.

  • \(i\) corresponds to the batch number, the range is \([0, N-1]\), where \(N\) is the batch size of the input.

  • \(j\) corresponds to the output channel, the range is \([0, C_{out}-1]\), where \(C_{out}\) is the number of output channels, which is also equal to the number of kernels.

  • \(k\) corresponds to the input channel, the range is \([0, C_{in}-1]\), where \(C_{in}\) is the number of input channels, which is also equal to the number of channels in the convolutional kernels.

Therefore, in the above formula, \({bias}(C_{\text{out}_j})\) represents the bias of the \(j\)-th output channel, \({weight}(C_{\text{out}_j}, k)\) represents the slice of the \(j\)-th convolutional kernel in the \(k\)-th channel, and \({X}(N_i, k)\) represents the slice of the \(k\)-th input channel in the \(i\)-th batch of the input feature map.

The shape of the convolutional kernel is given by \((\text{kernel_size})\), where \(\text{kernel_size}\) is the length of the kernel. Taking the input and output channels as well as the groups parameter into account, the complete kernel shape is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\), where groups is the number of groups into which the input channels are split when performing a grouped convolution.

For more details about convolution layers, please refer to Gradient Based Learning Applied to Document Recognition.
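The cross-correlation formula above can be written out directly. Below is a minimal pure-Python sketch of the computation, assuming stride=1, padding=0, dilation=1, and groups=1; the helper name naive_conv1d is illustrative and not part of the API:

```python
def naive_conv1d(x, weight, bias=None):
    """Naive cross-correlation per the formula above (illustrative only).

    x: nested lists of shape [N][C_in][L_in]
    weight: nested lists of shape [C_out][C_in][kernel_size]
    bias: optional list of length C_out
    Assumes stride=1, padding=0, dilation=1, groups=1.
    """
    n, c_in, l_in = len(x), len(x[0]), len(x[0][0])
    c_out, k = len(weight), len(weight[0][0])
    l_out = l_in - k + 1
    out = [[[0.0] * l_out for _ in range(c_out)] for _ in range(n)]
    for i in range(n):                      # batch index i
        for j in range(c_out):              # output channel j
            b = bias[j] if bias is not None else 0.0
            for t in range(l_out):          # output position
                s = b
                for c in range(c_in):       # input channel k in the formula
                    for u in range(k):      # position inside the kernel
                        s += weight[j][c][u] * x[i][c][t + u]
                out[i][j][t] = s
    return out
```

For example, correlating the sequence [1, 2, 3, 4] with the kernel [1, 0, -1] gives an output of length 2, matching \(L_{out} = L_{in} - \text{kernel_size} + 1\).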

Parameters
  • input (Tensor) – Tensor of shape \((N, C_{in}, L_{in})\) or \((C_{in}, L_{in})\).

  • weight (Tensor) – Tensor of shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\); the size of the kernel is \((\text{kernel_size})\).

  • bias (Tensor, optional) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros are used. Default: None.

  • stride (Union[int, tuple[int], list[int]], optional) – The movement stride of the 1D convolution kernel. The data type is an integer or a tuple/list of one integer. Default: 1.

  • padding (Union[int, tuple[int], list[int], str], optional) –

    The amount of padding applied to the input. The data type is an integer, a tuple/list of one integer, or a string in {"valid", "same"}. The value should be greater than or equal to 0. Default: 0.

    • "same": Pads the input around its edges so that the input and output have the same shape when stride is set to 1. The amount of padding is calculated by the operator internally. If the amount is even, it is uniformly distributed around the input; if it is odd, the excess goes to the right side. If this mode is set, stride must be 1.

    • "valid": No padding is applied to the input, and the output has the maximum possible length. Extra elements that cannot complete a full stride are discarded.

  • dilation (Union[int, tuple[int], list[int]], optional) – Specifies the dilation rate for dilated convolution. It can be a single int or a tuple/list of one integer. For \(dilation=(d)\), the convolutional kernel samples the input with a spacing of \(d-1\) elements in the length direction. Default: 1.

  • groups (int, optional) –

    Splits the filter into groups. Default: 1. The following constraints must be met:

    • \((C_{in} \text{ % } \text{groups} == 0)\)

    • \((C_{out} \text{ % } \text{groups} == 0)\)

    • \((C_{out} >= \text{groups})\)

    • \((\text{weight.shape}[1] == C_{in} / \text{groups})\)
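The groups constraints above can be validated up front. A hedged sketch follows; check_groups is a hypothetical helper, not part of the API:

```python
def check_groups(c_in, c_out, weight_shape, groups):
    """Return True iff the documented groups constraints hold (illustrative).

    weight_shape is (C_out, C_in / groups, kernel_size).
    """
    return (
        c_in % groups == 0                       # C_in % groups == 0
        and c_out % groups == 0                  # C_out % groups == 0
        and c_out >= groups                      # C_out >= groups
        and weight_shape[1] == c_in // groups    # weight.shape[1] == C_in / groups
    )
```

For instance, with 32 input channels, 32 output channels, and groups=2, the weight's second dimension must be 16.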

Returns

Tensor, the value that applied 1D convolution. The shape is \((N, C_{out}, L_{out})\). To see how different pad modes affect the output shape, please refer to mindspore.mint.nn.Conv1d for more details.
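The output length under the different padding modes can be sketched with the standard convolution rule; conv1d_out_len is an illustrative helper, not part of the API:

```python
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    """Output length L_out under the documented padding modes (illustrative)."""
    if padding == "same":
        # "same" keeps the input length and requires stride == 1.
        return l_in
    if padding == "valid":
        padding = 0
    # The effective kernel size grows to dilation * (kernel_size - 1) + 1.
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1
```

With the default stride=1, padding=0, dilation=1 and a kernel of size 3, an input of length 32 yields an output of length 30, matching the Examples below.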

Raises
  • RuntimeError – On Ascend, due to the L1 cache size limits of different NPU chips, an error may be triggered if the input size or kernel size is too large.

  • TypeError – If stride, padding, or dilation is not an int, a tuple, or a list (for padding, a str is also allowed).

  • ValueError – If the arguments and the size of the input feature map do not satisfy the output formula, i.e. the resulting output feature-map size would not be positive.

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If padding is less than 0.

  • ValueError – If padding is "same" and stride is not 1.

  • ValueError – If the parameters do not satisfy the convolution output formula.

  • ValueError – If the kernel size is larger than the input sequence length (after padding).

  • ValueError – If padding would make the effective convolution region exceed the input size.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> x = Tensor(np.ones([10, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3]), mindspore.float32)
>>> output = mint.nn.functional.conv1d(x, weight)
>>> print(output.shape)
(10, 32, 30)