mindflow.cell.SNO
- class mindflow.cell.SNO(in_channels, out_channels, hidden_channels=64, num_sno_layers=3, data_format='channels_first', transforms=None, kernel_size=5, num_usno_layers=0, num_unet_strides=1, activation='gelu', compute_dtype=mstype.float32)[source]
The Spectral Neural Operator (SNO) base class, which contains a lifting layer (encoder), multiple spectral transform layers (linear transforms in the spectral space), and a projection layer (decoder). This is an FNO-like architecture that uses polynomial transforms (Chebyshev, Legendre, etc.) instead of the Fourier transform. The details can be found in Spectral Neural Operators.
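The NumPy sketch below illustrates, conceptually, what a single spectral transform layer computes: an analysis transform into polynomial coefficient space, a learned linear map over the retained modes, a synthesis transform back to nodal values, and a pointwise skip path. The function name and the per-mode weight parameterization are illustrative assumptions; the library's layers instead apply a convolution of size kernel_size to the coefficients.

import numpy as np

def spectral_layer_1d(x, transform, inv_transform, spectral_weight, skip_weight):
    # x: (batch, channels, resolution)
    # transform: (n_modes, resolution), inv_transform: (resolution, n_modes)
    coeffs = np.einsum('mr,bcr->bcm', transform, x)             # analysis: keep n_modes coefficients
    mixed = np.einsum('oim,bim->bom', spectral_weight, coeffs)  # linear map in coefficient space
    out = np.einsum('rm,bom->bor', inv_transform, mixed)        # synthesis back to nodal values
    skip = np.einsum('oi,bir->bor', skip_weight, x)             # 1x1 convolution-like skip path
    return out + skip                                           # activation is applied afterwards

# Shape check with illustrative sizes:
b, c, r, m = 4, 8, 100, 12
layer_out = spectral_layer_1d(np.random.rand(b, c, r),
                              np.random.rand(m, r), np.random.rand(r, m),
                              np.random.rand(c, c, m), np.random.rand(c, c))
# layer_out.shape == (4, 8, 100)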
- Parameters
in_channels (int) – The number of channels in the input space.
out_channels (int) – The number of channels in the output space.
hidden_channels (int) – The number of channels of the SNO layers' input and output. Default: 64.
num_sno_layers (int) – The number of spectral layers. Default: 3.
data_format (str) – The channel ordering of the input data. Default: 'channels_first'.
transforms (list(list(mindspore.Tensor))) – The list of direct and inverse polynomial transforms along the x, y and z axes, respectively, with the structure [[transform_x, inv_transform_x], [transform_y, inv_transform_y], [transform_z, inv_transform_z]]. The shape of a transformation matrix should be (n_modes, resolution), where n_modes is the number of polynomial transform modes and resolution is the spatial resolution of the input in the corresponding direction. The shape of an inverse transformation matrix is (resolution, n_modes). One way to construct such a pair is sketched after this parameter list. Default: None.
kernel_size (int) – Specifies the height and width of the convolution kernel in the SNO layers. Default: 5.
num_usno_layers (int) – The number of spectral layers with UNet skip blocks. Default: 0.
num_unet_strides (int) – The number of convolutional downsampling blocks in the UNet skip blocks. Default: 1.
activation (Union[str, class]) – The activation function, specified either as a str or a class. Default: 'gelu'.
compute_dtype (dtype.Number) – The computation type. Should be mstype.float32 or mstype.float16. mstype.float32 is recommended for the GPU backend, mstype.float16 is recommended for the Ascend backend. Default: mstype.float32.
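The transforms argument expects one (transform, inverse transform) pair per spatial axis. The following is a minimal sketch of one way to build such a pair from a Chebyshev pseudo-Vandermonde matrix and its pseudo-inverse; the helper name make_chebyshev_transforms and the choice of Gauss-Lobatto nodes are illustrative assumptions, not part of the MindFlow API.

import numpy as np
from numpy.polynomial import chebyshev
from mindspore import Tensor
import mindspore.common.dtype as mstype

def make_chebyshev_transforms(resolution, n_modes):
    # Chebyshev-Gauss-Lobatto nodes on [-1, 1] (illustrative choice of grid).
    nodes = np.cos(np.pi * np.arange(resolution) / (resolution - 1))
    # Pseudo-Vandermonde matrix V[i, j] = T_j(nodes[i]), shape (resolution, n_modes):
    # maps the first n_modes Chebyshev coefficients to nodal values (inverse transform).
    vander = chebyshev.chebvander(nodes, n_modes - 1)
    # Least-squares analysis operator, shape (n_modes, resolution):
    # maps nodal values to the first n_modes Chebyshev coefficients (direct transform).
    analysis = np.linalg.pinv(vander)
    return [Tensor(analysis, mstype.float32), Tensor(vander, mstype.float32)]

# For a 2D problem with the same grid on both axes:
# transforms = [make_chebyshev_transforms(100, 12)] * 2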
- Inputs:
x (Tensor) - Tensor with shape \((batch\_size, in\_channels, resolution)\).
- Outputs:
Tensor with shape \((batch\_size, out\_channels, resolution)\).
- Raises
TypeError – If in_channels is not an int.
TypeError – If out_channels is not an int.
TypeError – If hidden_channels is not an int.
TypeError – If num_sno_layers is not an int.
TypeError – If transforms is not a list.
ValueError – If len(transforms) is not in (1, 2, 3).
TypeError – If num_usno_layers is not an int.
- Supported Platforms:
Ascend
GPU
Examples
>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.common.dtype as mstype
>>> from mindflow.cell.neural_operators.sno import SNO
>>> resolution, modes = 100, 12
>>> matr = Tensor(np.random.rand(modes, resolution), mstype.float32)
>>> inv_matr = Tensor(np.random.rand(resolution, modes), mstype.float32)
>>> net = SNO(in_channels=2, out_channels=5, transforms=[[matr, inv_matr]] * 2)
>>> x = Tensor(np.random.rand(19, 2, resolution, resolution), mstype.float32)
>>> y = net(x)
>>> print(x.shape, y.shape)
(19, 2, 100, 100) (19, 5, 100, 100)
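A variant of the example above, reusing matr, inv_matr and x, shows how the UNet skip-block arguments (num_usno_layers, num_unet_strides) are passed. This is a sketch under the assumption that one stride-2 downsampling step is compatible with the 100-point grid; the shape in the comment is the expected result, not verified output.
>>> unet_net = SNO(in_channels=2, out_channels=5, transforms=[[matr, inv_matr]] * 2,
...                num_sno_layers=3, num_usno_layers=1, num_unet_strides=1)
>>> y_unet = unet_net(x)
>>> # expected: y_unet.shape == (19, 5, 100, 100)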