mindflow.cell

class mindflow.cell.FCSequential(in_channels, out_channels, layers, neurons, residual=True, act='sin', weight_init='normal', has_bias=True, bias_init='default', weight_norm=False)[source]

A sequential container of dense layers; the layers are added to the container sequentially.

Parameters
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • layers (int) – The total number of layers, including the input, hidden and output layers (for example, layers=5 gives one input layer, three hidden layers and one output layer).

  • neurons (int) – The number of neurons of hidden layers.

  • residual (bool) – Whether the hidden layers use residual blocks rather than plain fully connected layers. Default: True.

  • act (Union[str, Cell, Primitive, None]) – Activation function applied to the output of the fully connected layer, e.g. 'ReLU'. Default: 'sin'.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable weight parameter. The dtype is the same as the input x. For str values, refer to the function initializer. Default: 'normal'.

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: True.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable bias parameter. The dtype is the same as the input x. For str values, refer to the function initializer. Default: 'default'.

  • weight_norm (bool) – Whether to compute the sum of squares of the weights. Default: False.

Inputs:
  • input (Tensor) - Tensor of shape \((*, in\_channels)\).

Outputs:

Tensor of shape \((*, out\_channels)\).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.cell import FCSequential
>>> from mindspore import Tensor
>>> inputs = np.ones((16, 3))
>>> inputs = Tensor(inputs.astype(np.float32))
>>> net = FCSequential(3, 3, 5, 32, weight_init="ones", bias_init="zeros")
>>> output = net(inputs).asnumpy()
>>> print(output.shape)
(16, 3)
class mindflow.cell.FNO1D(in_channels, out_channels, resolution, modes, channels=20, depths=4, mlp_ratio=4, compute_dtype=mstype.float32)[source]

The 1-dimensional Fourier Neural Operator (FNO1D) contains a lifting layer, multiple Fourier layers and a decoder layer. The details can be found in Fourier neural operator for parametric partial differential equations.
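
The Fourier layers mix the channels on a truncated set of low-frequency modes in the spectral domain. The snippet below is a minimal NumPy sketch of one 1-D spectral layer, following the standard formulation in the FNO paper; the function name and weight layout are illustrative assumptions, not the mindflow implementation.

import numpy as np

def spectral_layer_1d(x, weights, modes):
    # x: (batch, resolution, channels); weights: complex, (modes, channels, channels)
    x_ft = np.fft.rfft(x, axis=1)                      # to the frequency domain
    out_ft = np.zeros_like(x_ft)
    # keep only the lowest `modes` frequencies and mix the channels there
    out_ft[:, :modes, :] = np.einsum("bmc,mco->bmo", x_ft[:, :modes, :], weights)
    return np.fft.irfft(out_ft, n=x.shape[1], axis=1)  # back to physical space

x = np.random.rand(2, 64, 20)
w = np.random.rand(12, 20, 20) + 1j * np.random.rand(12, 20, 20)
print(spectral_layer_1d(x, w, modes=12).shape)         # (2, 64, 20)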

Parameters
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • resolution (int) – The spatial resolution of the input.

  • modes (int) – The number of low-frequency components to keep.

  • channels (int) – The number of channels after dimension lifting of the input. Default: 20.

  • depths (int) – The number of FNO layers. Default: 4.

  • mlp_ratio (int) – The channel lifting ratio of the decoder layer. Default: 4.

  • compute_dtype (dtype.Number) – The computation data type of the dense layers. Default: mstype.float32. Should be mstype.float32 or mstype.float16. mstype.float32 is recommended for the GPU backend and mstype.float16 for the Ascend backend.

Inputs:
  • x (Tensor) - Tensor of shape \((batch\_size, resolution, in\_channels)\).

Outputs:

Tensor, the output of this FNO network.

  • output (Tensor) - Tensor of shape \((batch\_size, resolution, out\_channels)\).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore.common.initializer import initializer, Normal
>>> from mindflow.cell.neural_operators import FNO1D
>>> B, W, C = 32, 1024, 1
>>> input_ = initializer(Normal(), [B, W, C])
>>> net = FNO1D(in_channels=1, out_channels=1, resolution=64, modes=12)
>>> output = net(input_)
>>> print(output.shape)
(32, 1024, 1)
class mindflow.cell.FNO2D(in_channels, out_channels, resolution, modes, channels=20, depths=4, mlp_ratio=4, compute_dtype=mstype.float32)[source]

The 2-dimensional Fourier Neural Operator (FNO2D) contains a lifting layer, multiple Fourier layers and a decoder layer. The details can be found in Fourier neural operator for parametric partial differential equations.
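
The 2-D operator applies the same spectral mixing along both spatial axes. The sketch below is the 2-D analogue of the one given for FNO1D, again an assumption-level simplification (it keeps only a single low-frequency corner) rather than the mindflow implementation.

import numpy as np

def spectral_layer_2d(x, weights, modes):
    # x: (batch, res, res, channels); weights: complex, (modes, modes, channels, channels)
    x_ft = np.fft.rfft2(x, axes=(1, 2))
    out_ft = np.zeros_like(x_ft)
    out_ft[:, :modes, :modes, :] = np.einsum(
        "bxyc,xyco->bxyo", x_ft[:, :modes, :modes, :], weights)
    return np.fft.irfft2(out_ft, s=x.shape[1:3], axes=(1, 2))

x = np.random.rand(2, 64, 64, 20)
w = np.random.rand(12, 12, 20, 20) + 1j * np.random.rand(12, 12, 20, 20)
print(spectral_layer_2d(x, w, modes=12).shape)         # (2, 64, 64, 20)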

Parameters
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • resolution (int) – The spatial resolution of the input.

  • modes (int) – The number of low-frequency components to keep.

  • channels (int) – The number of channels after dimension lifting of the input. Default: 20.

  • depths (int) – The number of FNO layers. Default: 4.

  • mlp_ratio (int) – The channel lifting ratio of the decoder layer. Default: 4.

  • compute_dtype (dtype.Number) – The computation data type of the dense layers. Default: mstype.float32. Should be mstype.float32 or mstype.float16. mstype.float32 is recommended for the GPU backend and mstype.float16 for the Ascend backend.

Inputs:
  • x (Tensor) - Tensor of shape \((batch\_size, resolution, resolution, in\_channels)\).

Outputs:

Tensor, the output of this FNO network.

  • output (Tensor) - Tensor of shape \((batch\_size, resolution, resolution, out\_channels)\).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore.common.initializer import initializer, Normal
>>> from mindflow.cell.neural_operators import FNO2D
>>> B, H, W, C = 32, 64, 64, 1
>>> input = initializer(Normal(), [B, H, W, C])
>>> net = FNO2D(in_channels=1, out_channels=1, resolution=64, modes=12)
>>> output = net(input)
>>> print(output.shape)
(32, 64, 64, 1)
class mindflow.cell.InputScaleNet(input_scale, input_center=None)[source]

Scale the inputs to a specified region.
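
Judging from the assertions in the example at the end of this entry, each channel appears to be mapped as (input - input_center) * input_scale. The NumPy sketch below reproduces that assumed behaviour for reference; it is not the mindflow source.

import numpy as np

def scale_inputs(inputs, input_scale, input_center=None):
    # assumed behaviour, inferred from the example assertions below
    center = 0.0 if input_center is None else np.asarray(input_center, np.float32)
    return (inputs - center) * np.asarray(input_scale, np.float32)

inputs = np.random.uniform(size=(16, 3)).astype(np.float32) + 3.0
print(scale_inputs(inputs, [1.0, 2.0, 4.0], [3.5, 3.5, 3.5]).shape)  # (16, 3)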

Parameters
  • input_scale (list) – The scale factor of input x/y/t.

  • input_center (Union[list, None]) – Center position of coordinate translation. Default: None.

Inputs:
  • input (Tensor) - Tensor of shape \((*, channels)\).

Outputs:

Tensor of shape \((*, channels)\).

Raises
  • TypeError – If input_scale is not a list.

  • TypeError – If input_center is not a list or None.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.cell import InputScaleNet
>>> from mindspore import Tensor
>>> inputs = np.random.uniform(size=(16, 3)) + 3.0
>>> inputs = Tensor(inputs.astype(np.float32))
>>> input_scale = [1.0, 2.0, 4.0]
>>> input_center = [3.5, 3.5, 3.5]
>>> net = InputScaleNet(input_scale, input_center)
>>> output = net(inputs).asnumpy()
>>> assert np.all(output[:, 0] <= 0.5) and np.all(output[:, 0] >= -0.5)
>>> assert np.all(output[:, 1] <= 1.0) and np.all(output[:, 1] >= -1.0)
>>> assert np.all(output[:, 2] <= 2.0) and np.all(output[:, 2] >= -2.0)
class mindflow.cell.LinearBlock(in_channels, out_channels, weight_init='normal', bias_init='zeros', has_bias=True, activation=None)[source]

The LinearBlock applies a linear transformation to the incoming data.
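
Following the usual dense-layer convention (stated as an assumption rather than quoted from the mindflow source), the block computes \(output = activation(X \cdot W^{T} + b)\), with the activation omitted when activation is None.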

Parameters
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable weight parameter. The dtype is the same as the input. For str values, refer to the function initializer. Default: "normal".

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable bias parameter. The dtype is the same as the input. For str values, refer to the function initializer. Default: "zeros".

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: True.

  • activation (Union[str, Cell, Primitive, None]) – Activation function applied to the output of the fully connected layer. Default: None.

Inputs:
  • input (Tensor) - Tensor of shape \((*, in\_channels)\).

Outputs:

Tensor of shape \((*, out\_channels)\).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.cell import LinearBlock
>>> from mindspore import Tensor
>>> input = Tensor(np.array([[180, 234, 154], [244, 48, 247]], np.float32))
>>> net = LinearBlock(3, 4)
>>> output = net(input)
>>> print(output.shape)
(2, 4)
class mindflow.cell.MultiScaleFCCell(in_channels, out_channels, layers, neurons, residual=True, act='sin', weight_init='normal', weight_norm=False, has_bias=True, bias_init='default', num_scales=4, amp_factor=1.0, scale_factor=2.0, input_scale=None, input_center=None, latent_vector=None)[source]

The multi-scale network.
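
The parameters below suggest the usual MscaleDNN-style construction: each of the num_scales subnets processes the input multiplied by a scale coefficient built from amp_factor and scale_factor, and the subnet outputs are summed. The sketch below illustrates that assumed scheme; it is not the mindflow implementation.

import numpy as np

def multiscale_forward(subnets, x, amp_factor=1.0, scale_factor=2.0):
    # assumed scheme: subnet i sees x scaled by amp_factor * scale_factor**i,
    # and the per-subnet outputs are summed
    out = 0.0
    for i, subnet in enumerate(subnets):
        out = out + subnet(x * amp_factor * scale_factor**i)
    return out

subnets = [lambda t: t for _ in range(4)]   # stand-ins for the real subnets
print(multiscale_forward(subnets, np.ones((64, 3), np.float32)).shape)  # (64, 3)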

Parameters
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • layers (int) – The total number of layers, including the input, hidden and output layers.

  • neurons (int) – The number of neurons of hidden layers.

  • residual (bool) – Whether the hidden layers use residual blocks rather than plain fully connected layers. Default: True.

  • act (Union[str, Cell, Primitive, None]) – Activation function applied to the output of the fully connected layer, e.g. 'ReLU'. Default: 'sin'.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable weight parameter. The dtype is the same as the input x. For str values, refer to the function initializer. Default: 'normal'.

  • weight_norm (bool) – Whether to compute the sum of squares of the weights. Default: False.

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: True.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable bias parameter. The dtype is the same as the input x. For str values, refer to the function initializer. Default: 'default'.

  • num_scales (int) – The number of subnets in the multi-scale network. Default: 4.

  • amp_factor (Union[int, float]) – The amplification factor of the input. Default: 1.0.

  • scale_factor (Union[int, float]) – The base scale factor. Default: 2.0.

  • input_scale (Union[list, None]) – The scale factor of input x/y/t. If not None, the inputs will be scaled before being fed into the network. Default: None.

  • input_center (Union[list, None]) – Center position of coordinate translation. If not None, the inputs will be translated before being fed into the network. Default: None.

  • latent_vector (Union[Parameter, None]) – Trainable parameter which will be concatenated with the sampled inputs and updated during training. Default: None.

Inputs:
  • input (Tensor) - Tensor of shape \((*, in\_channels)\).

Outputs:

Tensor of shape \((*, out\_channels)\).

Raises
  • TypeError – If num_scales is not an int.

  • TypeError – If amp_factor is neither int nor float.

  • TypeError – If scale_factor is neither int nor float.

  • TypeError – If latent_vector is neither a Parameter nor None.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.cell import MultiScaleFCCell
>>> from mindspore import Tensor, Parameter
>>> inputs = np.ones((64,3)) + 3.0
>>> inputs = Tensor(inputs.astype(np.float32))
>>> num_scenarios = 4
>>> latent_size = 16
>>> latent_init = np.ones((num_scenarios, latent_size)).astype(np.float32)
>>> latent_vector = Parameter(Tensor(latent_init), requires_grad=True)
>>> input_scale = [1.0, 2.0, 4.0]
>>> input_center = [3.5, 3.5, 3.5]
>>> net = MultiScaleFCCell(3, 3, 5, 32,
...                        weight_init="ones", bias_init="zeros",
...                        input_scale=input_scale, input_center=input_center, latent_vector=latent_vector)
>>> output = net(inputs).asnumpy()
>>> print(output.shape)
(64, 3)
class mindflow.cell.PDENet(height, width, channels, kernel_size, max_order, step, dx=0.01, dy=0.01, dt=0.01, periodic=True, enable_moment=True, if_fronzen=False)[source]

The PDE-Net model.

PDE-Net is a feed-forward deep network designed to fulfill two objectives at the same time: to accurately predict the dynamics of complex systems and to uncover the underlying hidden PDE models. The basic idea is to learn differential operators by learning convolution kernels (filters), and to apply neural networks or other machine learning methods to approximate the unknown nonlinear responses. A special feature of PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network. These constraints are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originating from wavelet theory).

For more details, please refer to the paper PDE-Net: Learning PDEs from Data.
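
To make the moment idea concrete: the i-th moment of a 1-D filter q with centered stencil offsets k is \(m_i = \frac{1}{i!}\sum_k k^i q[k]\), and a filter acts as an i-th derivative approximation when its lower-order moments vanish and \(m_i = 1\). The NumPy sketch below only illustrates this relation for the classic central-difference stencil; it is not part of the mindflow API.

import numpy as np
from math import factorial

def moments(q, max_order):
    # centered stencil offsets ..., -1, 0, 1, ...
    k = np.arange(len(q)) - len(q) // 2
    return np.array([np.sum(k**i * q) / factorial(i) for i in range(max_order + 1)])

central_diff = np.array([-0.5, 0.0, 0.5])   # classic d/dx stencil
print(moments(central_diff, 2))             # [0. 1. 0.] -> a first-derivative filter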

Parameters
  • height (int) – The height of the input and output tensors of the PDE-Net.

  • width (int) – The width of the input and output tensors of the PDE-Net.

  • channels (int) – The number of channels of the input and output tensors of the PDE-Net.

  • kernel_size (int) – Specifies the height and width of the 2D convolution kernel.

  • max_order (int) – The max order of the PDE models.

  • step (int) – The number of the delta-T blocks used in PDE-Net.

  • dx (float) – The spatial resolution of x dimension. Default: 0.01.

  • dy (float) – The spatial resolution of y dimension. Default: 0.01.

  • dt (float) – The time step of the PDE-Net. Default: 0.01.

  • periodic (bool) – Specifies whether periodic pad is used with convolution kernels. Default: True.

  • enable_moment (bool) – Specifies whether the convolution kernels are constrained by moments. Default: True.

  • if_fronzen (bool) – Specifies whether the moment is frozen. Default: False.

Inputs:
  • input (Tensor) - Tensor of shape \((batch\_size, channels, height, width)\).

Outputs:

Tensor, has the same shape as input with data type of float32.

Raises
  • TypeError – If height, width, channels, kernel_size, max_order or step is not an int.

  • TypeError – If periodic, enable_moment or if_fronzen is not a bool.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> import mindspore.common.dtype as mstype
>>> from mindflow.cell.neural_operators import PDENet
>>> input = Tensor(np.random.rand(1, 2, 16, 16), mstype.float32)
>>> net = PDENet(16, 16, 2, 5, 3, 2)
>>> output = net(input)
>>> print(output.shape)
(1, 2, 16, 16)
class mindflow.cell.ResBlock(in_channels, out_channels, weight_init='normal', bias_init='zeros', has_bias=True, activation=None, weight_norm=False)[source]

The residual block (ResBlock) of dense layers.
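
A residual dense block of this kind typically computes \(output = activation(X \cdot W^{T} + b + X)\), i.e. a dense layer plus an identity skip connection, which is why in_channels must equal out_channels (see Raises below). This is the standard formulation, stated as an assumption rather than quoted from the mindflow source.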

Parameters
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • weight_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable weight parameter. The dtype is the same as the input x. For str values, refer to the function initializer. Default: 'normal'.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the trainable bias parameter. The dtype is the same as the input x. For str values, refer to the function initializer. Default: 'zeros'.

  • has_bias (bool) – Specifies whether the layer uses a bias vector. Default: True.

  • activation (Union[str, Cell, Primitive, None]) – Activation function applied to the output of the dense layer. Default: None.

  • weight_norm (bool) – Whether to compute the sum of squares of the weights. Default: False.

Inputs:
  • input (Tensor) - Tensor of shape \((*, in\_channels)\).

Outputs:

Tensor of shape \((*, out\_channels)\).

Raises
  • ValueError – If in_channels is not equal to out_channels.

  • TypeError – If activation is not a str, Cell or Primitive.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.cell import ResBlock
>>> from mindspore import Tensor
>>> input = Tensor(np.array([[180, 234, 154], [244, 48, 247]], np.float32))
>>> net = ResBlock(3, 3)
>>> output = net(input)
>>> print(output.shape)
(2, 3)
class mindflow.cell.ViT(image_size=(192, 384), in_channels=7, out_channels=3, patch_size=16, encoder_depths=12, encoder_embed_dim=768, encoder_num_heads=12, decoder_depths=8, decoder_embed_dim=512, decoder_num_heads=16, mlp_ratio=4, dropout_rate=1.0, compute_dtype=mstype.float16)[source]

This module is based on the ViT backbone and consists of an encoder, decoding_embedding, decoder and dense layer.

Parameters
  • image_size (tuple[int]) – The image size of input. Default: (192, 384).

  • in_channels (int) – The number of input feature channels. Default: 7.

  • out_channels (int) – The number of output feature channels. Default: 3.

  • patch_size (int) – The patch size of image. Default: 16.

  • encoder_depths (int) – The number of encoder layers. Default: 12.

  • encoder_embed_dim (int) – The embedding dimension of the encoder layers. Default: 768.

  • encoder_num_heads (int) – The number of attention heads in the encoder layers. Default: 12.

  • decoder_depths (int) – The number of decoder layers. Default: 8.

  • decoder_embed_dim (int) – The embedding dimension of the decoder layers. Default: 512.

  • decoder_num_heads (int) – The number of attention heads in the decoder layers. Default: 16.

  • mlp_ratio (int) – The expansion ratio of the MLP layers. Default: 4.

  • dropout_rate (float) – The rate of the dropout layer. Default: 1.0.

  • compute_dtype (dtype) – The data type for encoder, decoding_embedding, decoder and dense layer. Default: mstype.float16.

Inputs:
  • input (Tensor) - Tensor of shape \((batch\_size, feature\_size, image\_height, image\_width)\).

Outputs:
  • output (Tensor) - Tensor of shape \((batch\_size, patchify\_size, embed\_dim)\), where patchify_size = (image_height * image_width) / (patch_size * patch_size).
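
For instance, with the settings of the example below (default image_size=(192, 384), patch_size=16), patchify_size = (192 * 384) / (16 * 16) = 288, which matches the printed output shape (32, 288, 768).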

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import context
>>> from mindspore import dtype as mstype
>>> from mindflow.cell import ViT
>>> input_tensor = Tensor(np.ones((32, 3, 192, 384)), mstype.float32)
>>> print(input_tensor.shape)
(32, 3, 192, 384)
>>> model = ViT(in_channels=3,
...             out_channels=3,
...             encoder_depths=6,
...             encoder_embed_dim=768,
...             encoder_num_heads=12,
...             decoder_depths=6,
...             decoder_embed_dim=512,
...             decoder_num_heads=16,
...             )
>>> output_tensor = model(input_tensor)
>>> print(output_tensor.shape)
(32, 288, 768)
mindflow.cell.get_activation(name)[source]

Gets the activation function.

Parameters

name (Union[str, None]) – The name of the activation function. If name is None, None is returned.

Returns

Function, the activation function.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.cell import get_activation
>>> from mindspore import Tensor
>>> input_x = Tensor(np.array([[1.2, 0.1], [0.2, 3.2]], dtype=np.float32))
>>> sigmoid = get_activation('sigmoid')
>>> output = sigmoid(input_x)
>>> print(output)
[[0.7685248  0.5249792 ]
 [0.54983395 0.96083426]]