mindscience.models.layers.MultiScaleFCSequential

class mindscience.models.layers.MultiScaleFCSequential(in_channels, out_channels, layers, neurons, residual=True, act='sin', weight_init='normal', weight_norm=False, has_bias=True, bias_init='default', num_scales=4, amp_factor=1.0, scale_factor=2.0, input_scale=None, input_center=None, latent_vector=None)[source]

The multi-scale fully connected network.

Parameters
  • in_channels (int) – The number of channels in the input space.

  • out_channels (int) – The number of channels in the output space.

  • layers (int) – The total number of layers, including the input, hidden and output layers.

  • neurons (int) – The number of neurons of hidden layers.

  • residual (bool, optional) – Whether the hidden layers use fully connected residual blocks instead of plain fully connected layers. Default: True.

  • act (Union[str, Cell, Primitive, None], optional) – Activation function applied to the output of each fully connected layer, e.g. 'ReLU'. Default: "sin".

  • weight_init (Union[Tensor, str, Initializer, numbers.Number], optional) – Initializer for the trainable weights. The dtype is the same as the input. For string values, refer to the initializer function. Default: 'normal'.

  • weight_norm (bool, optional) – Whether to compute the sum of squares of the weights. Default: False.

  • has_bias (bool, optional) – Specifies whether the layer uses a bias vector. Default: True.

  • bias_init (Union[Tensor, str, Initializer, numbers.Number], optional) – Initializer for the trainable bias. The dtype is the same as the input. For string values, refer to the initializer function. Default: 'default'.

  • num_scales (int, optional) – The number of subnets in the multi-scale network. Default: 4.

  • amp_factor (Union[int, float], optional) – The amplification factor of input. Default: 1.0.

  • scale_factor (Union[int, float], optional) – The base scale factor. Default: 2.0.

  • input_scale (Union[list, None], optional) – The scale factors of the inputs x/y/t. If not None, the inputs will be scaled before being fed into the network. Default: None.

  • input_center (Union[list, None], optional) – The center position for coordinate translation. If not None, the inputs will be translated before being fed into the network (see the preprocessing sketch after this parameter list). Default: None.

  • latent_vector (Union[Parameter, None], optional) – Trainable parameter that will be concatenated with the sampled inputs and updated during training. Default: None.
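A rough NumPy sketch of the input preprocessing implied by input_scale and input_center, reusing the values from the Examples section. The translate-then-scale order is an assumption made for illustration, not a statement about the layer's internals.

>>> import numpy as np
>>> x = np.ones((64, 3), dtype=np.float32) + 3.0              # raw coordinates, e.g. (x, y, t)
>>> input_center = np.array([3.5, 3.5, 3.5], dtype=np.float32)
>>> input_scale = np.array([1.0, 2.0, 4.0], dtype=np.float32)
>>> x_pre = (x - input_center) * input_scale                   # assumed translate-then-scale order
>>> print(x_pre.shape)
(64, 3)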

Inputs:
  • input (Tensor) - Tensor of shape \((*, in\_channels)\).

Outputs:
  • output (Tensor) - Tensor of shape \((*, out\_channels)\).

Raises
  • TypeError – If num_scales is not an int.

  • TypeError – If amp_factor is neither int nor float.

  • TypeError – If scale_factor is neither int nor float.

  • TypeError – If latent_vector is neither a Parameter nor None.
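As a hedged illustration of the Raises section, assuming the type check on num_scales fires at construction time, passing a float for num_scales should raise a TypeError:

>>> from mindscience.models.layers import MultiScaleFCSequential
>>> try:
...     _ = MultiScaleFCSequential(3, 3, 5, 32, num_scales=4.0)
... except TypeError as err:
...     print(type(err).__name__)
TypeError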

Examples

>>> import numpy as np
>>> from mindscience.models.layers import MultiScaleFCSequential
>>> from mindspore import Tensor, Parameter
>>> inputs = np.ones((64,3)) + 3.0
>>> inputs = Tensor(inputs.astype(np.float32))
>>> num_scenarios = 4
>>> latent_size = 16
>>> latent_init = np.ones((num_scenarios, latent_size)).astype(np.float32)
>>> latent_vector = Parameter(Tensor(latent_init), requires_grad=True)
>>> input_scale = [1.0, 2.0, 4.0]
>>> input_center = [3.5, 3.5, 3.5]
>>> net = MultiScaleFCSequential(3, 3, 5, 32,
...                        weight_init="ones", bias_init="zeros",
...                        input_scale=input_scale, input_center=input_center, latent_vector=latent_vector)
>>> output = net(inputs).asnumpy()
>>> print(output.shape)
(64, 3)
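
For reference, a minimal sketch of the default configuration (no input scaling, no latent vector); the output shape shown here simply follows the documented \((*, out\_channels)\) contract:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindscience.models.layers import MultiScaleFCSequential
>>> x = Tensor(np.random.rand(16, 3).astype(np.float32))
>>> net = MultiScaleFCSequential(in_channels=3, out_channels=1, layers=5, neurons=32)
>>> y = net(x)
>>> print(y.shape)
(16, 1)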