mindchemistry.cell.AutoEncoder

class mindchemistry.cell.AutoEncoder(channels, weight_init='normal', has_bias=True, bias_init='zeros', has_dropout=False, dropout_rate=0.5, has_layernorm=False, layernorm_epsilon=1e-7, has_activation=True, act='relu', out_act=None)

The AutoEncoder network. It applies an encoder to obtain the latent code and a decoder to reconstruct the data.

Parameters
  • channels (list) – The number of channels of each encoder and decoder layer (see the configuration sketch after this parameter list).

  • weight_init (Union[str, float, mindspore.common.initializer, List]) – The initializer for the layer weights. If weight_init is a list, each element corresponds to one layer. Default: 'normal'.

  • has_bias (Union[bool, List]) – Whether the dense layers have a bias. If has_bias is a list, each element corresponds to one dense layer. Default: True.

  • bias_init (Union[str, float, mindspore.common.initializer, List]) – The initializer for the layer biases. If bias_init is a list, each element corresponds to one dense layer. Default: 'zeros'.

  • has_dropout (Union[bool, List]) – Whether each linear block has a dropout layer. If has_dropout is a list, each element corresponds to one layer. Default: False.

  • dropout_rate (float) – The dropout rate for the dropout layers; it must be a float in the range (0, 1]. If dropout_rate is a list, each element corresponds to one dropout layer. Default: 0.5.

  • has_layernorm (Union[bool, List]) – Whether each linear block has a layer normalization layer. If has_layernorm is a list, each element corresponds to one layer. Default: False.

  • layernorm_epsilon (float) – The epsilon hyperparameter of the layer normalization layers. If layernorm_epsilon is a list, each element corresponds to one layer normalization layer. Default: 1e-7.

  • has_activation (Union[bool, List]) – Whether each linear block has an activation layer. If has_activation is a list, each element corresponds to one layer. Default: True.

  • act (Union[str, None, List]) – The activation function in the linear blocks. If act is a list, each element corresponds to one activation layer. Default: 'relu'.

  • out_act (Union[None, str, mindspore.nn.Cell]) – The activation function of the output layer. Default: None.
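
The list forms above allow per-layer configuration. As a quick illustration of the scalar forms, the following is a minimal configuration sketch; the channel sizes 16-8-4, the dropout rate, and the 'tanh'/'sigmoid' activation names are arbitrary illustrative choices, not defaults:

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindchemistry import AutoEncoder
>>> # Deeper bottleneck with dropout in each linear block and a sigmoid on the output layer.
>>> net = AutoEncoder([16, 8, 4], has_dropout=True, dropout_rate=0.3, act='tanh', out_act='sigmoid')
>>> x = Tensor(np.random.rand(5, 16).astype(np.float32))
>>> latents, x_recon = net(x)
>>> print(latents.shape, x_recon.shape)
(5, 4) (5, 16)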

Inputs:
  • x (Tensor) - The input data. The shape of Tensor is \((*, channels[0])\).

Outputs:
  • latents (Tensor) - The latent code. The shape of Tensor is \((*, channels[-1])\).

  • x_recon (Tensor) - The reconstructed data. The shape of Tensor is \((*, channels[0])\).

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> from mindchemistry import AutoEncoder
>>> from mindspore import Tensor
>>> inputs = Tensor(np.array([[180, 234, 154], [244, 48, 247]], np.float32))
>>> net = AutoEncoder([3, 6, 2])
>>> output = net(inputs)
>>> print(output[0].shape, output[1].shape)
(2, 2) (2, 3)
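
The tuple above is (latents, x_recon), as described in Outputs. The following is a minimal training sketch, not the library's prescribed recipe; it assumes the MindSpore 2.x functional API (mindspore.value_and_grad, nn.Adam, nn.MSELoss) and optimizes only the reconstruction with a mean-squared-error loss:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import nn, Tensor
>>> from mindchemistry import AutoEncoder
>>> net = AutoEncoder([3, 6, 2])
>>> loss_fn = nn.MSELoss()
>>> optimizer = nn.Adam(net.trainable_params(), learning_rate=1e-3)
>>> def forward_fn(x):
...     _, x_recon = net(x)          # only the reconstruction enters the loss
...     return loss_fn(x_recon, x)
>>> grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters)
>>> x = Tensor(np.random.rand(8, 3).astype(np.float32))
>>> loss, grads = grad_fn(x)
>>> _ = optimizer(grads)
>>> print(loss.shape)
()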