mindchemistry.e3.nn.NormActivation
- class mindchemistry.e3.nn.NormActivation(irreps_in, act, normalize=True, epsilon=None, bias=False, init_method='zeros', dtype=float32, ncon_dtype=float32)[source]
- Activation function for the norm of irreps. Applies a scalar activation to the norm of each irrep and outputs a (normalized) version of that irrep, multiplied by the scalar output of the activation.
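- In symbols, a sketch inferred from the description above and the parameter meanings below (not a formula quoted from the implementation): write the input as irrep blocks \(x_i\), let \(\phi\) be act and \(b_i\) the learnable bias (zero when bias is False); then

  \[y_i = \phi\!\left(\lVert x_i \rVert + b_i\right)\,\frac{x_i}{\lVert x_i \rVert} \quad \text{if normalize is True}, \qquad y_i = \phi\!\left(\lVert x_i \rVert + b_i\right)\, x_i \quad \text{otherwise},\]

  where \(\lVert x_i \rVert\) is clamped from below to epsilon when epsilon is set.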
- Parameters
- irreps_in (Union[str, Irrep, Irreps]) – the irreps of the input features.
- act (Func) – an activation function applied to the norm of each irrep in irreps_in.
- normalize (bool) – whether to normalize the input features before multiplying them by the scalars from the nonlinearity. Default: True. 
- epsilon (float) – when normalize is True, norms smaller than epsilon will be clamped up to epsilon to avoid division by zero. Not allowed when normalize is False. Default: None.
- bias (bool) – whether to apply a learnable additive bias to the inputs of the act. Default: False. 
- init_method (Union[str, float, mindspore.common.initializer]) – the method used to initialize the parameters. Default: 'zeros'.
- dtype (mindspore.dtype) – The type of input tensor. Default: mindspore.float32.
- ncon_dtype (mindspore.dtype) – The type of input tensors of the ncon computation module. Default: mindspore.float32.
 
 - Inputs:
- input (Tensor) - The shape of Tensor is \((..., irreps\_in.dim)\). 
 
- Outputs:
- output (Tensor) - The shape of Tensor is \((..., irreps\_in.dim)\). 
 
 - Raises
- ValueError – If epsilon is not None and normalize is False. 
- ValueError – If epsilon is not positive. 
 
 - Supported Platforms:
- Ascend
- Examples
  >>> from mindchemistry.e3.nn import NormActivation
  >>> from mindspore import ops, Tensor, set_context
  >>> set_context(device_id=6)
  >>> norm_activation = NormActivation("2x1e", ops.sigmoid, bias=True)
  >>> print(norm_activation)
  NormActivation [sigmoid] (2x1e -> 2x1e)
  >>> inputs = Tensor(ops.ones((4, 6)))
  >>> outputs = norm_activation(inputs)
  >>> print(outputs.shape)
  (4, 6)
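- An additional illustrative sketch (not from the official examples) of how epsilon interacts with normalize, based on the parameter descriptions and the Raises section above; the exact error message is not shown:
  >>> # Clamp tiny norms to epsilon to avoid division by zero when normalizing:
  >>> safe = NormActivation("2x1e", ops.sigmoid, epsilon=1e-8)
  >>> # Skip the division by the norm entirely:
  >>> plain = NormActivation("2x1e", ops.sigmoid, normalize=False)
  >>> # Combining normalize=False with a non-None epsilon is rejected (see Raises):
  >>> try:
  ...     NormActivation("2x1e", ops.sigmoid, normalize=False, epsilon=1e-8)
  ... except ValueError:
  ...     print("epsilon requires normalize=True")
  epsilon requires normalize=True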