The Adam algorithm was proposed in Adam: A Method for Stochastic Optimization.

For more details, please refer to mindspore.nn.Adam.

The updating formulas are as follows,

$\begin{split}\begin{array}{ll} \\ m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \\ w = w - l * \frac{m}{\sqrt{v} + \epsilon} \end{array}\end{split}$

$$m$$ represents the 1st moment vector, $$v$$ represents the 2nd moment vector, $$g$$ represents the gradient, $$l$$ represents the scaling factor lr, $$\beta_1, \beta_2$$ represent beta1 and beta2, $$t$$ represents the updating step, $$\beta_1^t$$ and $$\beta_2^t$$ represent beta1_power and beta2_power, $$\alpha$$ represents learning_rate, $$w$$ represents var, and $$\epsilon$$ represents epsilon.
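
As a worked illustration of these formulas, one update step can be sketched in NumPy (a minimal sketch; adam_step is a hypothetical helper named here for illustration, not part of the operator's API, and the hyperparameter defaults follow the paper):

    import numpy as np

    def adam_step(w, m, v, g, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # Update the biased 1st and 2nd moment estimates.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        # The scaling factor l folds the bias correction into the learning rate.
        l = alpha * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
        # Apply the update to the weights.
        w = w - l * m / (np.sqrt(v) + eps)
        return w, m, v

Calling adam_step with t = 1, 2, ... mirrors the bias-corrected update the operator applies at each step.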

Parameters
• use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

• use_nesterov (bool) – Whether to use the Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, the gradients are updated using NAG; if false, they are updated without NAG. Default: False.
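
Both parameters are supplied when the operator is constructed. A minimal construction sketch, assuming the operator class is mindspore.ops.Adam, consistent with the Examples below:

    from mindspore import ops

    # Enable Nesterov-accelerated updates; leave the update lock disabled.
    adam = ops.Adam(use_locking=False, use_nesterov=True)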

Inputs:
• var (Tensor) - Weights to be updated. The shape is $$(N, *)$$, where $$*$$ means any number of additional dimensions. The data type can be float16 or float32.

• m (Tensor) - The 1st moment vector in the updating formula; the shape and data type should be the same as var.

• v (Tensor) - The 2nd moment vector in the updating formula; mean square gradients with the same shape and data type as var.

• beta1_power (float) - $$\beta_1^t$$ in the updating formula; the data type should be the same as var.

• beta2_power (float) - $$\beta_2^t$$ in the updating formula; the data type should be the same as var.

• lr (float) - $$l$$ in the updating formula. The paper suggested value is $$10^{-3}$$; the data type should be the same as var.

• beta1 (float) - The exponential decay rate for the 1st moment estimates; the data type should be the same as var. The paper suggested value is $$0.9$$.

• beta2 (float) - The exponential decay rate for the 2nd moment estimates; the data type should be the same as var. The paper suggested value is $$0.999$$.

• epsilon (float) - Term added to the denominator to improve numerical stability. The paper suggested value is $$10^{-8}$$.

• gradient (Tensor) - The gradient, with the same shape and data type as var.

Outputs:

Tuple of 3 Tensors, the updated parameters.

• var (Tensor) - The same shape and data type as the input var.

• m (Tensor) - The same shape and data type as the input m.

• v (Tensor) - The same shape and data type as the input v.

Raises
• TypeError – If use_locking or use_nesterov is not a bool.

• TypeError – If var, m or v is not a Tensor.

• TypeError – If beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, Parameter, nn, ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.apply_adam = ops.Adam()
...         self.var = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="var")
...         self.m = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="m")
...         self.v = Parameter(Tensor(np.ones([2, 2]).astype(np.float32)), name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.apply_adam(self.var, self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2,
...                               epsilon, grad)
...         return out
>>> net = Net()
>>> gradient = Tensor(np.ones([2, 2]).astype(np.float32))
>>> output = net(0.9, 0.999, 0.001, 0.9, 0.999, 1e-8, gradient)