class mindspore.ops.AdamNoUpdateParam(use_locking=False, use_nesterov=False)

The Adam algorithm is proposed in Adam: A Method for Stochastic Optimization. This operator does not apply the update to the parameter; it computes the value that should be added to the parameter instead (see Outputs).

The updating formulas are as follows,

$$\begin{array}{ll} m = \beta_1 * m + (1 - \beta_1) * g \\ v = \beta_2 * v + (1 - \beta_2) * g * g \\ l = \alpha * \frac{\sqrt{1 - \beta_2^t}}{1 - \beta_1^t} \\ \Delta{w} = -l * \frac{m}{\sqrt{v} + \epsilon} \end{array}$$

$$m$$ represents the 1st moment vector, $$v$$ represents the 2nd moment vector, $$g$$ represents the gradient, $$l$$ represents the scaling factor, $$\beta_1, \beta_2$$ represent beta1 and beta2, $$t$$ represents the updating step, $$\beta_1^t$$ and $$\beta_2^t$$ represent beta1_power and beta2_power, $$\alpha$$ represents learning_rate, $$w$$ represents the parameter to be updated, and $$\epsilon$$ represents epsilon.
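For intuition, the formulas transcribe directly into NumPy. The sketch below covers the math only, not the operator itself (the operator also writes the new $$m$$ and $$v$$ back into its input tensors), and the function name is illustrative:

>>> import numpy as np
>>> def adam_no_update_delta(m, v, g, beta1, beta2, beta1_power, beta2_power, lr, eps):
...     m = beta1 * m + (1 - beta1) * g                        # new 1st moment
...     v = beta2 * v + (1 - beta2) * g * g                    # new 2nd moment
...     l = lr * np.sqrt(1 - beta2_power) / (1 - beta1_power)  # scaling factor
...     return m, v, -l * m / (np.sqrt(v) + eps)               # delta to add to the parameter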

Parameters
• use_locking (bool) – Whether to enable a lock to protect variable tensors from being updated. If true, updates of the var, m, and v tensors will be protected by a lock. If false, the result is unpredictable. Default: False.

• use_nesterov (bool) – Whether to use the Nesterov Accelerated Gradient (NAG) algorithm to update the gradients. If true, the gradients are updated using NAG; if false, they are updated without NAG (a sketch of the assumed NAG variant follows this list). Default: False.
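This page does not spell out the Nesterov variant. Adam-family kernels that follow the TensorFlow ApplyAdam semantics usually replace $$m$$ in the numerator with a look-ahead blend of the new moment and the current gradient; the sketch below encodes that assumption and should not be read as the confirmed MindSpore formula (the function name is illustrative):

>>> import numpy as np
>>> def nesterov_delta(m_new, v_new, g, beta1, l, eps, use_nesterov=False):
...     # Assumption: NAG-style look-ahead numerator, as in common
...     # ApplyAdam implementations; not stated explicitly on this page.
...     numerator = beta1 * m_new + (1 - beta1) * g if use_nesterov else m_new
...     return -l * numerator / (np.sqrt(v_new) + eps)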

Inputs:
• m (Tensor) - The 1st moment vector in the updating formula. The data type must be float32.

• v (Tensor) - The 2nd moment vector in the updating formula. The shape must be the same as m. The data type must be float32.

• beta1_power (Tensor) - $$\beta_1^t$$ in the updating formula. The data type must be float32.

• beta2_power (Tensor) - $$\beta_2^t$$ in the updating formula. The data type must be float32.

• lr (Tensor) - $$\alpha$$ in the updating formula (the learning rate, from which the scaling factor $$l$$ is computed). The data type must be float32.

• beta1 (Tensor) - The exponential decay rate for the 1st moment estimations. The data type must be float32.

• beta2 (Tensor) - The exponential decay rate for the 2nd moment estimations. The data type must be float32.

• epsilon (Tensor) - Term added to the denominator to improve numerical stability. The data type must be float32.

• gradient (Tensor) - Gradient, the shape must be the same as m, the data type must be float32.

Outputs:

Tensor, whose shape and data type are the same as gradient; it is the value that should be added to the parameter to be updated.
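Since only the delta is returned, applying it is left to the caller. A minimal sketch using the AssignAdd primitive, where the names w and delta are illustrative:

>>> import numpy as np
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> w = Parameter(Tensor(np.zeros((2, 3), np.float32)), name="w")  # parameter being optimized
>>> delta = Tensor(np.full((2, 3), -1e-4, np.float32))  # stands in for this operator's output
>>> assign_add = ops.AssignAdd()
>>> _ = assign_add(w, delta)  # in-place update: w <- w + delta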

Raises
• TypeError – If use_locking or use_nesterov is not a bool.

• TypeError – If m, v, beta1_power, beta2_power, lr, beta1, beta2, epsilon or gradient is not a Tensor.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, Parameter
>>> from mindspore.ops import operations as ops
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.adam = ops.AdamNoUpdateParam()
...         self.m = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="m")
...         self.v = Parameter(Tensor(np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]).astype(np.float32)),
...                            name="v")
...     def construct(self, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad):
...         out = self.adam(self.m, self.v, beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad)
...         return out
>>> net = Net()
>>> beta1_power = Tensor(0.9, ms.float32)
>>> beta2_power = Tensor(0.999, ms.float32)
>>> lr = Tensor(0.001, ms.float32)
>>> beta1 = Tensor(0.9, ms.float32)
>>> beta2 = Tensor(0.999, ms.float32)
>>> epsilon = Tensor(1e-8, ms.float32)
>>> gradient = Tensor(np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]).astype(np.float32))
>>> result = net(beta1_power, beta2_power, lr, beta1, beta2, epsilon, gradient)
>>> print(result)
[[-0.00010004 -0.00010004 -0.00010004]
 [-0.00013441 -0.00013441 -0.00013441]]
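As a sanity check, the first row of the printed result can be reproduced with plain NumPy from the formulas above (float64 here, so the digits agree only approximately):

>>> import numpy as np
>>> m, v, g = 0.1, 0.1, 0.1
>>> m = 0.9 * m + 0.1 * g
>>> v = 0.999 * v + 0.001 * g * g
>>> l = 0.001 * np.sqrt(1 - 0.999) / (1 - 0.9)
>>> print(float(-l * m / (np.sqrt(v) + 1e-8)))  # prints approximately -0.00010004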