mindspore.experimental.optim.lr_scheduler.ExponentialLR

class mindspore.experimental.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1)

For each epoch, the learning rate decays exponentially: it is multiplied by gamma at every step. Note that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.
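As a point of reference, the update follows the closed form lr_t = lr_0 * gamma^t after t scheduler steps. The snippet below is a minimal sketch of that rule in plain Python, not the scheduler's internal implementation; the initial rate 0.01 and gamma 0.5 mirror the example further down.

>>> # Sketch of the decay rule only, not the scheduler's internal code:
>>> # after t calls to step(), the learning rate is initial_lr * gamma**t.
>>> initial_lr, gamma = 0.01, 0.5
>>> [initial_lr * gamma ** t for t in range(1, 4)]
[0.005, 0.0025, 0.00125]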

Warning

This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in Experimental Optimizer.

Parameters

- optimizer (Optimizer) – Wrapped optimizer.
- gamma (float) – Multiplicative factor of learning rate decay.
- last_epoch (int, optional) – The index of the last epoch. Default: -1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.fc = nn.Dense(16 * 5 * 5, 120)
...     def construct(self, x):
...         return self.fc(x)
>>> net = Net()
>>> optimizer = optim.Adam(net.trainable_params(), 0.01)
>>> scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)
>>> for i in range(3):
...     scheduler.step()
...     current_lr = scheduler.get_last_lr()
...     print(current_lr)
[Tensor(shape=[], dtype=Float32, value= 0.005)]
[Tensor(shape=[], dtype=Float32, value= 0.0025)]
[Tensor(shape=[], dtype=Float32, value= 0.00125)]
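
In a full training run the scheduler is typically stepped once per epoch, after the optimizer has applied its updates for that epoch. The loop below is a minimal sketch reusing the net, optimizer, and scheduler defined above, with the forward/backward/update work elided as a comment:

>>> for epoch in range(2):
...     # ... run forward, backward, and optimizer updates for one epoch ...
...     scheduler.step()  # multiplies each group's lr by gamma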