mindspore.experimental.optim.lr_scheduler.MultiStepLR

class mindspore.experimental.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1)

Multiply the learning rate of each parameter group by gamma once the number of epochs reaches one of the milestones. Note that such a decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, the initial lr is set to lr.

Warning

This is an experimental lr scheduler module that is subject to change. This module must be used with optimizers in Experimental Optimizer.

Parameters
  • optimizer (mindspore.experimental.optim.Optimizer) – Wrapped optimizer.

  • milestones (list) – List of epoch indices. When last_epoch reaches a milestone, the learning rate of each parameter group is multiplied by gamma (the resulting schedule is illustrated in the sketch after this list).

  • gamma (float, optional) – Multiplicative factor of learning rate decay. Default: 0.1.

  • last_epoch (int, optional) – The index of the last epoch. Default: -1.
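
The resulting schedule can be reasoned about with a small, illustrative sketch (not the scheduler's implementation): at a given epoch, the learning rate of a group equals its initial lr multiplied by gamma raised to the number of milestones already reached. The helper expected_lr below is hypothetical and uses only the Python standard library.

>>> from bisect import bisect_right
>>> def expected_lr(base_lr, milestones, gamma, epoch):
...     # Illustrative helper (not part of MindSpore): the decay power is the
...     # number of milestones that `epoch` has already reached.
...     return base_lr * gamma ** bisect_right(milestones, epoch)
>>> for epoch in range(6):
...     print(f"{expected_lr(0.05, [2, 4], 0.1, epoch):.4f}")
0.0500
0.0500
0.0050
0.0050
0.0005
0.0005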

Raises
  • TypeError – If milestones is not a list.

  • TypeError – If the elements of milestones are not int.

  • TypeError – If gamma is not a float.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>> net = nn.Dense(2, 3)
>>> optimizer = optim.Adam(net.trainable_params(), 0.05)
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 2
>>> # lr = 0.005    if 2 <= epoch < 4
>>> # lr = 0.0005   if epoch >= 4
>>> scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[2,4], gamma=0.1)
>>> for i in range(6):
...     scheduler.step()
...     current_lr = scheduler.get_last_lr()
...     print(current_lr)
[Tensor(shape=[], dtype=Float32, value= 0.05)]
[Tensor(shape=[], dtype=Float32, value= 0.005)]
[Tensor(shape=[], dtype=Float32, value= 0.005)]
[Tensor(shape=[], dtype=Float32, value= 0.0005)]
[Tensor(shape=[], dtype=Float32, value= 0.0005)]
[Tensor(shape=[], dtype=Float32, value= 0.0005)]
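
As a usage note, scheduler.step() is typically called once per epoch, after the parameter updates for that epoch. A minimal sketch of that placement, continuing from the objects above and assuming toy random data and an MSE loss (both are illustrative choices, not requirements):

>>> import numpy as np
>>> import mindspore as ms
>>> loss_fn = nn.MSELoss()
>>> data = ms.Tensor(np.random.rand(4, 2), ms.float32)
>>> label = ms.Tensor(np.random.rand(4, 3), ms.float32)
>>> def forward_fn(x, y):
...     return loss_fn(net(x), y)
>>> grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters)
>>> for epoch in range(2):
...     loss, grads = grad_fn(data, label)   # one (toy) training step for this epoch
...     optimizer(grads)                     # apply the parameter update
...     scheduler.step()                     # then advance the lr schedule once per epoch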