mindspore.experimental.optim.lr_scheduler.LRScheduler

class mindspore.experimental.optim.lr_scheduler.LRScheduler(optimizer, last_epoch=-1)

Base class for learning rate schedulers.

Warning

This is an experimental lr scheduler module that is subject to change. This module must be used with the optimizers in Experimental Optimizer.

Parameters
  • optimizer (Optimizer) – The optimizer instance.

  • last_epoch (int, optional) – The index of the last epoch. Default: -1.

Raises
  • TypeError – If optimizer is not an Optimizer.

  • KeyError – If last_epoch != -1 and 'initial_lr' not in param groups.

  • ValueError – If last_epoch is not an int.

  • ValueError – If last_epoch is not greater than -1.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import nn
>>> from mindspore.experimental import optim
>>>
>>> class ConstantLR(optim.lr_scheduler.LRScheduler):
...     def __init__(self, optimizer, factor=0.5, total_iters=3, last_epoch=-1):
...         self.factor = factor
...         self.total_iters = total_iters
...         super(ConstantLR, self).__init__(optimizer, last_epoch)
...
...     def get_lr(self):
...         if self.last_epoch == 0:
...             return [lr * self.factor for lr in self._last_lr]
...         if self.last_epoch != self.total_iters:
...             return [lr * 1. for lr in self._last_lr]
...         return [lr / self.factor for lr in self._last_lr]
>>>
>>> net = nn.Dense(8, 2)
>>> optimizer = optim.SGD(net.trainable_params(), 0.01)
>>> scheduler = ConstantLR(optimizer)
>>> for i in range(4):
...     scheduler.step()
...     current_lr = scheduler.get_last_lr()
...     print(current_lr)
[Tensor(shape=[], dtype=Float32, value= 0.005)]
[Tensor(shape=[], dtype=Float32, value= 0.005)]
[Tensor(shape=[], dtype=Float32, value= 0.01)]
[Tensor(shape=[], dtype=Float32, value= 0.01)]
get_last_lr()

Return the last learning rate computed by the current scheduler.
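A minimal usage sketch, reusing the ConstantLR subclass from the Examples above (the exact Tensor values depend on the schedule and the optimizer state):

>>> net = nn.Dense(8, 2)
>>> optimizer = optim.SGD(net.trainable_params(), 0.01)
>>> scheduler = ConstantLR(optimizer)
>>> scheduler.step()
>>> last_lr = scheduler.get_last_lr()  # list with one learning rate Tensor per parameter group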

step(epoch=None)

Compute the new learning rate and update the learning rate in the optimizer.

Parameters

epoch (int, optional) – The index of the last epoch. Default: None.
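
A short sketch of a hypothetical training loop that drives the scheduler; calling step() with no argument advances the internal epoch counter by one, while an explicit epoch index can be supplied per the signature above:

>>> scheduler = ConstantLR(optim.SGD(nn.Dense(8, 2).trainable_params(), 0.01))
>>> for epoch in range(3):
...     # ... forward/backward passes and optimizer updates for this epoch ...
...     scheduler.step()           # recompute the lr for the next epoch
>>> scheduler.step(epoch=10)       # or pass the epoch index explicitly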