mindspore.nn.learning_rate_schedule

Learning rate schedule.

class mindspore.nn.learning_rate_schedule.CosineDecayLR(min_lr, max_lr, decay_steps)[source]

Calculates the learning rate based on the cosine decay function.

For the i-th step, the formula of computing decayed_learning_rate[i] is:

\[decayed\_learning\_rate[i] = min\_learning\_rate + 0.5 * (max\_learning\_rate - min\_learning\_rate) * (1 + cos(\frac{current\_step}{decay\_steps}\pi))\]
Parameters
  • min_lr (float) – The minimum value of learning rate.

  • max_lr (float) – The maximum value of learning rate.

  • decay_steps (int) – The number of steps over which the learning rate decays from max_lr to min_lr.

Inputs:

Tensor. The current step number.

Returns

Tensor. The learning rate value for the current step.

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor
>>> from mindspore.nn.learning_rate_schedule import CosineDecayLR
>>> min_lr = 0.01
>>> max_lr = 0.1
>>> decay_steps = 4
>>> global_step = Tensor(2, mstype.int32)
>>> cosine_decay_lr = CosineDecayLR(min_lr, max_lr, decay_steps)
>>> cosine_decay_lr(global_step)
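Independently of MindSpore, the cosine formula can be sanity-checked with plain Python for the example values above (this reimplements only the math, not the MindSpore operator):

```python
import math

# Cosine decay at step 2 of 4, between min_lr=0.01 and max_lr=0.1
min_lr, max_lr, decay_steps = 0.01, 0.1, 4
current_step = 2
decayed_lr = min_lr + 0.5 * (max_lr - min_lr) * (
    1 + math.cos(math.pi * current_step / decay_steps))
print(decayed_lr)  # ≈ 0.055, since cos(pi/2) is effectively 0
```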
class mindspore.nn.learning_rate_schedule.ExponentialDecayLR(learning_rate, decay_rate, decay_steps, is_stair=False)[source]

Calculates the learning rate based on the exponential decay function.

For the i-th step, the formula of computing decayed_learning_rate[i] is:

\[decayed\_learning\_rate[i] = learning\_rate * decay\_rate^{p}\]

Where \(p = \frac{current\_step}{decay\_steps}\). If is_stair is True, the formula is \(p = floor(\frac{current\_step}{decay\_steps})\).

Parameters
  • learning_rate (float) – The initial value of learning rate.

  • decay_rate (float) – The decay rate.

  • decay_steps (int) – The number of steps in one decay interval; current_step is divided by this value when computing p.

  • is_stair (bool) – If true, the learning rate is decayed once every decay_steps steps; otherwise, it is decayed at every step. Default: False.

Inputs:

Tensor. The current step number.

Returns

Tensor. The learning rate value for the current step.

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor
>>> from mindspore.nn.learning_rate_schedule import ExponentialDecayLR
>>> learning_rate = 0.1
>>> decay_rate = 0.9
>>> decay_steps = 4
>>> global_step = Tensor(2, mstype.int32)
>>> exponential_decay_lr = ExponentialDecayLR(learning_rate, decay_rate, decay_steps)
>>> exponential_decay_lr(global_step)
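The exponential decay formula can be checked with plain Python for the example values above (math only, not the MindSpore operator):

```python
import math

# Exponential decay at step 2, decay_steps=4, without the staircase option
learning_rate, decay_rate, decay_steps = 0.1, 0.9, 4
current_step, is_stair = 2, False
p = current_step / decay_steps
if is_stair:
    p = math.floor(p)
decayed_lr = learning_rate * decay_rate ** p
print(decayed_lr)  # ≈ 0.09487, i.e. 0.1 * 0.9 ** 0.5
```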
class mindspore.nn.learning_rate_schedule.InverseDecayLR(learning_rate, decay_rate, decay_steps, is_stair=False)[source]

Calculates the learning rate based on the inverse-time decay function.

For the i-th step, the formula of computing decayed_learning_rate[i] is:

\[decayed\_learning\_rate[i] = learning\_rate / (1 + decay\_rate * p)\]

Where \(p = \frac{current\_step}{decay\_steps}\). If is_stair is True, the formula is \(p = floor(\frac{current\_step}{decay\_steps})\).

Parameters
  • learning_rate (float) – The initial value of learning rate.

  • decay_rate (float) – The decay rate.

  • decay_steps (int) – The number of steps in one decay interval; current_step is divided by this value when computing p.

  • is_stair (bool) – If true, the learning rate is decayed once every decay_steps steps; otherwise, it is decayed at every step. Default: False.

Inputs:

Tensor. The current step number.

Returns

Tensor. The learning rate value for the current step.

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor
>>> from mindspore.nn.learning_rate_schedule import InverseDecayLR
>>> learning_rate = 0.1
>>> decay_rate = 0.9
>>> decay_steps = 4
>>> global_step = Tensor(2, mstype.int32)
>>> inverse_decay_lr = InverseDecayLR(learning_rate, decay_rate, decay_steps, True)
>>> inverse_decay_lr(global_step)
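The inverse-time decay formula can be checked with plain Python for the example values above, which pass True for is_stair (math only, not the MindSpore operator):

```python
import math

# Inverse-time decay at step 2, decay_steps=4, with the staircase option
learning_rate, decay_rate, decay_steps = 0.1, 0.9, 4
current_step, is_stair = 2, True
p = current_step / decay_steps
if is_stair:
    p = math.floor(p)  # floor(2 / 4) = 0, so no decay has happened yet
decayed_lr = learning_rate / (1 + decay_rate * p)
print(decayed_lr)  # 0.1
```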
class mindspore.nn.learning_rate_schedule.NaturalExpDecayLR(learning_rate, decay_rate, decay_steps, is_stair=False)[source]

Calculates the learning rate based on the natural exponential decay function.

For the i-th step, the formula of computing decayed_learning_rate[i] is:

\[decayed\_learning\_rate[i] = learning\_rate * e^{-decay\_rate * p}\]

Where \(p = \frac{current\_step}{decay\_steps}\). If is_stair is True, the formula is \(p = floor(\frac{current\_step}{decay\_steps})\).

Parameters
  • learning_rate (float) – The initial value of learning rate.

  • decay_rate (float) – The decay rate.

  • decay_steps (int) – The number of steps in one decay interval; current_step is divided by this value when computing p.

  • is_stair (bool) – If true, the learning rate is decayed once every decay_steps steps; otherwise, it is decayed at every step. Default: False.

Inputs:

Tensor. The current step number.

Returns

Tensor. The learning rate value for the current step.

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor
>>> from mindspore.nn.learning_rate_schedule import NaturalExpDecayLR
>>> learning_rate = 0.1
>>> decay_rate = 0.9
>>> decay_steps = 4
>>> global_step = Tensor(2, mstype.int32)
>>> natural_exp_decay_lr = NaturalExpDecayLR(learning_rate, decay_rate, decay_steps, True)
>>> natural_exp_decay_lr(global_step)
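The natural exponential decay formula can be checked with plain Python for the example values above, which pass True for is_stair (math only, not the MindSpore operator):

```python
import math

# Natural exponential decay at step 2, decay_steps=4, staircase enabled
learning_rate, decay_rate, decay_steps = 0.1, 0.9, 4
current_step, is_stair = 2, True
p = current_step / decay_steps
if is_stair:
    p = math.floor(p)  # floor(2 / 4) = 0
decayed_lr = learning_rate * math.exp(-decay_rate * p)
print(decayed_lr)  # 0.1, since e ** 0 == 1
```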
class mindspore.nn.learning_rate_schedule.PolynomialDecayLR(learning_rate, end_learning_rate, decay_steps, power, update_decay_steps=False)[source]

Calculates the learning rate based on the polynomial decay function.

For the i-th step, the formula of computing decayed_learning_rate[i] is:

\[decayed\_learning\_rate[i] = (learning\_rate - end\_learning\_rate) * (1 - tmp\_step / tmp\_decay\_steps)^{power} + end\_learning\_rate\]

Where \(tmp\_step = min(current\_step, decay\_steps)\). If update_decay_steps is true, the value of tmp_decay_steps is updated once every decay_steps steps according to \(tmp\_decay\_steps = decay\_steps * ceil(current\_step / decay\_steps)\); otherwise, \(tmp\_decay\_steps = decay\_steps\).

Parameters
  • learning_rate (float) – The initial value of learning rate.

  • end_learning_rate (float) – The end value of learning rate.

  • decay_steps (int) – The number of steps over which the learning rate decays from learning_rate to end_learning_rate.

  • power (float) – The power of the polynomial. It must be greater than 0.

  • update_decay_steps (bool) – If true, the value of tmp_decay_steps is updated once every decay_steps steps. Default: False.

Inputs:

Tensor. The current step number.

Returns

Tensor. The learning rate value for the current step.

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor
>>> from mindspore.nn.learning_rate_schedule import PolynomialDecayLR
>>> learning_rate = 0.1
>>> end_learning_rate = 0.01
>>> decay_steps = 4
>>> power = 0.5
>>> global_step = Tensor(2, mstype.int32)
>>> polynomial_decay_lr = PolynomialDecayLR(learning_rate, end_learning_rate, decay_steps, power)
>>> polynomial_decay_lr(global_step)
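The polynomial decay formula can be checked with plain Python for the example values above (math only, not the MindSpore operator):

```python
# Polynomial decay at step 2 of 4, power 0.5, update_decay_steps=False
learning_rate, end_learning_rate = 0.1, 0.01
decay_steps, power = 4, 0.5
current_step = 2
tmp_step = min(current_step, decay_steps)
tmp_decay_steps = decay_steps  # unchanged because update_decay_steps is False
decayed_lr = ((learning_rate - end_learning_rate)
              * (1 - tmp_step / tmp_decay_steps) ** power
              + end_learning_rate)
print(decayed_lr)  # ≈ 0.07364, i.e. 0.09 * sqrt(0.5) + 0.01
```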
class mindspore.nn.learning_rate_schedule.WarmUpLR(learning_rate, warmup_steps)[source]

Calculates the learning rate during warm-up.

For the i-th step, the formula of computing warmup_learning_rate[i] is:

\[warmup\_learning\_rate[i] = learning\_rate * tmp\_step / warmup\_steps\]

Where \(tmp\_step=min(current\_step, warmup\_steps)\).

Parameters
  • learning_rate (float) – The initial value of learning rate.

  • warmup_steps (int) – The number of warm-up steps.

Inputs:

Tensor. The current step number.

Returns

Tensor. The learning rate value for the current step.

Examples

>>> import mindspore.common.dtype as mstype
>>> from mindspore import Tensor
>>> from mindspore.nn.learning_rate_schedule import WarmUpLR
>>> learning_rate = 0.1
>>> warmup_steps = 2
>>> global_step = Tensor(2, mstype.int32)
>>> warmup_lr = WarmUpLR(learning_rate, warmup_steps)
>>> warmup_lr(global_step)
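The warm-up formula can be checked with plain Python for the example values above (math only, not the MindSpore operator):

```python
# Linear warm-up: at step 2 with warmup_steps=2 the warm-up is complete
learning_rate, warmup_steps = 0.1, 2
current_step = 2
tmp_step = min(current_step, warmup_steps)
warmup_lr = learning_rate * tmp_step / warmup_steps
print(warmup_lr)  # 0.1, the full learning rate
```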