mindspore.nn.piecewise_constant_lr

mindspore.nn.piecewise_constant_lr(milestone, learning_rates)

Get the piecewise constant learning rate. The learning rate for each step is stored in a list.

The learning rate is calculated from the given milestone and learning_rates. Let milestone be \((M_1, M_2, ..., M_t, ..., M_N)\) and learning_rates be \((x_1, x_2, ..., x_t, ..., x_N)\), where \(N\) is the length of milestone. With \(M_0 = 0\), the output learning rate \(y[i]\) for the \(i\)-th step is:

\[y[i] = x_t, \quad \text{for } i \in [M_{t-1}, M_t)\]
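For illustration, the rule above can be reproduced in a few lines of plain Python. This is a minimal sketch, not the library's implementation; the helper piecewise_constant is hypothetical and assumes \(M_0 = 0\):

>>> def piecewise_constant(milestone, learning_rates):
...     # Hypothetical re-implementation for illustration only.
...     # Repeat each rate x_t for the steps in [M_{t-1}, M_t), with M_0 = 0.
...     lr, last = [], 0
...     for m, x in zip(milestone, learning_rates):
...         lr.extend([x] * (m - last))
...         last = m
...     return lr
...
>>> piecewise_constant([2, 5, 10], [0.1, 0.05, 0.01])
[0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01]

Each rate \(x_t\) is repeated \(M_t - M_{t-1}\) times, which is why the returned list has \(M_N\) entries in total.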
Parameters
  • milestone (Union[list[int], tuple[int]]) – A list of milestone steps. When a milestone step is reached, the corresponding value in learning_rates is used. The list must be monotonically increasing, and every element must be greater than 0.

  • learning_rates (Union[list[float], tuple[float]]) – A list of learning rates.

Returns

list[float]. The size of the list is \(M_N\), i.e. the last element of milestone.

Raises
  • TypeError – If milestone or learning_rates is neither a tuple nor a list.

  • ValueError – If the lengths of milestone and learning_rates are not equal.

  • ValueError – If the values in milestone are not monotonically increasing.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.nn as nn
>>>
>>> milestone = [2, 5, 10]
>>> learning_rates = [0.1, 0.05, 0.01]
>>> lr = nn.piecewise_constant_lr(milestone, learning_rates)
>>> # lr[step] = 0.1   if 0 <= step < 2
>>> # lr[step] = 0.05  if 2 <= step < 5
>>> # lr[step] = 0.01  if 5 <= step < 10
>>> net = nn.Dense(2, 3)
>>> optim = nn.SGD(net.trainable_params(), learning_rate=lr)
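Continuing the example, the returned list can be inspected directly; each entry is the learning rate for the corresponding step, and the list's length equals the last milestone:

>>> print(lr)
[0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01]
>>> len(lr) == milestone[-1]
True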