mindflow.common.get_poly_lr

mindflow.common.get_poly_lr(global_step, lr_init, lr_end, lr_max, warmup_steps, total_steps, poly_power)[source]

Generate a polynomial decay learning rate array. The learning rate decays polynomially as training progresses: during warmup it follows \(lr = lr_init + step * (lr_max - lr_init) / warmup_steps\), and afterwards \(lr = lr_end + (lr_max - lr_end) * [(1 - (step - warmup_steps) / (total_steps - warmup_steps))]^{poly_power}\).
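
A minimal NumPy sketch of the schedule described above, assuming the standard warmup-then-polynomial-decay reading of the formulas. This is an illustration, not the library source; the float32 cast, the clip at zero, and the slice from global_step are assumptions inferred from the example at the bottom of this page.

import numpy as np

def poly_lr_sketch(global_step, lr_init, lr_end, lr_max,
                   warmup_steps, total_steps, poly_power):
    """Hypothetical re-implementation of the schedule above."""
    lrs = []
    for i in range(total_steps):
        if i < warmup_steps:
            # Linear warmup from lr_init toward lr_max.
            lr = lr_init + i * (lr_max - lr_init) / warmup_steps
        else:
            # Polynomial decay from lr_max toward lr_end.
            frac = 1.0 - (i - warmup_steps) / (total_steps - warmup_steps)
            lr = lr_end + (lr_max - lr_end) * frac ** poly_power
        lrs.append(max(lr, 0.0))
    # Drop the steps already taken, which matches the (9900,) shape in the
    # example: 10000 total steps minus a global_step of 100.
    return np.array(lrs, dtype=np.float32)[global_step:]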

Parameters
  • global_step (int) – current step number, non-negative int value.

  • lr_init (float) – init learning rate, positive float value.

  • lr_end (float) – end learning rate, non-negative float value.

  • lr_max (float) – max learning rate, positive float value.

  • warmup_steps (int) – number of warmup steps, non-negative int value.

  • total_steps (int) – total number of training steps, positive int value.

  • poly_power (float) – poly learning rate power, positive float value.

Returns

numpy.ndarray, learning rate array.

Supported Platforms:

Ascend GPU

Examples

>>> from mindflow.common import get_poly_lr
>>> learning_rate = get_poly_lr(100, 0.001, 0.1, 0.0001, 1000, 10000, 0.5)
>>> print(learning_rate.shape)
(9900,)
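
The returned array can be supplied to a MindSpore optimizer as a per-step dynamic learning rate. A usage sketch, assuming MindSpore is installed and `net` is an already-constructed network (both are placeholders, not part of this API):

>>> from mindspore import nn, Tensor
>>> optimizer = nn.Adam(net.trainable_params(), learning_rate=Tensor(learning_rate))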