mindspore.amp.FixedLossScaleManager
- class mindspore.amp.FixedLossScaleManager(loss_scale=128.0, drop_overflow_update=True)[source]
Loss scale (magnification factor of gradients when mixed precision is used) manager with a fixed loss scale value, inherits from mindspore.amp.LossScaleManager.
- Parameters
loss_scale (float, optional) – Magnification factor of gradients. Note that if drop_overflow_update is set to False, the value of loss_scale in the optimizer should be set to the same value as here. Default: 128.0.
drop_overflow_update (bool, optional) – Whether to drop the optimizer update for the current step when an overflow occurs. If True, the optimizer will not be executed when overflow occurs. Default: True.
Examples
>>> import mindspore as ms
>>> from mindspore import amp, nn
>>>
>>> # Define the network structure of LeNet5. Refer to
>>> # https://atomgit.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>> loss_scale = 1024.0
>>> loss_scale_manager = amp.FixedLossScaleManager(loss_scale, False)
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9, loss_scale=loss_scale)
>>> model = ms.Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)
- get_drop_overflow_update()[source]
Get the value of drop_overflow_update, which indicates whether the optimizer update is dropped for the current step when an overflow occurs.
- Returns
bool, drop_overflow_update value.
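Examples
A minimal usage sketch; the constructor arguments below are illustrative only.
>>> from mindspore import amp
>>> manager = amp.FixedLossScaleManager(loss_scale=1024.0, drop_overflow_update=False)
>>> manager.get_drop_overflow_update()
False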
- get_update_cell()[source]
Returns the instance of mindspore.nn.Cell that is used to update the loss scale, which will be called by mindspore.nn.TrainOneStepWithLossScaleCell. As the loss scale is fixed in this class, the instance will do nothing.
- Returns
None or mindspore.nn.FixedLossScaleUpdateCell. Instance of mindspore.nn.FixedLossScaleUpdateCell when drop_overflow_update is True. None when drop_overflow_update is False.
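Examples
A short sketch of the two return cases; the loss scale value is illustrative only.
>>> from mindspore import amp, nn
>>> amp.FixedLossScaleManager(1024.0, drop_overflow_update=False).get_update_cell() is None
True
>>> update_cell = amp.FixedLossScaleManager(1024.0, drop_overflow_update=True).get_update_cell()
>>> isinstance(update_cell, nn.FixedLossScaleUpdateCell)
True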
- update_loss_scale(overflow)[source]
Update the loss scale value. In mindspore.amp.FixedLossScaleManager this interface does nothing, since the loss scale is fixed.
- Parameters
overflow (bool) – Whether an overflow occurs.
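Examples
A minimal sketch showing that the call is a no-op for this manager; the values are illustrative only.
>>> from mindspore import amp
>>> manager = amp.FixedLossScaleManager(loss_scale=1024.0, drop_overflow_update=False)
>>> manager.update_loss_scale(overflow=True)
>>> manager.get_loss_scale()
1024.0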