mindspore.train.EarlyStopping

class mindspore.train.EarlyStopping(monitor='eval_loss', min_delta=0, patience=0, verbose=False, mode='auto', baseline=None, restore_best_weights=False)[source]

Stop training when a monitored metric has stopped improving.

Assume the monitored quantity is “accuracy”; mode would then be “max”, since the goal of training is to maximize accuracy. At the end of each epoch, the model.fit() training loop checks whether the accuracy has stopped increasing, taking min_delta and patience into account if applicable. Once the accuracy is found to be no longer increasing, run_context.request_stop() is called and training terminates.
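The decision rule described above can be sketched in plain Python (a minimal illustration of the min_delta/patience interplay, not the actual MindSpore implementation): in “max” mode an epoch counts as an improvement only when the monitored value exceeds the best value seen so far by more than min_delta; otherwise an internal wait counter is incremented, and training stops once it reaches patience.

```python
def early_stop_epoch(history, min_delta=0.0, patience=0):
    """Return the 1-based epoch at which training would stop, or None
    if it runs through the whole history. Illustrative "max" mode only."""
    best = float("-inf")
    wait = 0
    for epoch, value in enumerate(history, start=1):
        if value > best + min_delta:   # significant improvement: reset counter
            best = value
            wait = 0
        else:                          # no improvement this epoch
            wait += 1
            if wait >= patience:       # waited long enough: stop training
                return epoch
    return None

# Accuracy plateaus after epoch 3; with patience=2, training stops at epoch 5.
print(early_stop_epoch([0.60, 0.70, 0.75, 0.75, 0.75], patience=2))  # 5
```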

Parameters
  • monitor (str) – quantity to be monitored. If evaluation is performed at the end of training epochs, the valid monitors are “loss”, “eval_loss”, or the metric names passed when instantiating the Model; otherwise, the only valid monitor is “loss”. When monitor is “loss” and the train network has multiple outputs, the first element is used as the training loss. Default: “eval_loss”.

  • patience (int) – a monitor value that exceeds the historical best value by more than min_delta counts as an improvement; patience is the number of epochs with no improvement to wait before stopping. When the internal wait counter self.wait is greater than or equal to patience, the training process is stopped. Default: 0.

  • verbose (bool) – if False, run quietly; if True, print related information. Default: False.

  • mode (str) – one of {‘auto’, ‘min’, ‘max’}. In “min” mode, training stops when the quantity monitored has stopped decreasing; in “max” mode, training stops when the quantity monitored has stopped increasing; in “auto” mode, the direction is automatically inferred from the name of the monitored quantity. Default: “auto”.

  • min_delta (float) – minimum change in the monitored quantity to qualify as an improvement, so that only significant changes are considered. Default: 0.

  • baseline (float) – baseline value for the monitor. The internal wait counter is reset to zero when the monitor value improves on both the historical best value and the baseline. Default: None.

  • restore_best_weights (bool) – Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. Default: False.
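As a hedged sketch of the “auto” mode described above, the direction could be inferred from the monitor name roughly as follows: names suggesting an accuracy-like metric are maximized, while everything else (e.g. losses) is minimized. The exact heuristic MindSpore applies may differ; this is only an illustration of the idea.

```python
def infer_mode(monitor_name):
    """Illustrative heuristic for "auto" mode: guess the optimization
    direction from the monitor name (assumption, not MindSpore's exact rule)."""
    if "acc" in monitor_name.lower():  # e.g. "acc", "accuracy"
        return "max"
    return "min"                        # e.g. "loss", "eval_loss"

print(infer_mode("eval_loss"))  # min
print(infer_mode("accuracy"))   # max
```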

Raises
  • ValueError – mode not in ‘auto’, ‘min’ or ‘max’.

  • ValueError – The monitor value is not a scalar.

Examples

Note

Before running the following example, you need to customize the network LeNet5 and the dataset preparation function create_dataset. Refer to Building a Network and Dataset.

>>> from mindspore import nn
>>> from mindspore.train import Model, EarlyStopping
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={"acc"})
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> cb = EarlyStopping(monitor="acc", patience=3, verbose=True)
>>> model.fit(10, dataset, callbacks=cb)
on_train_begin(run_context)[source]

Initialize variables at the beginning of training.

Parameters

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_end(run_context)[source]

If verbose is True, print the stopped epoch.

Parameters

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.

on_train_epoch_end(run_context)[source]

Monitor the training process; if no improvement is seen for a patience number of epochs, the training process is stopped.

Parameters

run_context (RunContext) – Context information of the model. For more details, please refer to mindspore.train.RunContext.