mindspore.profiler.schedule
- class mindspore.profiler.schedule(*, wait: int, active: int, warmup: int = 0, repeat: int = 0, skip_first: int = 0)
This class is used to obtain the action to perform at each step. The schedule is as follows:

    (NONE)        (NONE)          (NONE)       (WARM_UP)       (RECORD)      (RECORD)     (RECORD_AND_SAVE)    None
    START------->skip_first------->wait-------->warmup-------->active........active.........active----------->stop
                                  |                                                             |
                                  |                           repeat_1                          |
                                  ---------------------------------------------------------------
The profiler will skip the first `skip_first` steps, then wait for `wait` steps, then run the warm-up for the next `warmup` steps, then do the active recording for the next `active` steps, and then repeat the cycle starting with `wait` steps. The optional number of cycles is specified with the `repeat` parameter; a value of zero means that the cycles will continue until the profiling is finished (see the sketch after the parameter list below).

- Keyword Arguments
- wait (int) – The number of steps to wait before starting the warm-up phase. Must be greater than or equal to 0. If the wait parameter is not set externally, it is set to `0` when the schedule class is initialized.
- active (int) – The number of steps to record data during the active phase. Must be greater than or equal to 1. If the active parameter is not set externally, it is set to `1` when the schedule class is initialized.
- warmup (int, optional) – The number of steps to perform the warm-up phase. Must be greater than or equal to 0. Default value: `0`.
- repeat (int, optional) – The number of times to repeat the cycle. Must be greater than or equal to 0. If repeat is set to `0`, the Profiler determines the repeat value from the number of times the model is trained, in which case one extra, incompletely collected set of performance data is generated; the data in the last step is abnormal and users do not need to pay attention to it. Default value: `0`.
- skip_first (int, optional) – The number of steps to skip at the beginning. Must be greater than or equal to 0. Default value: `0`.
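To make the phase arithmetic above concrete, here is a minimal sketch that reimplements the step-to-phase mapping in plain Python. It is an illustration of the documented behavior only, not MindSpore's actual implementation; the function name phase_for_step is hypothetical.

    # Minimal sketch of the documented step-to-phase mapping
    # (illustration only, not MindSpore's actual implementation).
    def phase_for_step(step, wait, active, warmup=0, repeat=0, skip_first=0):
        if step < skip_first:
            return "NONE (skip_first)"
        step -= skip_first
        cycle_len = wait + warmup + active
        # repeat=0 means the cycles continue until profiling is finished.
        if repeat > 0 and step >= repeat * cycle_len:
            return "NONE (profiling finished)"
        pos = step % cycle_len
        if pos < wait:
            return "NONE (wait)"
        if pos < wait + warmup:
            return "WARM_UP"
        # The last active step of each cycle records and saves the trace.
        return "RECORD_AND_SAVE" if pos == cycle_len - 1 else "RECORD"

    # The schedule used in the example below: 10 training steps with
    # schedule(wait=1, warmup=1, active=2, repeat=1, skip_first=2).
    for s in range(10):
        print(s, phase_for_step(s, wait=1, active=2, warmup=1, repeat=1, skip_first=2))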
- Raises
ValueError – When the step parameter is less than 0.
- Supported Platforms:
Ascend
Examples
>>> import numpy as np
>>> import mindspore
>>> import mindspore.dataset as ds
>>> from mindspore import context, nn
>>> from mindspore.profiler import ProfilerLevel, AicoreMetrics, ExportType, ProfilerActivity
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.fc = nn.Dense(2, 2)
...
...     def construct(self, x):
...         return self.fc(x)
>>>
>>> def generator_net():
...     for _ in range(2):
...         yield np.ones([2, 2]).astype(np.float32), np.ones([2]).astype(np.int32)
>>>
>>> def train(test_net):
...     optimizer = nn.Momentum(test_net.trainable_params(), 1, 0.9)
...     loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
...     data = ds.GeneratorDataset(generator_net(), ["data", "label"])
...     model = mindspore.train.Model(test_net, loss, optimizer)
...     model.train(1, data)
>>>
>>> if __name__ == '__main__':
...     # If the device_target is GPU, set the device_target to "GPU"
...     context.set_context(mode=mindspore.GRAPH_MODE)
...     mindspore.set_device("Ascend")
...
...     # Init Profiler
...     experimental_config = mindspore.profiler._ExperimentalConfig(
...         profiler_level=ProfilerLevel.Level0,
...         aic_metrics=AicoreMetrics.AiCoreNone,
...         l2_cache=False,
...         mstx=False,
...         data_simplification=False,
...         export_type=[ExportType.Text])
...     steps = 10
...     net = Net()
...     # Note that the Profiler should be initialized before model.train
...     with mindspore.profiler.profile(activities=[ProfilerActivity.CPU, ProfilerActivity.NPU],
...                                     schedule=mindspore.profiler.schedule(wait=1, warmup=1, active=2,
...                                                                          repeat=1, skip_first=2),
...                                     on_trace_ready=mindspore.profiler.tensorboard_trace_handler("./data"),
...                                     profile_memory=False,
...                                     experimental_config=experimental_config) as prof:
...
...         # Train Model
...         for step in range(steps):
...             train(net)
...             prof.step()
- to_dict()
Convert the schedule to a dict.
- Returns
dict, the parameters of the schedule and their values.
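A brief usage sketch; the exact keys of the returned dict are assumed here to mirror the constructor arguments:

>>> sched = mindspore.profiler.schedule(wait=1, warmup=1, active=2, repeat=1, skip_first=2)
>>> params = sched.to_dict()
>>> # Assumed output, with keys mirroring the constructor arguments:
>>> # {'wait': 1, 'warmup': 1, 'active': 2, 'repeat': 1, 'skip_first': 2}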