{ "cells": [ { "cell_type": "markdown", "id": "b5a1ca37", "metadata": {}, "source": [ "# 优化器\n", "\n", "`Ascend` `GPU` `CPU` `模型开发`\n", "\n", "[![在线运行](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_modelarts.png)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvcHJvZ3JhbW1pbmdfZ3VpZGUvemhfY24vbWluZHNwb3JlX29wdGltLmlweW5i&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) [![下载Notebook](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_notebook.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.6/programming_guide/zh_cn/mindspore_optim.ipynb) [![下载样例代码](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_download_code.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.6/programming_guide/zh_cn/mindspore_optim.py) [![查看源文件](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.6/docs/mindspore/programming_guide/source_zh_cn/optim.ipynb)\n", "\n", "## 概述\n", "\n", "优化器在模型训练过程中,用于计算和更新网络参数,合适的优化器可以有效减少训练时间,提高最终模型性能。最基本的优化器是梯度下降(SGD),在此基础上,很多其他的优化器进行了改进,以实现目标函数能更快速更有效地收敛到全局最优点。\n", "\n", "`mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,包含常用的优化器、学习率等,接口具备较好的通用性,可以将以后更新、更复杂的方法集成到模块里。`mindspore.nn.optim`为模型提供常用的优化器,如`mindspore.nn.SGD`、`mindspore.nn.Adam`、`mindspore.nn.Ftrl`、`mindspore.nn.LazyAdam`、`mindspore.nn.Momentum`、`mindspore.nn.RMSProp`、`mindspore.nn.LARS`、`mindspore.nn.ProximalAadagrad`和`mindspore.nn.Lamb`等。同时`mindspore.nn`提供了动态学习率的模块,分为`dynamic_lr`和`learning_rate_schedule`,学习率的灵活设置可以有效支撑目标函数的收敛和模型的训练。\n", "\n", "使用`mindspore.nn.optim`时,我们需要构建一个Optimizer实例。这个实例能够保持当前参数状态并基于计算得到的梯度进行参数更新。为了构建一个Optimizer,要指定需要优化的网络权重(必须是Parameter实例)的iterable,然后设置Optimizer的参数选项,比如学习率,权重衰减等。\n", "\n", "以下内容分别从权重学习率、权重衰减、参数分组、混合精度等方面的配置分别进行详细介绍。\n", "\n", "## 权重配置\n", "\n", "在构建Optimizer实例时,通过`params`配置模型网络中要训练和更新的权重。`params`必须配置,常见的配置方法有以下两种。\n", "\n", "### 使用Cell的网络权重获取函数\n", "\n", "`Parameter`类中包含了一个`requires_grad`的布尔型的类属性,表征了模型网络中的权重是否需要梯度来进行更新(详情可参考: )。其中大部分权重的`requires_grad`的默认值都为True;少数默认为False,例如BatchNormalize中的`moving_mean`和`moving_variance`。用户可以根据需要,自行对`requires_grad`的值进行修改。\n", "\n", "MindSpore提供了`get_parameters`方法来获取模型网络中所有权重,该方法返回了`Parameter`类型的网络权重;`trainable_params`方法本质是一个filter,过滤了`requires grad=True`的`Parameter`。用户在构建优化器时,可以通过配置`params`为`net.trainable_params()`来指定需要优化和更新的权重。\n", "\n", "代码样例如下:" ] }, { "cell_type": "code", "execution_count": 1, "id": "4cbf2d4a", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import mindspore.ops as ops\n", "from mindspore import nn, Model, Tensor, Parameter\n", "\n", "class Net(nn.Cell):\n", " def __init__(self):\n", " super(Net, self).__init__()\n", " self.matmul = ops.MatMul()\n", " self.conv = nn.Conv2d(1, 6, 5, pad_mode=\"valid\")\n", " self.param = Parameter(Tensor(np.array([1.0], np.float32)))\n", "\n", " def construct(self, x):\n", " x = self.conv(x)\n", " x = x * self.param\n", " out = self.matmul(x, x)\n", " return out\n", "\n", "net = Net()\n", "optim = nn.Adam(params=net.trainable_params())" ] }, { "cell_type": "markdown", "id": "693f039e", "metadata": {}, "source": [ "### 自定义筛选\n", "\n", "用户也可以设定筛选条件,在使用`get_parameters`获取到网络全部参数后,通过限定参数名字等方法,自定义filter来决定哪些参数需要更新。例如下面的例子,训练过程中将只对非卷积参数进行更新:" ] }, { "cell_type": "code", "execution_count": 2, "id": "a7687615", "metadata": {}, "outputs": [], "source": [ "from mindspore import nn\n", "\n", "params_all = net.get_parameters()\n", "no_conv_params = list(filter(lambda x: 'conv' not in x.name, params_all))\n", "optim = nn.Adam(params=no_conv_params, learning_rate=0.1, weight_decay=0.0)" ] }, { "cell_type": "markdown", "id": "23cfcaf5", "metadata": {}, "source": [ "## 学习率\n", "\n", "学习率作为机器学习及深度学习中常见的超参,对目标函数能否收敛到局部最小值及何时收敛到最小值有重要作用。学习率过大容易导致目标函数波动较大,难以收敛到最优值,太小则会导致收敛过程耗时长,除了基本的固定值设置,很多动态学习率的设置方法也在深度网络的训练中取得了显著的效果。\n", "\n", "### 固定学习率\n", "\n", "使用固定学习率时,优化器传入的`learning_rate`为浮点类型或标量Tensor。\n", "\n", "以Momentum为例,固定学习率为0.01,用法如下:" ] }, { "cell_type": "code", "execution_count": 3, "id": "274c2abe", "metadata": {}, "outputs": [], "source": [ "from mindspore import nn\n", "\n", "net = Net()\n", "optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9)\n", "loss = nn.SoftmaxCrossEntropyWithLogits()\n", "model = Model(net, loss_fn=loss, optimizer=optim)" ] }, { "cell_type": "markdown", "id": "e3e7582c", "metadata": {}, "source": [ "### 动态学习率\n", "\n", "模块提供了动态学习率的两种不同的实现方式,`dynamic_lr`和`learning_rate_schedule`:\n", "\n", "- `dynamic_lr`: 预生成长度为`total_step`的学习率列表,将列表传入优化器中使用, 训练过程中, 第i步使用第i个学习率的值作为当前step的学习率,其中,`total_step`的设置值不能小于训练的总步数;\n", "\n", "- `learning_rate_schedule`: 优化器学习率指定一个LearningRateSchedule的Cell实例,学习率会和训练网络一起组成计算图,在执行过程中,根据step计算出当前学习率。\n", "\n", "#### 预生成学习率列表\n", "\n", "本模块中的动态学习率为函数,配合优化器使用时,调用函数并将结果列表`lr`传递给优化器。在训练过程中,优化器将 `lr[current_step]`作为当前学习率。\n", "\n", "`mindspore.nn.dynamic_lr`支持以下不同的实现方式:\n", "\n", "- `piecewise_constant_lr`:基于得到分段不变的学习速率。\n", "\n", "- `exponential_decay_lr`:基于指数衰减函数计算学习率。\n", "\n", "- `natural_exp_decay_lr`:基于自然指数衰减函数计算学习率。\n", "\n", "- `inverse_decay_lr`:基于反时间衰减函数计算学习速率。\n", "\n", "- `cosine_decay_lr`:基于余弦衰减函数计算学习率。\n", "\n", "- `polynomial_decay_lr`:基于多项式衰减函数计算学习率。\n", "\n", "- `warmup_lr`:提高学习率。\n", "\n", "以`piecewise_constant_lr`为例:" ] }, { "cell_type": "code", "execution_count": 4, "id": "6183b5f2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01]\n" ] } ], "source": [ "from mindspore import nn\n", "\n", "milestone = [2, 5, 10]\n", "learning_rates = [0.1, 0.05, 0.01]\n", "lr = nn.dynamic_lr.piecewise_constant_lr(milestone, learning_rates)\n", "print(lr)\n", "\n", "net = Net()\n", "optim = nn.SGD(net.trainable_params(), learning_rate=lr)" ] }, { "cell_type": "markdown", "id": "5a3e027d", "metadata": {}, "source": [ "#### 定义学习率计算图\n", "\n", "本模块中的动态学习率为`LearningRateSchedule` 的子类,配合优化器使用时,将对应的`LearningRateSchedule`子类的实例传递给优化器。在训练过程中,优化器调用以`current_step`为输入的实例,以获得当前的学习率。\n", "\n", "`mindspore.nn.learning_rate_schedule`模块包含以下实现方式:\n", "\n", "- `ExponentialDecayLR`:基于指数衰减函数计算学习率。\n", "\n", "- `NaturalExpDecayLR`:基于自然指数衰减函数计算学习率。\n", "\n", "- `InverseDecayLR`:基于反时间衰减函数计算学习速率。\n", "\n", "- `CosineDecayLR`:基于余弦衰减函数计算学习率。\n", "\n", "- `PolynomialDecayLR`:基于多项式衰减函数计算学习率。\n", "\n", "- `WarmUpLR`:提高学习率。\n", "\n", "以`ExponentialDecayLR`为例:\n", "\n", "> ExponentialDecayLR仅支持GPU和Ascend后端。" ] }, { "cell_type": "code", "execution_count": 5, "id": "1eb3c683", "metadata": {}, "outputs": [], "source": [ "from mindspore import nn\n", "from mindspore import Tensor\n", "\n", "polynomial_decay_lr = nn.learning_rate_schedule.PolynomialDecayLR(learning_rate=0.1,\n", " end_learning_rate=0.01,\n", " decay_steps=4,\n", " power=0.5)\n", "\n", "net = Net()\n", "optim = nn.SGD(net.trainable_params(), learning_rate=polynomial_decay_lr)" ] }, { "cell_type": "markdown", "id": "5921387d", "metadata": {}, "source": [ "## 权重衰减\n", "\n", "一般情况下,weight_decay取值范围为[0, 1),实现对(BatchNorm以外的)参数使用权重衰减的策略,以避免模型过拟合问题;weight_decay的默认值为0.0,此时不使用权重衰减策略。" ] }, { "cell_type": "code", "execution_count": 6, "id": "5b501bc9", "metadata": {}, "outputs": [], "source": [ "net = Net()\n", "optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9, weight_decay=0.9)" ] }, { "cell_type": "markdown", "id": "2185b86c", "metadata": {}, "source": [ "## 参数分组\n", "\n", "优化器也支持为不同参数单独设置选项,此时不直接传入变量,而是传入一个字典的列表,每个字典定义一个参数组别的设置值,key可以为“params”,“lr”,“weight_decay”,”grad_centralizaiton”,value为对应的设定值。`params`必须配置,其余参数可以选择配置,未配置的参数项,将采用定义优化器时设置的参数值。\n", "\n", "分组时,学习率可以使用固定学习率,也可以使用dynamic_lr和learningrate_schedule动态学习率。\n", "\n", "> 当前MindSpore除个别优化器外(例如AdaFactor,FTRL),均支持对学习率进行分组,详情参考各优化器的说明。\n", "\n", "例如下面的例子:\n", "\n", "- conv_params组别的参数,使用固定学习率0.01, `weight_decay`为字典传入的数值0.01;\n", "\n", "- no_conv_params组别使用`learning_rate_schedule`的动态学习率`PolynomialDecayLR`, `weight_decay`使用优化器配置的值0.0;\n", "\n", "- group_params还提供了`order_params`配置项;一般情况下`order_params`无需配置。group_params根据分组情况会改变parameter和梯度的计算顺序。如果使用自动并行策略,并通过`set_all_reduce_fusion_split_indices`配置了梯度更新的切分点,group_params引起的顺序变化会影响梯度广播的并行效果,此时可以通过`order_params`指定预期的参数顺序,例如指定为`net.trainable_params()`,使参数顺序与网络中定义权重的原始顺序保持一致。" ] }, { "cell_type": "code", "execution_count": 7, "id": "8040db83", "metadata": {}, "outputs": [], "source": [ "from mindspore import nn\n", "\n", "net = Net()\n", "\n", "# Use parameter groups and set different values\n", "conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))\n", "no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))\n", "\n", "fix_lr = 0.01\n", "polynomial_decay_lr = nn.learning_rate_schedule.PolynomialDecayLR(learning_rate=0.1,\n", " end_learning_rate=0.01,\n", " decay_steps=4,\n", " power=0.5)\n", "\n", "group_params = [{'params': conv_params, 'weight_decay': 0.01, 'lr': fix_lr},\n", " {'params': no_conv_params, 'lr': polynomial_decay_lr},\n", " {'order_params': net.trainable_params()}]\n", "\n", "optim = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9, weight_decay=0.0)" ] }, { "cell_type": "markdown", "id": "bd55f494", "metadata": {}, "source": [ "## 混合精度\n", "\n", "深度神经网络存在使用混合精度训练的场景,这种方法通过混合使用单精度和半精度数据格式来加速网络训练,同时保持了单精度训练所能达到的网络精度。混合精度训练能够加速计算过程,减少内存使用和存取,并使得在特定的硬件上可以训练更大的模型或batch size。\n", "\n", "在混合精度训练过程中,会使用float16类型来替代float32类型存储数据,但由于float16类型数据比float32类型数据范围小很多,所以当某些参数(例如梯度)在训练过程中变得很小时,就会发生数据下溢。为避免半精度float16类型数据下溢,MindSpore提供了`FixedLossScaleManager`和`DynamicLossScaleManager`方法。其主要思想是计算loss时,将loss扩大一定的倍数,由于链式法则的存在,梯度也会相应扩大,然后在优化器更新权重时再缩小相应的倍数,从而避免了数据下溢的情况又不影响计算结果。\n", "\n", "一般情况下优化器不需要与`LossScale`功能配合使用,但使用`FixedLossScaleManager`,并且`drop_overflow_update`为False时,优化器需设置`loss_scale`的值,且`loss_scale`值与`FixedLossScaleManager`的相同,具体用法详见:。" ] } ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }