{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Optimization Algorithms\n", "\n", "[![](https://gitee.com/mindspore/docs/raw/r1.2/docs/programming_guide/source_zh_cn/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.2/docs/programming_guide/source_zh_cn/optim.ipynb) [![](https://gitee.com/mindspore/docs/raw/r1.2/docs/programming_guide/source_zh_cn/_static/logo_notebook.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/r1.2/programming_guide/mindspore_optim.ipynb) [![](https://gitee.com/mindspore/docs/raw/r1.2/docs/programming_guide/source_zh_cn/_static/logo_modelarts.png)](https://console.huaweicloud.com/modelarts/?region=cn-north-4#/notebook/loading?share-url-b64=aHR0cHM6Ly9vYnMuZHVhbHN0YWNrLmNuLW5vcnRoLTQubXlodWF3ZWljbG91ZC5jb20vbWluZHNwb3JlLXdlYnNpdGUvbm90ZWJvb2svbW9kZWxhcnRzL3Byb2dyYW1taW5nX2d1aWRlL21pbmRzcG9yZV9vcHRpbS5pcHluYg==&image_id=65f636a0-56cf-49df-b941-7d2a07ba8c8c)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n",
"`mindspore.nn.optim` is the module of the MindSpore framework that implements various optimization algorithms. It contains commonly used optimizers, learning-rate schedules and so on, and its interfaces are general enough for newer and more sophisticated methods to be integrated into the module later.\n", "\n",
"`mindspore.nn.optim` provides commonly used optimizers for models, such as `SGD`, `Adam` and `Momentum`. An optimizer computes the gradient-based updates and applies them to the model parameters, so the choice of optimization algorithm directly affects the performance of the final model: when the results are poor, the cause is not necessarily the features or the model design, it may well be the optimization algorithm. In addition, `mindspore.nn` provides two learning-rate modules, `dynamic_lr` and `learning_rate_schedule`. Both implement dynamic learning rates but in different ways: the `dynamic_lr` functions generate a list of learning-rate values in advance, while the `learning_rate_schedule` classes compute the learning rate for a given step when they are called. The learning rate is one of the most important hyperparameters in supervised learning and deep learning; it determines whether the objective function converges to a local minimum and how quickly it does so. A suitable learning rate lets the objective function converge to a local minimum in a reasonable amount of time.\n", "\n",
"> This document applies to CPU, GPU and Ascend environments." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Learning Rate\n", "\n", "### dynamic_lr\n", "\n",
"The `mindspore.nn.dynamic_lr` module provides the following functions:\n", "\n",
"- `piecewise_constant_lr`: gets a piecewise constant learning rate.\n", "\n",
"- `exponential_decay_lr`: computes the learning rate based on an exponential decay function.\n", "\n",
"- `natural_exp_decay_lr`: computes the learning rate based on a natural exponential decay function.\n", "\n",
"- `inverse_decay_lr`: computes the learning rate based on an inverse-time decay function.\n", "\n",
"- `cosine_decay_lr`: computes the learning rate based on a cosine decay function.\n", "\n",
"- `polynomial_decay_lr`: computes the learning rate based on a polynomial decay function.\n", "\n",
"- `warmup_lr`: gradually increases (warms up) the learning rate.\n", "\n",
"They are the different schedules provided by `dynamic_lr`; each of them returns a list of learning-rate values, one value per step.\n", "\n",
"For example, a code sample for `piecewise_constant_lr` is shown below:" ] },
{ "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01]\n" ] } ], "source": [ "from mindspore.nn.dynamic_lr import piecewise_constant_lr\n", "\n",
"def test_dynamic_lr():\n",
"    # Use 0.1 for steps [0, 2), 0.05 for steps [2, 5) and 0.01 for steps [5, 10).\n",
"    milestone = [2, 5, 10]\n",
"    learning_rates = [0.1, 0.05, 0.01]\n",
"    lr = piecewise_constant_lr(milestone, learning_rates)\n",
"    print(lr)\n", "\n",
"if __name__ == '__main__':\n",
"    test_dynamic_lr()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### learning_rate_schedule\n", "\n",
"The `mindspore.nn.learning_rate_schedule` module provides the following classes: `ExponentialDecayLR`, `NaturalExpDecayLR`, `InverseDecayLR`, `CosineDecayLR`, `PolynomialDecayLR` and `WarmUpLR`. Unlike the `dynamic_lr` functions, these classes compute the learning rate for a given step when they are called. Their meanings are as follows:\n", "\n",
"- `ExponentialDecayLR`: computes the learning rate based on an exponential decay function.\n", "\n",
"- `NaturalExpDecayLR`: computes the learning rate based on a natural exponential decay function.\n", "\n",
"- `InverseDecayLR`: computes the learning rate based on an inverse-time decay function.\n", "\n",
"- `CosineDecayLR`: computes the learning rate based on a cosine decay function.\n", "\n",
"- `PolynomialDecayLR`: computes the learning rate based on a polynomial decay function.\n", "\n",
"- `WarmUpLR`: gradually increases (warms up) the learning rate.\n", "\n",
"For example, a code sample for `ExponentialDecayLR` is shown below:" ] },
{ "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.094868325\n" ] } ], "source": [ "from mindspore.common import dtype as mstype\n", "from mindspore import Tensor\n", "from mindspore.nn.learning_rate_schedule import ExponentialDecayLR\n",
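"\n",
"# ExponentialDecayLR computes decayed_lr = learning_rate * decay_rate ** (current_step / decay_steps),\n",
"# so with the values below the result is 0.1 * 0.9 ** (2 / 4), i.e. about 0.0949,\n",
"# which matches the printed output.\n",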
"\n", "def test_learning_rate_schedule():\n",
"    learning_rate = 0.1  # learning_rate(float) - The initial value of the learning rate.\n",
"    decay_rate = 0.9  # decay_rate(float) - The decay rate.\n",
"    decay_steps = 4  # decay_steps(int) - A value used to calculate the decayed learning rate.\n",
"    global_step = Tensor(2, mstype.int32)\n",
"    exponential_decay_lr = ExponentialDecayLR(learning_rate, decay_rate, decay_steps)\n",
"    res = exponential_decay_lr(global_step)\n",
"    print(res)\n", "\n", "\n",
"if __name__ == '__main__':\n",
"    test_learning_rate_schedule()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Optimizer\n", "\n", "### How to Use\n", "\n",
"To use `mindspore.nn.optim`, we need to construct an `Optimizer` object. This object holds the current parameter state and updates the parameters based on the computed gradients.\n", "\n",
"- Construction\n", "\n",
"To construct an `Optimizer`, we need to give it an iterable that contains the parameters to optimize (they must be `Parameter` objects). We can then set optimizer options such as the learning rate and the weight decay.\n", "\n",
"A code sample is shown below:\n", "\n",
"```python\n", "from mindspore import nn\n", "\n",
"# Pass all trainable parameters of a network directly:\n",
"optim = nn.Adam(params=net.trainable_params())\n", "\n",
"# Or pass parameter groups (see below) together with default options:\n",
"optim = nn.SGD(group_params, learning_rate=0.1, weight_decay=0.0)\n",
"optim = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)\n",
"```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "- Setting options per parameter group\n", "\n",
"Optimizers also support setting options separately for each group of parameters. To do this, instead of passing the `Parameter` objects directly, pass an iterable of dicts. Each dict defines one parameter group and contains a `params` key whose value is the list of parameters in that group. The other keys should be option names accepted by the optimizer, and they are used when optimizing that group.\n", "\n",
"Options can still be passed as keyword arguments; they are used as defaults in the groups that do not override them. This is useful when you only want to change the options of one parameter group while keeping the options of the other groups unchanged.\n", "\n",
"For example, to specify a learning rate for each layer, with `SGD`:\n", "\n",
"```python\n", "from mindspore import nn\n", "\n",
"optim = nn.SGD([{'params': conv_params, 'weight_decay': 0.01},\n",
"                {'params': no_conv_params, 'lr': 0.01},\n",
"                {'order_params': net.trainable_params()}],\n",
"               learning_rate=0.1, weight_decay=0.0)\n",
"```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This example means that the parameters in `conv_params` use a weight decay of 0.01 and a learning rate of 0.1, while the parameters in `no_conv_params` use a weight decay of 0.0 and a learning rate of 0.01. The learning rate `learning_rate=0.1` is applied to all parameters in groups that do not set their own learning rate, and the weight decay `weight_decay` works the same way. The `'order_params'` entry only specifies the order of the parameters and does not set any options." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Built-in Optimizers\n", "\n",
"Commonly used deep learning optimization algorithms include `SGD`, `Adam`, `FTRL`, `LazyAdam`, `Momentum`, `RMSProp`, `LARS`, `ProximalAdagrad` and `Lamb`. All of them have corresponding class implementations in the `mindspore.nn.optim` module. For example:\n", "\n",
"- `SGD`: with the default arguments it is plain SGD. Setting the `momentum` argument to a non-zero value adds first-order momentum, and additionally setting `nesterov` to True turns it into NAG (Nesterov Accelerated Gradient), which evaluates the gradient at the position reached after taking one step forward.\n", "\n",
"- `RMSProp`: uses second-order momentum, so each parameter gets its own adaptive learning rate. It improves on `Adagrad` by applying exponential smoothing, so that only the second-order momentum within a certain window is taken into account.\n", "\n",
"- `Adam`: uses both first-order and second-order momentum; it can be seen as `RMSProp` with first-order momentum added on top." ] },
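{ "cell_type": "markdown", "metadata": {}, "source": [ "The descriptions above mention options such as `momentum` and `nesterov` for `SGD`, the smoothing window of `RMSProp`, and the two momentum terms of `Adam`. The following is a minimal sketch of how these options might be passed when the optimizers are constructed. It assumes that `net` is an already constructed network (an `nn.Cell`), and the argument names shown (`momentum`, `nesterov`, `decay`, `beta1`, `beta2`) follow the MindSpore 1.x signatures, so check the API documentation of your release before relying on them:\n", "\n",
"```python\n", "from mindspore import nn\n", "\n",
"params = net.trainable_params()\n", "\n",
"# Plain SGD, SGD with first-order momentum, and NAG (nesterov requires a non-zero momentum).\n",
"sgd = nn.SGD(params, learning_rate=0.1)\n",
"sgd_momentum = nn.SGD(params, learning_rate=0.1, momentum=0.9)\n",
"sgd_nag = nn.SGD(params, learning_rate=0.1, momentum=0.9, nesterov=True)\n", "\n",
"# RMSProp: adaptive per-parameter learning rates, controlled by the exponential decay factor.\n",
"rmsprop = nn.RMSProp(params, learning_rate=0.01, decay=0.9)\n", "\n",
"# Adam: combines first-order (beta1) and second-order (beta2) momentum.\n",
"adam = nn.Adam(params, learning_rate=0.001, beta1=0.9, beta2=0.999)\n",
"```" ] },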
{ "cell_type": "markdown", "metadata": {}, "source": [ "A more complete code sample that builds a simple network, groups its parameters and trains it with `SGD` is shown below:" ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from mindspore import nn, Model, Tensor\n", "import mindspore.ops as ops\n", "import numpy as np\n", "from mindspore import Parameter\n", "\n",
"# A simple network with one convolution layer and one extra Parameter.\n",
"class Net(nn.Cell):\n",
"    def __init__(self):\n",
"        super(Net, self).__init__()\n",
"        self.matmul = ops.MatMul()\n",
"        self.conv = nn.Conv2d(1, 6, 5, pad_mode=\"valid\")\n",
"        self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')\n", "\n",
"    def construct(self, x, y):\n",
"        x = x * self.z\n",
"        out = self.matmul(x, y)\n",
"        return out\n", "\n",
"net = Net()\n",
"# Construct the optimizer with all trainable parameters and default options ...\n",
"optim = nn.SGD(params=net.trainable_params())\n", "\n",
"# ... or split the parameters into groups and set per-group options.\n",
"conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))\n",
"no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))\n",
"group_params = [{'params': conv_params, 'weight_decay': 0.01},\n",
"                {'params': no_conv_params, 'lr': 0.01},\n",
"                {'order_params': net.trainable_params()}]\n",
"optim = nn.SGD(group_params, learning_rate=0.1, weight_decay=0.0)\n", "\n",
"loss = nn.SoftmaxCrossEntropyWithLogits()\n",
"model = Model(net, loss_fn=loss, optimizer=optim)" ] } ], "metadata": { "kernelspec": { "display_name": "MindSpore-1.1.1", "language": "python", "name": "mindspore-1.1.1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 4 }