{ "cells": [ { "cell_type": "markdown", "source": [ "# 与PyTorch典型区别\n", "\n", "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/resource/_static/logo_notebook.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.0/zh_cn/migration_guide/mindspore_typical_api_comparision.ipynb)\n", "[![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/resource/_static/logo_download_code.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.0/zh_cn/migration_guide/mindspore_typical_api_comparision.py)\n", "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r2.0/docs/mindspore/source_zh_cn/migration_guide/typical_api_comparision.ipynb)\n", "\n", "## 与PyTorch典型接口区别\n", "\n", "### torch.device\n", "\n", "PyTorch 在构建模型时,通常会利用 torch.device 指定模型和数据绑定的设备,是在 CPU 还是 GPU 上,如果支持多 GPU,还可以指定具体的 GPU 序号。绑定相应的设备后,需要将模型和数据部署到对应设备,代码如下:\n", "\n", "```python\n", "import os\n", "import torch\n", "from torch import nn\n", "\n", "# bind to the GPU 0 if GPU is available, otherwise bind to CPU\n", "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\") # 单 GPU 或者 CPU\n", "# deploy model to specified hardware\n", "model.to(device)\n", "# deploy data to specified hardware\n", "data.to(device)\n", "\n", "# distribute training on multiple GPUs\n", "if torch.cuda.device_count() > 1:\n", " model = nn.DataParallel(model, device_ids=[0,1,2])\n", " model.to(device)\n", "\n", " # set available device\n", " os.environ['CUDA_VISIBLE_DEVICE']='1'\n", " model.cuda()\n", "```" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "id": "469cf753", "metadata": {}, "source": [ "而在 MindSpore 中,我们通过 context 中 的 device_target 参数 指定模型绑定的设备,device_id 指定设备的序号。与 PyTorch 不同的是,一旦设备设置成功,输入数据和模型会默认拷贝到指定的设备中执行,不需要也无法再改变数据和模型所运行的设备类型。代码如下:\n", "\n", "```python\n", "import mindspore as ms\n", "ms.set_context(device_target='Ascend', device_id=0)\n", "\n", "# define net\n", "Model = ..\n", "# define dataset\n", "dataset = ..\n", "# training, automatically deploy to Ascend according to device_target\n", "Model.train(1, dataset)\n", "```" ] }, { "cell_type": "markdown", "id": "f19fbd43", "metadata": {}, "source": [ "此外,网络运行后返回的 `Tensor` 默认均拷贝到 CPU 设备,可以直接对该 `Tensor` 进行访问和修改,包括转成 `numpy` 格式,无需像 PyTorch 一样需要先执行 `tensor.cpu` 再转换成 numpy 格式。\n", "\n", "### nn.Module\n", "\n", "使用 PyTorch 构建网络结构时,我们会用到`nn.Module` 类,通常将网络中的元素定义在`__init__` 函数中并对其初始化,将网络的图结构表达定义在`forward` 函数中,通过调用这些类的对象完成整个模型的构建和训练。`nn.Module` 不仅为我们提供了构建图接口,它还为我们提供了一些常用的 [API](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) ,来帮助我们执行更复杂逻辑。\n", "\n", "MindSpore 中的 `nn.Cell` 类发挥着和 PyTorch 中 `nn.Module` 相同的作用,都是用来构建图结构的模块,MindSpore 也同样提供了丰富的 [API](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/nn/mindspore.nn.Cell.html) 供开发者使用,虽然名字不能一一对应,但 `nn.Module` 中常用的功能都可以在`nn.Cell` 中找到映射。\n", "\n", "以几个常用方法为例:\n", "\n", "|常用方法| nn.Module|nn.Cell|\n", "|:----|:----|:----|\n", "|获取子元素|named_children|cells_and_names|\n", "|添加子元素|add_module|insert_child_to_cell|\n", "|获取元素的参数|parameters|get_parameters|\n", "\n", "### 数据对象\n", "\n", "在 PyTorch 中,可以存储数据的对象总共有四种,分别时`Tensor`、`Variable`、`Parameter`、`Buffer`。这四种对象的默认行为均不相同,当我们不需要求梯度时,通常使用 `Tensor`和 `Buffer`两类数据对象,当我们需要求梯度时,通常使用 `Variable` 和 `Parameter` 两类对象。PyTorch 在设计这四种数据对象时,功能上存在冗余(`Variable` 后续会被废弃也说明了这一点)。\n", "\n", "MindSpore 优化了数据对象的设计逻辑,仅保留了两种数据对象:`Tensor` 和 `Parameter`,其中 `Tensor` 
"\n", "### 梯度求导\n", "\n", "梯度求导涉及的算子和接口差异,主要是由 MindSpore 和 PyTorch 自动微分原理不同引起的。\n", "\n", "### torch.no_grad\n", "\n", "在 PyTorch 中,默认情况下,执行正向计算时会记录反向传播所需的信息,在推理阶段或无需反向传播的网络中,这一操作是冗余的,会额外耗时,因此,PyTorch 提供了 `torch.no_grad` 来取消该过程。\n", "\n", "而 MindSpore 只有在调用 `grad` 时才会根据正向图结构来构建反向图,正向执行时不会记录任何信息,所以 MindSpore 并不需要该接口,也可以理解为 MindSpore 的正向计算均是在 `torch.no_grad` 情况下进行的。\n", "\n", "### retain_graph\n", "\n", "由于 PyTorch 是基于函数式的自动微分,所以默认每次执行完反向传播后都会自动清除记录的信息,从而进行下一次迭代。这就会导致当我们想再次利用这些反向图和梯度信息时,由于已被删除而获取失败。因此,PyTorch 提供了 `backward(retain_graph=True)` 来主动保留这些信息。\n", "\n", "而 MindSpore 则不需要这个功能,MindSpore 是基于计算图的自动微分,反向图信息在调用 `grad` 后便永久地记录在计算图中,只要再次调用计算图就可以获取梯度信息。\n", "\n", "### 高阶导数\n", "\n", "基于计算图的自动微分还有一个好处,我们可以很方便地实现高阶求导。第一次对正向图执行 `grad` 操作后,可以得到一阶导,此时计算图被更新为正向图+一阶导的反向图结构;当我们再次对更新后的计算图执行 `grad` 后,就可以得到二阶导,以此类推,通过基于计算图的自动微分,我们很容易求得一个网络的高阶导数。\n", "\n", "## 与PyTorch典型算子区别\n", "\n", "MindSpore 的大部分算子 API 和 TensorFlow 相近,但也有一部分算子的默认行为与 PyTorch 或 TensorFlow 存在差异。开发者在做网络脚本迁移时,如果未注意到这些差异,直接使用默认行为,很容易出现与原迁移网络不一致的情况,影响网络训练。我们建议开发者在迁移网络时不仅对齐使用的算子,也要对齐算子的属性。这里我们总结了几个常见的差异算子。\n", "\n", "### nn.Dropout\n", "\n", "Dropout 常用于防止训练过拟合,有一个重要的 **概率值** 参数,该参数在 MindSpore 中的意义与 PyTorch 和 TensorFlow 中的意义完全相反。\n", "\n", "在 MindSpore 中,概率值对应 Dropout 算子的属性 `keep_prob`,是指输入被保留的概率,`1-keep_prob` 是指输入被置 0 的概率。\n", "\n", "在 PyTorch 和 TensorFlow 中,概率值分别对应 Dropout 算子的属性 `p` 和 `rate`,是指输入被置 0 的概率,与 mindspore.nn.Dropout 中的 `keep_prob` 意义相反。\n", "\n", "更多信息请参考链接: [MindSpore Dropout](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/nn/mindspore.nn.Dropout.html#mindspore.nn.Dropout) 、 [PyTorch Dropout](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html) 、 [TensorFlow Dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout)\n", "\n", "### nn.BatchNorm2d\n", "\n", "BatchNorm 是 CV 领域比较特殊的正则化方法,它在训练和推理的过程中有着不同的计算流程,通常由算子属性控制。MindSpore 和 PyTorch 的 BatchNorm 在这一点上使用了两组不同的参数。\n", "\n", "- 差异一\n", "\n", "`torch.nn.BatchNorm2d` 在不同参数下的状态\n", "\n", "|training|track_running_stats|状态|\n", "|:----|:----|:--------------------------------------|\n", "|True|True|期望中训练的状态,running_mean 和 running_var 会跟踪整个训练过程中 batch 的统计特性,而每组输入数据用当前 batch 的 mean 和 var 统计特性做归一化,然后再更新 running_mean 和 running_var。|\n", "|True|False|每组输入数据会根据当前 batch 的统计特性做归一化,但不会有 running_mean 和 running_var 参数了。|\n", "|False|True|期望中推理的状态,BN 使用 running_mean 和 running_var 做归一化,并且不会对其进行更新。|\n", "|False|False|效果同第二点,只不过处于推理状态,不会学习 weight 和 bias 两个参数。一般不采用该状态。|\n", "\n", "`mindspore.nn.BatchNorm2d` 在不同参数下的状态\n", "\n", "|use_batch_statistics|状态|\n", "|:----|:--------------------------------------|\n", "|True|期望中训练的状态,moving_mean 和 moving_var 会跟踪整个训练过程中 batch 的统计特性,而每组输入数据用当前 batch 的 mean 和 var 统计特性做归一化,然后再更新 moving_mean 和 moving_var。|\n", "|False|期望中推理的状态,BN 使用 moving_mean 和 moving_var 做归一化,并且不会对其进行更新。|\n", "|None|自动设置 use_batch_statistics:如果是训练,use_batch_statistics=True;如果是推理,use_batch_statistics=False。|\n", "\n", "通过比较可以发现,`mindspore.nn.BatchNorm2d` 相比 `torch.nn.BatchNorm2d`,少了两种冗余的状态,仅保留了最常用的训练和推理两种状态。\n", "\n", "- 差异二\n", "\n", "BatchNorm 系列算子的 momentum 参数在 MindSpore 和 PyTorch 中表示的意义相反,关系为:\n", "\n", "$$momentum_{pytorch} = 1 - momentum_{mindspore}$$\n",
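"\n", "例如,迁移一个 PyTorch 中 momentum=0.1 的 BatchNorm2d 层时,MindSpore 侧应取 momentum=0.9。下面是一个简单示意(其中 num_features=64 仅为举例):\n", "\n", "```python\n", "from torch import nn as torch_nn\n", "from mindspore import nn as ms_nn\n", "\n", "# PyTorch:momentum 表示当前 batch 统计量在滑动平均中所占的比例\n", "bn_pt = torch_nn.BatchNorm2d(64, momentum=0.1)\n", "# MindSpore:momentum 表示历史滑动统计量所占的比例,等价取值为 1 - 0.1 = 0.9\n", "bn_ms = ms_nn.BatchNorm2d(64, momentum=0.9)\n", "```\n",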
"\n", "在做轴变换时,PyTorch 常用到两种算子`Tensor.permute` 和 `torch.transpose`,而 MindSpore 和 TensorFlow 仅提供了 `transpose` 算子。需要注意的是,`Tensor.permute` 包含了 `torch.transpose` 的功能,`torch.transpose` 仅支持同时交换两个轴,而 `Tensor.permute` 可以同时变换多个轴。" ] }, { "cell_type": "code", "execution_count": 1, "id": "6df02a42", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:08:24.059525Z", "start_time": "2022-09-14T08:08:22.839004Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([4, 2, 3, 1])\n", "torch.Size([4, 3, 2, 1])\n" ] } ], "source": [ "# PyTorch 代码\n", "import numpy as np\n", "import torch\n", "from torch import Tensor\n", "data = np.empty((1, 2, 3, 4)).astype(np.float32)\n", "ret1 = torch.transpose(Tensor(data), dim0=0, dim1=3)\n", "print(ret1.shape)\n", "ret2 = Tensor(data).permute((3, 2, 1, 0))\n", "print(ret2.shape)" ] }, { "cell_type": "markdown", "id": "d52fc015", "metadata": {}, "source": [ "MindSpore 和 TensorFlow 的 `transpose`算子功能相同, 虽然名字叫 `transpose`,但实际可以做多个轴的同时变换,等价于 `Tensor.permute`。因此,MindSpore 也不再单独提供类似 `torch.tranpose` 的算子。" ] }, { "cell_type": "code", "execution_count": 2, "id": "ef2b22c9", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:08:27.204994Z", "start_time": "2022-09-14T08:08:24.061577Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(4, 3, 2, 1)\n" ] } ], "source": [ "# mindspore 代码\n", "import mindspore as ms\n", "import numpy as np\n", "from mindspore import ops\n", "ms.set_context(device_target=\"CPU\")\n", "data = np.empty((1, 2, 3, 4)).astype(np.float32)\n", "ret = ops.Transpose()(ms.Tensor(data), (3, 2, 1, 0))\n", "print(ret.shape)" ] }, { "cell_type": "markdown", "id": "36b077ca", "metadata": {}, "source": [ "更多信息请参考链接:[MindSpore Transpose](https://www.mindspore.cn/docs/en/r2.0/api_python/ops/mindspore.ops.Transpose.html#mindspore.ops.Transpose) 、 [PyTorch Transpose](https://pytorch.org/docs/stable/generated/torch.transpose.html) 、[PyTorch Permute](https://pytorch.org/docs/stable/generated/torch.Tensor.permute.html) 、 [TensforFlow Transpose](https://www.tensorflow.org/api_docs/python/tf/transpose)\n", "\n", "### Conv 和 Pooling\n", "\n", "对于类似卷积和池化的算子,我们知道算子的输出特征图大小依赖输入特征图、步长、kernel_size 和 padding 等变量。\n", "\n", "如果 `pad_mode` 设置为 `valid`,则输出特征图高和宽的计算公式分别为\n", "\n", "![conv-formula](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/docs/mindspore/source_zh_cn/migration_guide/model_development/images/conv_formula.png)\n", "\n", "如果 pad_mode(对应 PyTorch 中的属性为 padding,与属性 pad_mode 含义并不相同) 设置为 `same` 时,有时需要对输入特征图进行自动的 padding 操作,当padding 的元素为偶数时,padding 的元素会均匀分布在特征图的上下左右,此时 MindSpore、PyTorch 和 TensorFlow 中该类算子行为一致。\n", "\n", "但当 padding 的元素为奇数时,PyTorch 会优先填充在输入特征图的左侧和上侧:\n", "\n", "![padding1](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/docs/mindspore/source_zh_cn/migration_guide/model_development/images/padding_pattern1.png)\n", "\n", "而 MindSpore 和 TensorFlow 则优先填充在特征图的右侧和下侧:\n", "\n", "![padding2](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/docs/mindspore/source_zh_cn/migration_guide/model_development/images/padding_pattern2.png)\n", "\n", "举例说明:" ] }, { "cell_type": "code", "execution_count": 3, "id": "7813ac99", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:08:27.218755Z", "start_time": "2022-09-14T08:08:27.207017Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[[[0. 1. 2.]\n", " [3. 4. 5.]\n", " [6. 7. 8.]]]]\n", "[[[[4. 5.]\n", " [7. 
8.]]]]\n" ] } ], "source": [ "# mindspore example\n", "import numpy as np\n", "import mindspore as ms\n", "from mindspore import ops\n", "ms.set_context(device_target=\"CPU\")\n", "data = np.arange(9).reshape(1, 1, 3, 3).astype(np.float32)\n", "print(data)\n", "op = ops.MaxPool(kernel_size=2, strides=2, pad_mode='same')\n", "print(op(ms.Tensor(data)))" ] }, { "cell_type": "markdown", "id": "f88cf463", "metadata": {}, "source": [ "在做 MindSpore 模型迁移时,如果模型加载了 PyTorch 的预训练模型,而之后又在 MindSpore 进行 finetune,则该差异可能会导致精度下降,对于 padding 策略为 same 的卷积,开发者需要特别注意。\n", "\n", "如果想和 PyTorch 行为保持一致,可以先使用 `ops.Pad` 算子手动 pad 元素,然后使用 `pad_mode=\"valid\"` 的卷积和池化操作。" ] }, { "cell_type": "code", "execution_count": 4, "id": "4d34b078", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:08:27.233022Z", "start_time": "2022-09-14T08:08:27.221335Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[[[0. 2.]\n", " [6. 8.]]]]\n" ] } ], "source": [ "import numpy as np\n", "import mindspore as ms\n", "from mindspore import ops\n", "ms.set_context(device_target=\"CPU\")\n", "data = np.arange(9).reshape(1, 1, 3, 3).astype(np.float32)\n", "# only padding on top left of feature map\n", "pad = ops.Pad(((0, 0), (0, 0), (1, 0), (1, 0)))\n", "data = pad(ms.Tensor(data))\n", "res = ops.MaxPool(kernel_size=2, strides=2, pad_mode='vaild')(data)\n", "print(res)" ] }, { "cell_type": "markdown", "id": "b23e40fb", "metadata": {}, "source": [ "### 默认权重初始化不同\n", "\n", "我们知道权重初始化对网络的训练十分重要。每个算子一般会有一个隐式的声明权重,在不同的框架中,隐式的声明权重可能不同。即使算子功能一致,隐式声明的权重初始化方式分布如果不同,也会对训练过程产生影响,甚至无法收敛。\n", "\n", "常见隐式声明权重的算子:Conv、Dense(Linear)、Embedding、LSTM 等,其中区别较大的是 Conv 类和 Dense 两种算子。\n", "\n", "- Conv2d\n", "\n", " - mindspore.nn.Conv2d的weight为:$\\mathcal{N}(0, 1)$,bias为:zeros。\n", " - torch.nn.Conv2d的weight为:$\\mathcal{U} (-\\sqrt{k},\\sqrt{k} )$,bias为:$\\mathcal{U} (-\\sqrt{k},\\sqrt{k} )$。\n", " - tf.keras.Layers.Conv2D的weight为:glorot_uniform,bias为:zeros。\n", "\n", " 其中,$k=\\frac{groups}{c_{in}*\\prod_{i}^{}{kernel\\_size[i]}}$\n", "\n", "- Dense(Linear)\n", "\n", " - mindspore.nn.Linear的weight为:$\\mathcal{N}(0, 1)$,bias为:zeros。\n", " - torch.nn.Dense的weight为:$\\mathcal{U}(-\\sqrt{k},\\sqrt{k})$,bias为:$\\mathcal{U}(-\\sqrt{k},\\sqrt{k} )$。\n", " - tf.keras.Layers.Dense的weight为:glorot_uniform,bias为:zeros。\n", "\n", " 其中,$k=\\frac{groups}{in\\_features}$\n", "\n", "对于没有正则化的网络,如没有 BatchNorm 算子的 GAN 网络,梯度很容易爆炸或者消失,权重初始化就显得十分重要,各位开发者应注意权重初始化带来的影响。\n", "\n", "## 与PyTorch执行流程区别\n", "\n", "### 自动微分\n", "\n", "MindSpore 和 PyTorch 都提供了自动微分功能,让我们在定义了正向网络后,可以通过简单的接口调用实现自动反向传播以及梯度更新。但需要注意的是,MindSpore 和 PyTorch 构建反向图的逻辑是不同的,这个差异也会带来 API 设计上的不同。\n", "\n", "#### PyTorch 的自动微分\n", "\n", "我们知道 PyTorch 是基于计算路径追踪的自动微分,当我们定义一个网络结构后, 并不会建立反向图,而是在执行正向图的过程中,`Variable` 或 `Parameter` 记录每一个正向计算对应的反向函数,并生成一个动态计算图,用于后续的梯度计算。当在最终的输出处调用 `backward` 时,就会从根节点到叶节点应用链式法则计算梯度。PyTorch 的动态计算图所存储的节点实际是 `Function` 函数对象,每当对 `Tensor` 执行一步运算后,就会产生一个 `Function` 对象,它记录了反向传播中必要的信息。反向传播过程中,`autograd` 引擎会按照逆序,通过 `Function` 的 `backward` 依次计算梯度。 这一点我们可以通过 `Tensor` 的隐藏属性查看。\n", "\n", "例如,运行以下代码:" ] }, { "cell_type": "code", "execution_count": 5, "id": "b095b795", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:08:27.243218Z", "start_time": "2022-09-14T08:08:27.235064Z" } }, "outputs": [], "source": [ "import torch\n", "from torch.autograd import Variable\n", "x = Variable(torch.ones(2, 2), requires_grad=True)\n", "x = x * 2\n", "y = x - 1\n", "y.backward(x)" ] }, { "cell_type": "markdown", "id": "cf607fcf", "metadata": {}, "source": [ "会自动获取`x`的定义到获取输出`y`这个过程对x的梯度结果。\n", 
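"\n", "下面是一个简单的示意(变量名 a、b、c 仅为举例),展示如何通过 `grad_fn` 属性查看每一步运算记录的反向 `Function`,以及在 `backward` 之后通过 `.grad` 读取叶子节点的梯度:\n", "\n", "```python\n", "import torch\n", "\n", "a = torch.ones(2, 2, requires_grad=True)\n", "b = a * 2\n", "c = b - 1\n", "# 每一步运算都会在输出 Tensor 上记录一个反向 Function\n", "print(c.grad_fn)  # <SubBackward0 ...>\n", "c.backward(torch.ones(2, 2))\n", "# 叶子节点 a 上累积的梯度,各元素均为 2\n", "print(a.grad)\n", "```\n",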
"\n", "需要注意的是PyTorch的backward是累计的,更新完之后需要清空optimizer。\n", "\n", "#### MindSpore的自动微分\n", "\n", "在图模式下,MindSpore 的自动微分是基于图结构的微分,和 PyTorch 不同,它不会在正向计算过程中记录任何信息,仅仅执行正常的计算流程(在PyNative模式下和 PyTorch 类似)。那么问题来了,如果整个正向计算都结束了,MindSpore 也没有记录任何信息,那它是如何知道反向传播怎么执行的呢?\n", "\n", "MindSpore 在做自动微分时,需要传入正向图结构,自动微分的过程就是通过对正向图的分析从而得到反向传播信息,自动微分的结果与正向计算中具体的数值无关,仅和正向图结构有关。通过对正向图的自动微分,我们得到了反向传播过程,而这个反向传播过程其实也是通过一个图结构来表达,也就是反向图。将反向图添加到用户定义的正向图之后,组成一个最终的计算图。不过后添加的反向图和其中的反向算子我们并不感知,也无法手动添加,只能通过 MindSpore 为我们提供的接口自动添加,这样做也避免了我们在反向构图时引入错误。\n", "\n", "最终,我们看似仅执行了正向图,其实图结构里既包含了正向算子,又包含了 MindSpore 为我们添加的反向算子,也就是说,MindSpore 在我们定义的正向图后面又新加了一个看不见的 `Cell`,这个 `Cell` 里都是根据正向图推导出来的反向算子。\n", "\n", "而这个帮助我们构建反向图的接口就是 [grad](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/mindspore/mindspore.grad.html) :" ] }, { "cell_type": "code", "execution_count": 6, "id": "9bff9c56", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:08:27.254574Z", "start_time": "2022-09-14T08:08:27.245853Z" } }, "outputs": [], "source": [ "import mindspore as ms\n", "from mindspore import nn, ops\n", "\n", "class GradNetWrtX(nn.Cell):\n", " def __init__(self, net):\n", " super(GradNetWrtX, self).__init__()\n", " self.net = net\n", "\n", " def construct(self, x, y):\n", " gradient_function = ms.grad(self.net)\n", " return gradient_function(x, y)" ] }, { "cell_type": "markdown", "id": "e870f851", "metadata": {}, "source": [ "查看文档介绍我们可以发现,grad 并不是一个算子,它的输入输出并不是 Tensor,而是我们定义的正向图和自动微分得到的反向图。为什么输入是一个图结构呢?因为构建反向图并不需要知道具体的输入数据是什么,只要知道正向图的结构就行了,有了正向图就可以推算出反向图结构,之后我们可以把正向图+反向图当成一个新的计算图来对待,这个新的计算图就像是一个函数,对于你输入的任何一组数据,它不仅能计算出正向的输出,还能计算出所有权重的梯度,由于图结构是固定的,并不保存中间变量,所以这个图结构可以被反复调用。\n", "\n", "同理,之后我们再给网络加上优化器结构时,优化器也会加上优化器相关的算子,也就是再给这个计算图加点我们不感知的优化器算子,最终,计算图就构建完成。\n", "\n", "在 MindSpore 中,大部分操作都会最终转换成真实的算子操作,最终加入到计算图中,因此,我们实际执行的计算图中算子的数量远多于我们最开始定义的计算图中算子的数量。\n", "\n", "在MindSpore中,提供了[TrainOneStepCell](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/nn/mindspore.nn.TrainOneStepCell.html)和[TrainOneStepWithLossScaleCell](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/nn/mindspore.nn.TrainOneStepWithLossScaleCell.html)这两个接口来包装整个训练流程,如果在常规的训练流程外有其他的操作,如梯度裁剪、规约、中间变量返回等,需要自定义训练的Cell,详情请参考[训练及推理流程](https://www.mindspore.cn/docs/zh-CN/r2.0/migration_guide/model_development/training_and_evaluation_procession.html)。\n", "\n", "### 学习率更新\n", "\n", "#### PyTorch的学习率更新策略\n", "\n", "PyTorch提供了`torch.optim.lr_scheduler`包用于动态修改lr,使用的时候需要显式地调用`optimizer.step()`和`scheduler.step()`来更新lr,详情请参考[如何调整学习率](https://pytorch.org/docs/1.12/optim.html#how-to-adjust-learning-rate)。\n", "\n", "#### MindSpore的学习率更新策略\n", "\n", "MindSpore的学习率是包到优化器里面的,每调用一次优化器,学习率更新的step会自动更新一次,详情请参考[学习率与优化器](https://www.mindspore.cn/docs/zh-CN/r2.0/migration_guide/model_development/learning_rate_and_optimizer.html)。\n", "\n", "### 分布式场景\n", "\n", "#### PyTorch的分布式设置\n", "\n", "一般分布式场景是数据并行就行,详情请参考[DDP](https://pytorch.org/docs/1.12/notes/ddp.html),下面是PyTorch的一个例子:\n", "\n", "```python\n", "import os\n", "import torch\n", "import torch.distributed as dist\n", "import torch.multiprocessing as mp\n", "import torch.nn as nn\n", "import torch.optim as optim\n", "from torch.nn.parallel import DistributedDataParallel as DDP\n", "\n", "\n", "def example(rank, world_size):\n", " # create default process group\n", " dist.init_process_group(\"gloo\", rank=rank, world_size=world_size)\n", " # create local model\n", " model = nn.Linear(10, 10).to(rank)\n", " # construct DDP model\n", " ddp_model = DDP(model, device_ids=[rank])\n", " # define loss function and optimizer\n", " loss_fn = nn.MSELoss()\n", " 
optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)\n", "\n", " # forward pass\n", " outputs = ddp_model(torch.randn(20, 10).to(rank))\n", " labels = torch.randn(20, 10).to(rank)\n", " # backward pass\n", " loss_fn(outputs, labels).backward()\n", " # update parameters\n", " optimizer.step()\n", "\n", "def main():\n", " world_size = 2\n", " mp.spawn(example,\n", " args=(world_size,),\n", " nprocs=world_size,\n", " join=True)\n", "\n", "if __name__==\"__main__\":\n", " # Environment variables which need to be\n", " # set when using c10d's default \"env\"\n", " # initialization mode.\n", " os.environ[\"MASTER_ADDR\"] = \"localhost\"\n", " os.environ[\"MASTER_PORT\"] = \"29500\"\n", " main()\n", "```" ] }, { "cell_type": "markdown", "id": "cbb360a9", "metadata": {}, "source": [ "#### MindSpore的分布式设置\n", "\n", "MindSpore的分布式配置是使用运行时配置项的,详情请参考[分布式并行](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.0/parallel/introduction.html),如:\n", "\n", "```python\n", "import mindspore as ms\n", "from mindspore.communication.management import init, get_rank, get_group_size\n", "\n", "# 初始化多卡环境\n", "init()\n", "\n", "# 获取组成分布式场景的卡数和当前进程的逻辑id\n", "group_size = get_group_size()\n", "rank_id = get_rank()\n", "\n", "# 配置数据并行模式运行分布式场景\n", "ms.set_auto_parallel_context(parallel_mode=ms.ParallelMode.DATA_PARALLEL, gradients_mean=True)\n", "```" ] }, { "cell_type": "markdown", "source": [ "## 与PyTorch优化器的区别\n", "\n", "### 优化器支持差异\n", "\n", "PyTorch和MindSpore同时支持的优化器异同比较详见[API映射表](https://mindspore.cn/docs/zh-CN/r2.0/note/api_mapping/pytorch_api_mapping.html#torch-optim)。MindSpore暂不支持的优化器:LBFGS,NAdam,RAdam。\n", "\n", "### 优化器的执行和使用差异\n", "\n", "PyTorch单步执行优化器时,一般需要手动执行 `zero_grad()` 方法将历史梯度设置为0(或None),然后使用 `loss.backward()` 计算当前训练step的梯度,最后调用优化器的 `step()` 方法实现网络权重的更新;\n", "\n", "```python\n", "optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)\n", "scheduler = ExponentialLR(optimizer, gamma=0.9)\n", "\n", "for epoch in range(20):\n", " for input, target in dataset:\n", " optimizer.zero_grad()\n", " output = model(input)\n", " loss = loss_fn(output, target)\n", " loss.backward()\n", " optimizer.step()\n", " scheduler.step()\n", "```\n", "\n", "MindSpore中优化器的使用,只需要直接对梯度进行计算,然后使用 `optimizer(grads)` 执行网络权重的更新。\n", "\n", "```python\n", "import mindspore\n", "from mindspore import nn\n", "\n", "optimizer = nn.SGD(model.trainable_params(), learning_rate=0.01)\n", "grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)\n", "def train_step(data, label):\n", " (loss, _), grads = grad_fn(data, label)\n", " optimizer(grads)\n", " return loss\n", "```\n", "\n", "### 超参差异\n", "\n", "#### 超参名称\n", "\n", "网络权重和学习率入参名称异同:\n", "\n", "| 参数 | PyTorch | MindSpore | 差异 |\n", "|------|---------| --------- |-------|\n", "| 网络权重 | params | params | 参数名相同 |\n", "| 学习率 | lr | learning_rate | 参数名不同 |\n", "\n", "MindSpore:\n", "\n", "```python\n", "from mindspore import nn\n", "\n", "optimizer = nn.SGD(model.trainable_params(), learning_rate=0.01)\n", "```\n", "\n", "PyTorch:\n", "\n", "```python\n", "from torch import optim\n", "\n", "optimizer = optim.SGD(model.parameters(), lr=0.01)\n", "```\n", "\n", "#### 超参配置方式\n", "\n", "- 参数不分组:\n", "\n", " `params` 入参支持类型不同: PyTorch入参类型为 `iterable(Tensor)` 和 `iterable(dict)`,支持迭代器类型;\n", "MindSpore入参类型为 `list(Parameter)`,`list(dict)`,不支持迭代器。\n", "\n", " 其他超参配置及支持差异详见[API映射表](https://mindspore.cn/docs/zh-CN/r2.0/note/api_mapping/pytorch_api_mapping.html#torch-optim)。\n", "\n", "- 参数分组:\n", "\n", " PyTorch支持所有参数分组:\n", "\n", " ```python\n", " 
optim.SGD([\n", " {'params': model.base.parameters()},\n", " {'params': model.classifier.parameters(), 'lr': 1e-3}\n", " ], lr=1e-2, momentum=0.9)\n", " ```\n", "\n", " MindSpore仅支持特定key分组:\"params\",\"lr\",\"weight_decay\",\"grad_centralization\",\"order_params\"。\n", "\n", " ```python\n", " conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))\n", " no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))\n", " group_params = [{'params': conv_params, 'weight_decay': 0.01, 'lr': 0.02},\n", " {'params': no_conv_params}]\n", "\n", " optim = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9)\n", " ```\n", "\n", "#### 运行时超参修改\n", "\n", "PyTorch支持在训练过程中修改任意的优化器参数,并提供了 `LRScheduler` 用于动态修改学习率;\n", "\n", "MindSpore当前不支持训练过程中修改优化器参数,但提供了修改学习率和权重衰减的方式,使用方式详见[学习率](#学习率)和[权重衰减](#权重衰减)章节。\n", "\n", "### 学习率\n", "\n", "#### 动态学习率差异\n", "\n", "PyTorch中定义了 `LRScheduler` 类用于对学习率进行管理。使用动态学习率时,将 `optimizer` 实例传入 `LRScheduler` 子类中,通过循环调用 `scheduler.step()` 执行学习率修改,并将修改同步至优化器中。\n", "\n", "```python\n", "optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)\n", "scheduler = ExponentialLR(optimizer, gamma=0.9)\n", "\n", "for epoch in range(20):\n", " for input, target in dataset:\n", " optimizer.zero_grad()\n", " output = model(input)\n", " loss = loss_fn(output, target)\n", " loss.backward()\n", " optimizer.step()\n", " scheduler.step()\n", "```\n", "\n", "MindSpore中的动态学习率有 `Cell` 和 `list` 两种实现方式,两种类型的动态学习率使用方式一致,都是在实例化完成之后传入优化器,前者在内部的 `construct` 中进行每一步学习率的计算,后者直接按照计算逻辑预生成学习率列表,训练过程中内部实现学习率的更新。具体请参考[动态学习率](https://mindspore.cn/docs/zh-CN/r2.0/api_python/mindspore.nn.html#%E5%8A%A8%E6%80%81%E5%AD%A6%E4%B9%A0%E7%8E%87)。\n", "\n", "```python\n", "polynomial_decay_lr = nn.PolynomialDecayLR(learning_rate=0.1, end_learning_rate=0.01, decay_steps=4, power=0.5)\n", "optim = nn.Momentum(params, learning_rate=polynomial_decay_lr, momentum=0.9, weight_decay=0.0)\n", "\n", "grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)\n", "def train_step(data, label):\n", " (loss, _), grads = grad_fn(data, label)\n", " optimizer(grads)\n", " return loss\n", "```\n", "\n", "#### 自定义学习率差异\n", "\n", "PyTorch的动态学习率模块 `LRScheduler` 提供了`LambdaLR` 接口供用户自定义学习率调整规则,用户通过传入lambda表达式或自定义函数实现学习率指定。\n", "\n", "```python\n", "optimizer = optim.SGD(model.parameters(), lr=0.01)\n", "lbd = lambda epoch: epoch // 5\n", "scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lbd)\n", "\n", "for epoch in range(20):\n", " train(...)\n", " validate(...)\n", " scheduler.step()\n", "```\n", "\n", "MindSpore未提供类似的lambda接口,自定义学习率调整策略可以通过自定义函数或自定义 `LearningRateSchedule` 来实现。\n", "\n", "方式一:定义python函数指定计算逻辑,返回学习率列表:\n", "\n", "```python\n", "def dynamic_lr(lr, total_step, step_per_epoch):\n", " lrs = []\n", " for i in range(total_step):\n", " current_epoch = i // step_per_epoch\n", " factor = current_epoch // 5\n", " lrs.append(lr * factor)\n", " return lrs\n", "\n", "decay_lr = dynamic_lr(lr=0.01, total_step=200, step_per_epoch=10)\n", "optim = nn.SGD(params, learning_rate=decay_lr)\n", "```\n", "\n", "方式二:继承 `LearningRateSchedule`,在 `construct` 方法中定义变化策略:\n", "\n", "```python\n", "class DynamicDecayLR(LearningRateSchedule):\n", " def __init__(self, lr, step_per_epoch):\n", " super(DynamicDecayLR, self).__init__()\n", " self.lr = lr\n", " self.step_per_epoch = step_per_epoch\n", " self.cast = P.Cast()\n", "\n", " def construct(self, global_step):\n", " current_epoch = self.cast(global_step, mstype.float32) // step_per_epoch\n", " return 
self.lr * (current_epoch // 5)\n", "\n", "decay_lr = DynamicDecayLR(lr=0.01, step_per_epoch=10)\n", "optim = nn.SGD(params, learning_rate=decay_lr)\n", "```\n", "\n", "#### 学习率获取\n", "\n", "PyTorch:\n", "\n", "- 固定学习率情况下,通常通过 `optimizer.state_dict()` 进行学习率的查看和打印,例如参数分组时,对于第 n 个参数组,使用 `optimizer.state_dict()['param_groups'][n]['lr']`;参数不分组时,使用 `optimizer.state_dict()['param_groups'][0]['lr']`;\n", "\n", "- 动态学习率情况下,可以使用 `LRScheduler` 的 `get_lr` 方法获取当前学习率,或使用 `print_lr` 方法打印学习率。\n", "\n", "MindSpore:\n", "\n", "- 目前未提供直接查看学习率的接口,后续版本中会针对此问题进行修复。\n", "\n", "### 权重衰减\n", "\n", "PyTorch中修改 `weight_decay`:\n", "\n", "```python\n", "from torch import optim\n", "\n", "optimizer = optim.SGD(param_groups, lr=0.01, weight_decay=0.1)\n", "decay_factor = 0.1\n", "def train_step(data, label):\n", "    optimizer.zero_grad()\n", "    output = model(data)\n", "    loss = loss_fn(output, label)\n", "    loss.backward()\n", "    optimizer.step()\n", "    for param_group in optimizer.param_groups:\n", "        param_group[\"weight_decay\"] *= decay_factor\n", "```\n", "\n", "MindSpore中实现动态weight decay:用户可以继承 `Cell` 自定义动态weight decay的类,传入优化器中。\n", "\n", "```python\n", "class ExponentialWeightDecay(Cell):\n", "\n", "    def __init__(self, weight_decay, decay_rate, decay_steps):\n", "        super(ExponentialWeightDecay, self).__init__()\n", "        self.weight_decay = weight_decay\n", "        self.decay_rate = decay_rate\n", "        self.decay_steps = decay_steps\n", "\n", "    def construct(self, global_step):\n", "        p = global_step / self.decay_steps\n", "        return self.weight_decay * ops.pow(self.decay_rate, p)\n", "\n", "weight_decay = ExponentialWeightDecay(weight_decay=0.1, decay_rate=0.1, decay_steps=10000)\n", "optimizer = nn.SGD(net.trainable_params(), weight_decay=weight_decay)\n", "```\n", "\n", "### 优化器状态的保存与加载\n", "\n", "PyTorch的优化器模块提供了 `state_dict()` 用于优化器状态的查看及保存,`load_state_dict` 用于优化器状态的加载。\n", "\n", "- 优化器保存,可以使用 `torch.save()` 把获取到的 `state_dict` 保存到pkl文件中:\n", "\n", "    ```python\n", "    optimizer = optim.SGD(param_groups, lr=0.01)\n", "    torch.save(optimizer.state_dict(), save_path)\n", "    ```\n", "\n", "- 优化器加载,可以使用 `torch.load()` 加载保存的 `state_dict`,然后使用 `load_state_dict` 将获取到的 `state_dict` 加载到优化器中:\n", "\n", "    ```python\n", "    optimizer = optim.SGD(param_groups, lr=0.01)\n", "    state_dict = torch.load(save_path)\n", "    optimizer.load_state_dict(state_dict)\n", "    ```\n", "\n", "MindSpore的优化器模块继承自 `Cell`,优化器的保存与加载和网络的保存与加载方式相同,通常情况下配合 `save_checkpoint` 与 `load_checkpoint` 使用。\n", "\n", "- 优化器保存,可以使用 `mindspore.save_checkpoint()` 将优化器实例保存到ckpt文件中:\n", "\n", "    ```python\n", "    optimizer = nn.SGD(param_groups, learning_rate=0.01)\n", "    mindspore.save_checkpoint(optimizer, save_path)\n", "    ```\n", "\n", "- 优化器加载,可以使用 `mindspore.load_checkpoint()` 加载保存的ckpt文件,然后使用 `load_param_into_net` 将获取到的 `param_dict` 加载到优化器中:\n", "\n", "    ```python\n", "    optimizer = nn.SGD(param_groups, learning_rate=0.01)\n", "    param_dict = mindspore.load_checkpoint(save_path)\n", "    mindspore.load_param_into_net(optimizer, param_dict)\n", "    ```" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "source": [ "## 与PyTorch随机数策略的区别\n", "\n", "### API名称\n", "\n", "PyTorch与MindSpore在接口名称上无差异,MindSpore由于不支持原地修改,所以缺少Tensor.random_接口。其余接口均可和PyTorch一一对应。\n", "\n", "### 随机种子和生成器\n", "\n", "MindSpore使用seed控制随机数的生成,而PyTorch使用torch.Generator进行随机数的控制。\n", "\n", "1. 
MindSpore的seed分为两个等级,graph-level和op-level。graph-level下seed作为全局变量,绝大多数情况下无需用户设置,用户只需调整op-level\n", " seed。(API中涉及的seed参数,均为op-level)\n", " 如果一段程序中两次使用了同一个随机数算法,那么两次的结果是不同的(尽管设置了相同的随机种子);如果重新运行脚本,那么第二次运行的结果应该与第一次保持一致。示例如下:\n", "\n", " ```python\n", " # If a random op is called twice within one program, the two results will be different:\n", " import mindspore as ms\n", " from mindspore import Tensor, ops\n", "\n", " minval = Tensor(1.0, ms.float32)\n", " maxval = Tensor(2.0, ms.float32)\n", " print(ops.uniform((1, 4), minval, maxval, seed=1)) # generates 'A1'\n", " print(ops.uniform((1, 4), minval, maxval, seed=1)) # generates 'A2'\n", " # If the same program runs again, it repeat the results:\n", " print(ops.uniform((1, 4), minval, maxval, seed=1)) # generates 'A1'\n", " print(ops.uniform((1, 4), minval, maxval, seed=1)) # generates 'A2'\n", " ```" ], "metadata": { "collapsed": false } }, { "cell_type": "markdown", "source": [ "2. torch.Generator常在函数中作为关键字参数传入。在未指定/实例化Generator时,会使用默认Generator (torch.default_generator)。可以使用以下代码设置指定的torch.Generator的seed:\n", "\n", " ```python\n", " G = torch.Generator()\n", " G.manual_seed(1)\n", " ```\n", "\n", " 此时和使用default_generator并将seed设置为1的结果相同。例如torch.manual_seed(1)。\n", "\n", " PyTorch的Generator中的state表示的是此Generator的状态,长度为5056,dtype为uint8的Tensor。在同一个脚本中,多次使用同一个Generator,Generator的state会发生改变。在有两个/多个Generator的情况下,如g1,g2,可以设置 g2.set_state(g1.get_state()) 使得g2达到和g1相同的状态。即使用g2相当于使用当时状态的g1。如果g1和g2具有相同的seed和state,则二者生成的随机数相同。" ], "metadata": { "collapsed": false } } ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "vscode": { "interpreter": { "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1" } } }, "nbformat": 4, "nbformat_minor": 5 }