{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 损失函数\n", "\n", "`Ascend` `GPU` `CPU` `模型开发`\n", "\n", "[![在线运行](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_modelarts.png)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvcHJvZ3JhbW1pbmdfZ3VpZGUvemhfY24vbWluZHNwb3JlX2xvc3MuaXB5bmI=&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) [![下载Notebook](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_notebook.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.6/programming_guide/zh_cn/mindspore_loss.ipynb) [![下载样例代码](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_download_code.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.6/programming_guide/zh_cn/mindspore_loss.py) [![查看源文件](https://gitee.com/mindspore/docs/raw/r1.6/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.6/docs/mindspore/programming_guide/source_zh_cn/loss.ipynb)\n", "\n", "## 概述\n", "\n", "损失函数,又叫目标函数,用于衡量预测值与真实值差异的程度。在深度学习中,模型训练就是通过不停地迭代来缩小损失函数值的过程。因此,在模型训练过程中损失函数的选择非常重要,定义一个好的损失函数,可以有效提高模型的性能。\n", "\n", "MindSpore提供了许多通用损失函数供用户选择,但这些通用损失函数并不适用于所有场景,很多情况需要用户自定义所需的损失函数。因此,本教程介绍损失函数的写作方法。\n", "\n", "目前MindSpore主要支持的损失函数有`L1Loss`、`MSELoss`、`SmoothL1Loss`、`SoftmaxCrossEntropyWithLogits`、`SampledSoftmaxLoss`、`BCELoss`和`CosineEmbeddingLoss`。\n", "\n", "MindSpore的损失函数全部是`Cell`的子类实现,所以也支持用户自定义损失函数,其构造方法在[定义损失函数](#定义损失函数)中进行介绍。\n", "\n", "## 内置损失函数\n", "\n", "- L1Loss\n", "\n", " 计算两个输入数据的绝对值误差,用于回归模型。`reduction`参数默认值为mean,返回loss平均值结果,若`reduction`值为sum,返回loss累加结果,若`reduction`值为none,返回每个loss的结果。\n", "\n", "- MSELoss\n", "\n", " 计算两个输入数据的平方误差,用于回归模型。`reduction`参数同`L1Loss`。\n", "\n", "- SmoothL1Loss\n", "\n", " `SmoothL1Loss`为平滑L1损失函数,用于回归模型,阈值`beta`默认参数为1。\n", "\n", "- SoftmaxCrossEntropyWithLogits\n", "\n", " 
Cross-entropy loss, used for classification models. When the label data is not one-hot encoded, the `sparse` parameter must be set to True. The `reduction` parameter defaults to none; its meaning is the same as in `L1Loss`.\n", "\n", "- CosineEmbeddingLoss\n", "\n", "    `CosineEmbeddingLoss` measures how similar two inputs are; used for classification models. `margin` defaults to 0.0, and the `reduction` parameter behaves as in `L1Loss`.\n", "\n", "- BCELoss\n", "\n", "    Binary cross-entropy loss, used for binary classification. `weight` is the loss weight of each training sample in a batch; it defaults to None, meaning all weights are 1. The `reduction` parameter defaults to none; its meaning is the same as in `L1Loss`.\n", "\n", "- SampledSoftmaxLoss\n", "\n", "    Sampled cross-entropy loss, used for classification models, generally when the number of classes is very large. `num_sampled` is the number of classes to sample, `num_classes` is the total number of classes, and `num_true` is the number of target classes per example. `sampled_values` holds the sampling candidates and defaults to None, `remove_accidental_hits` is a switch for removing \"accidental hits\" among the samples, and `seed` is the random seed for sampling, defaulting to 0. The `reduction` parameter defaults to none; its meaning is the same as in `L1Loss`.\n", "\n", "### Applying the Built-in Loss Functions\n", "\n", "All MindSpore loss functions live under `mindspore.nn` and are used as follows:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.717822Z", "start_time": "2021-12-29T03:42:20.636585Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.5\n" ] } ], "source": [ "import numpy as np\n", "import mindspore.nn as nn\n", "from mindspore import Tensor\n", "\n", "loss = nn.L1Loss()\n", "input_data = Tensor(np.array([[1, 2, 3], [2, 3, 4]]).astype(np.float32))\n", "target_data = Tensor(np.array([[0, 2, 5], [3, 1, 1]]).astype(np.float32))\n", "print(loss(input_data, target_data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This example constructs two Tensors, defines the loss with the `nn.L1Loss` interface, and passes `input_data` and `target_data` to it to compute the L1 loss; the result is 1.5. With `loss = nn.L1Loss(reduction='sum')` the result is 9.0; with `loss = nn.L1Loss(reduction='none')` the result is `[[1. 0. 2.] [1. 2. 
3.]]`.\n", "\n", "## Defining a Loss Function\n", "\n", "Cell is the basic network unit of MindSpore and can be used to build networks; loss functions are likewise defined through Cell. Defining a loss function with Cell is the same as defining an ordinary network, except that its execution logic computes the error between the forward network's output and the ground truth.\n", "\n", "Taking the MindSpore loss function L1Loss as an example, a loss function is defined as follows:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.729232Z", "start_time": "2021-12-29T03:42:22.723517Z" } }, "outputs": [], "source": [ "import mindspore.nn as nn\n", "import mindspore.ops as ops\n", "\n", "class L1Loss(nn.Cell):\n", "    def __init__(self):\n", "        super(L1Loss, self).__init__()\n", "        self.abs = ops.Abs()\n", "        self.reduce_mean = ops.ReduceMean()\n", "\n", "    def construct(self, base, target):\n", "        x = self.abs(base - target)\n", "        return self.reduce_mean(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instantiate the required operators in the `__init__` method and call them in `construct`. With that, a loss function computing the L1 loss is defined.\n", "\n", "Given a set of predictions and ground-truth values, calling the loss function yields the difference between them, as shown below:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.757923Z", "start_time": "2021-12-29T03:42:22.731313Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.033333335\n" ] } ], "source": [ "import numpy as np\n", "from mindspore import Tensor\n", "\n", "loss = L1Loss()\n", "input_data = Tensor(np.array([0.1, 0.2, 0.3]).astype(np.float32))\n", "target_data = Tensor(np.array([0.1, 0.2, 0.2]).astype(np.float32))\n", "\n", "output = loss(input_data, target_data)\n", "print(output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A loss function can also be defined by inheriting from the loss base class `LossBase`. `LossBase` provides the `get_loss` method, which sums or averages the loss values and outputs a scalar. The definition of L1Loss with `LossBase` as its base class is as follows:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.766767Z", "start_time": "2021-12-29T03:42:22.759510Z" } }, "outputs": [], "source": [ "import mindspore.ops as ops\n", "from mindspore.nn import LossBase\n", "\n", "class L1Loss(LossBase):\n", "    def 
__init__(self, reduction=\"mean\"):\n", "        super(L1Loss, self).__init__(reduction)\n", "        self.abs = ops.Abs()\n", "\n", "    def construct(self, base, target):\n", "        x = self.abs(base - target)\n", "        return self.get_loss(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, use `LossBase` as the base class of L1Loss; then add a `reduction` parameter to `__init__` and pass it to the base class via `super`; finally, call the base class's `get_loss` method in `construct`. `reduction` has three valid values, `mean`, `sum` and `none`, which return the mean, the sum, and the unreduced values respectively.\n", "\n", "## Loss Functions and Model Training\n", "\n", "Next, use the L1Loss defined above to train a model.\n", "\n", "### Defining the Dataset and Network\n", "\n", "A simple linear fitting scenario serves as the example here; the dataset and network structure are defined as follows:\n", "\n", "> For a detailed introduction to linear fitting, see the tutorial [Implementing Simple Linear Function Fitting](https://www.mindspore.cn/tutorials/zh-CN/r1.6/linear_regression.html).\n", "\n", "1. Define the dataset" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.928833Z", "start_time": "2021-12-29T03:42:22.768376Z" } }, "outputs": [], "source": [ "import numpy as np\n", "from mindspore import dataset as ds\n", "\n", "def get_data(num, w=2.0, b=3.0):\n", "    for _ in range(num):\n", "        x = np.random.uniform(-10.0, 10.0)\n", "        noise = np.random.normal(0, 1)\n", "        y = x * w + b + noise\n", "        yield np.array([x]).astype(np.float32), np.array([y]).astype(np.float32)\n", "\n", "def create_dataset(num_data, batch_size=16):\n", "    dataset = ds.GeneratorDataset(list(get_data(num_data)), column_names=['data', 'label'])\n", "    dataset = dataset.batch(batch_size)\n", "    return dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2. 
Define the network" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.934556Z", "start_time": "2021-12-29T03:42:22.930391Z" } }, "outputs": [], "source": [ "from mindspore.common.initializer import Normal\n", "import mindspore.nn as nn\n", "\n", "class LinearNet(nn.Cell):\n", "    def __init__(self):\n", "        super(LinearNet, self).__init__()\n", "        self.fc = nn.Dense(1, 1, Normal(0.02), Normal(0.02))\n", "\n", "    def construct(self, x):\n", "        return self.fc(x)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training with Model\n", "\n", "`Model` is a high-level MindSpore API for model training, evaluation and inference. Once a dataset is created and a `Model` is defined, the model can be trained through the `train` interface. Next we train with `Model`, using the `L1Loss` defined earlier as the loss function for this training.\n", "\n", "1. Define the forward network, loss function and optimizer\n", "\n", "    Use the previously defined `LinearNet` and `L1Loss` as the forward network and loss function, and choose `Momentum`, provided by MindSpore, as the optimizer." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.967800Z", "start_time": "2021-12-29T03:42:22.935596Z" } }, "outputs": [], "source": [ "# define network\n", "net = LinearNet()\n", "# define loss function\n", "loss = L1Loss()\n", "# define optimizer\n", "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2. Define the `Model`\n", "\n", "    When defining a `Model`, the forward network, loss function and optimizer must be specified; `Model` associates them internally to form a training network." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:22.985154Z", "start_time": "2021-12-29T03:42:22.970419Z" } }, "outputs": [], "source": [ "from mindspore import Model\n", "\n", "# define Model\n", "model = Model(net, loss, opt)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "3. 
Create the dataset and call the `train` interface to train the model\n", "\n", "    When calling `train`, the number of epochs `epoch` and the training dataset `train_dataset` must be specified; we set `epoch` to 1 and use the dataset created by `create_dataset` as the training set. `callbacks` is an optional parameter of `train`; passing `LossMonitor` in `callbacks` monitors how the loss value changes during training. `dataset_sink_mode` is also optional and is set to `False` here, meaning training runs in non-sink mode." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.310405Z", "start_time": "2021-12-29T03:42:22.987190Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 1 step: 1, loss is 10.873164\n", "epoch: 1 step: 2, loss is 8.610179\n", "epoch: 1 step: 3, loss is 11.19385\n", "epoch: 1 step: 4, loss is 9.515182\n", "epoch: 1 step: 5, loss is 10.637554\n", "epoch: 1 step: 6, loss is 7.874376\n", "epoch: 1 step: 7, loss is 10.926842\n", "epoch: 1 step: 8, loss is 8.103548\n", "epoch: 1 step: 9, loss is 7.984128\n", "epoch: 1 step: 10, loss is 5.4864826\n" ] } ], "source": [ "from mindspore.train.callback import LossMonitor\n", "\n", "# create dataset\n", "ds_train = create_dataset(num_data=160)\n", "# training\n", "model.train(epoch=1, train_dataset=ds_train, callbacks=[LossMonitor()], dataset_sink_mode=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The complete code is as follows:\n", "\n", "> In the example below, parameter initialization uses random values, so the output you see may differ from a local run; to obtain stable, fixed output, set a fixed random seed as described in [mindspore.set_seed()](https://www.mindspore.cn/docs/api/zh-CN/r1.6/api_python/mindspore/mindspore.set_seed.html)." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.488075Z", "start_time": "2021-12-29T03:42:23.312491Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 1 step: 1, loss is 10.433354\n", "epoch: 1 step: 2, loss is 10.089837\n", "epoch: 1 step: 3, loss is 8.573132\n", "epoch: 1 step: 4, loss is 9.076482\n", "epoch: 1 step: 5, loss is 11.561897\n", "epoch: 1 step: 6, loss is 7.7453585\n", "epoch: 1 step: 7, loss is 8.984859\n", "epoch: 1 step: 8, loss is 
7.926591\n", "epoch: 1 step: 9, loss is 5.3957634\n", "epoch: 1 step: 10, loss is 7.3769226\n" ] } ], "source": [ "import numpy as np\n", "\n", "import mindspore.nn as nn\n", "import mindspore.ops as ops\n", "from mindspore import Model\n", "from mindspore import dataset as ds\n", "from mindspore.nn import LossBase\n", "from mindspore.common.initializer import Normal\n", "from mindspore.train.callback import LossMonitor\n", "\n", "class LinearNet(nn.Cell):\n", "    def __init__(self):\n", "        super(LinearNet, self).__init__()\n", "        self.fc = nn.Dense(1, 1, Normal(0.02), Normal(0.02))\n", "\n", "    def construct(self, x):\n", "        return self.fc(x)\n", "\n", "class L1Loss(LossBase):\n", "    def __init__(self, reduction=\"mean\"):\n", "        super(L1Loss, self).__init__(reduction)\n", "        self.abs = ops.Abs()\n", "\n", "    def construct(self, base, target):\n", "        x = self.abs(base - target)\n", "        return self.get_loss(x)\n", "\n", "def get_data(num, w=2.0, b=3.0):\n", "    for _ in range(num):\n", "        x = np.random.uniform(-10.0, 10.0)\n", "        noise = np.random.normal(0, 1)\n", "        y = x * w + b + noise\n", "        yield np.array([x]).astype(np.float32), np.array([y]).astype(np.float32)\n", "\n", "def create_dataset(num_data, batch_size=16):\n", "    dataset = ds.GeneratorDataset(list(get_data(num_data)), column_names=['data', 'label'])\n", "    dataset = dataset.batch(batch_size)\n", "    return dataset\n", "\n", "# define network\n", "net = LinearNet()\n", "# define loss function\n", "loss = L1Loss()\n", "# define optimizer\n", "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)\n", "# define Model\n", "model = Model(net, loss, opt)\n", "# create dataset\n", "ds_train = create_dataset(num_data=160)\n", "# training\n", "model.train(epoch=1, train_dataset=ds_train, callbacks=[LossMonitor()], dataset_sink_mode=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multi-label Loss Functions and Model Training\n", "\n", "The previous section defined a simple loss function, `L1Loss`; other loss functions can be written after its pattern. However, the datasets of many deep learning applications are more complex; for example, the data of the object detection network Faster 
R-CNN contains multiple labels rather than a simple data/label pair, and in such cases the loss function is defined and used slightly differently.\n", "\n", "The Faster R-CNN network structure is too complex to expand on here. This section extends the linear fitting scenario described in the previous section, manually builds a multi-label dataset, and shows how to define a loss function for this scenario and train with `Model`.\n", "\n", "### Defining a Multi-label Dataset\n", "\n", "First, define the dataset, making small modifications to the earlier one:\n", "\n", "1. `get_multilabel_data` produces two labels, `y1` and `y2`\n", "2. The `column_names` parameter of `GeneratorDataset` is set to ['data', 'label1', 'label2']\n", "\n", "The dataset produced by `create_multilabel_dataset` then contains one piece of data, `data`, and two labels, `label1` and `label2`." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.498060Z", "start_time": "2021-12-29T03:42:23.490171Z" } }, "outputs": [], "source": [ "import numpy as np\n", "from mindspore import dataset as ds\n", "\n", "def get_multilabel_data(num, w=2.0, b=3.0):\n", "    for _ in range(num):\n", "        x = np.random.uniform(-10.0, 10.0)\n", "        noise1 = np.random.normal(0, 1)\n", "        noise2 = np.random.normal(-1, 1)\n", "        y1 = x * w + b + noise1\n", "        y2 = x * w + b + noise2\n", "        yield np.array([x]).astype(np.float32), np.array([y1]).astype(np.float32), np.array([y2]).astype(np.float32)\n", "\n", "def create_multilabel_dataset(num_data, batch_size=16):\n", "    dataset = ds.GeneratorDataset(list(get_multilabel_data(num_data)), column_names=['data', 'label1', 'label2'])\n", "    dataset = dataset.batch(batch_size)\n", "    return dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Defining a Multi-label Loss Function\n", "\n", "For the dataset created in the previous step, define the loss function `L1LossForMultiLabel`. Its `construct` now takes three inputs: the prediction `base` and the ground truths `target1` and `target2`. In `construct` we compute the error between the prediction and `target1` and between the prediction and `target2`, and take the mean of the two errors as the final loss value, as follows:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.536881Z", "start_time": "2021-12-29T03:42:23.499763Z" } }, "outputs": [], "source": [ "import mindspore.ops as ops\n", "from mindspore.nn import LossBase\n", "\n", "class L1LossForMultiLabel(LossBase):\n", "    def __init__(self, reduction=\"mean\"):\n", "        super(L1LossForMultiLabel, 
self).__init__(reduction)\n", "        self.abs = ops.Abs()\n", "\n", "    def construct(self, base, target1, target2):\n", "        x1 = self.abs(base - target1)\n", "        x2 = self.abs(base - target2)\n", "        return self.get_loss(x1)/2 + self.get_loss(x2)/2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Multi-label Training with Model\n", "\n", "As mentioned earlier, Model internally associates the user-specified forward network, loss function and optimizer. The forward network and loss function are connected through `nn.WithLossCell`, which links them as follows:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.553801Z", "start_time": "2021-12-29T03:42:23.539040Z" } }, "outputs": [], "source": [ "import mindspore.nn as nn\n", "\n", "class WithLossCell(nn.Cell):\n", "    def __init__(self, backbone, loss_fn):\n", "        super(WithLossCell, self).__init__(auto_prefix=False)\n", "        self._backbone = backbone\n", "        self._loss_fn = loss_fn\n", "\n", "    def construct(self, data, label):\n", "        output = self._backbone(data)\n", "        return self._loss_fn(output, label)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the `nn.WithLossCell` used by `Model` by default has only two inputs, `data` and `label`, which clearly does not fit the multi-label scenario. To train with `Model` in this case, the user must connect the forward network and loss function manually, as follows:\n", "\n", "1. Define a `CustomWithLossCell` suited to the current scenario\n", "\n", "    Define it after the pattern of `nn.WithLossCell`, changing `construct` to take three inputs: the data is passed to the `backbone`, and the prediction together with the two labels is passed to `loss_fn`." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.568484Z", "start_time": "2021-12-29T03:42:23.555431Z" } }, "outputs": [], "source": [ "import mindspore.nn as nn\n", "\n", "class CustomWithLossCell(nn.Cell):\n", "    def __init__(self, backbone, loss_fn):\n", "        super(CustomWithLossCell, self).__init__(auto_prefix=False)\n", "        self._backbone = backbone\n", "        self._loss_fn = loss_fn\n", "\n", "    def construct(self, data, label1, label2):\n", "        output = self._backbone(data)\n", "        return self._loss_fn(output, label1, label2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "2. 
Connect the forward network and loss function with `CustomWithLossCell`\n", "\n", "    Use the `LinearNet` defined in the previous section as the forward network and `L1LossForMultiLabel` as the loss function, and connect them with `CustomWithLossCell`, as follows:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.585735Z", "start_time": "2021-12-29T03:42:23.570076Z" } }, "outputs": [], "source": [ "net = LinearNet()\n", "loss = L1LossForMultiLabel()\n", "loss_net = CustomWithLossCell(net, loss)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`loss_net` now contains the computation logic of both the forward network and the loss function.\n", "\n", "3. Define the Model and train\n", "\n", "    Specify `loss_net` as the `network` of `Model`, leave `loss_fn` unspecified, and keep `Momentum` as the optimizer. When the user does not specify `loss_fn`, `Model` assumes that `network` already implements the loss logic and does not wrap the forward network and loss function with `nn.WithLossCell`.\n", "\n", "    Create the multi-label dataset with `create_multilabel_dataset` and train:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:23.849330Z", "start_time": "2021-12-29T03:42:23.587785Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 1 step: 1, loss is 9.330671\n", "epoch: 1 step: 2, loss is 9.211433\n", "epoch: 1 step: 3, loss is 10.748208\n", "epoch: 1 step: 4, loss is 9.857357\n", "epoch: 1 step: 5, loss is 10.549928\n", "epoch: 1 step: 6, loss is 7.1761103\n", "epoch: 1 step: 7, loss is 7.3835974\n", "epoch: 1 step: 8, loss is 6.8807564\n", "epoch: 1 step: 9, loss is 7.6074677\n", "epoch: 1 step: 10, loss is 6.242437\n" ] } ], "source": [ "from mindspore.train.callback import LossMonitor\n", "from mindspore import Model\n", "\n", "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)\n", "model = Model(network=loss_net, optimizer=opt)\n", "ds_train = create_multilabel_dataset(num_data=160)\n", "model.train(epoch=1, train_dataset=ds_train, callbacks=[LossMonitor()], dataset_sink_mode=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The complete code is as follows:\n", "\n", "> 
In the example below, parameter initialization uses random values, so the output you see may differ from a local run; to obtain stable, fixed output, set a fixed random seed as described in [mindspore.set_seed()](https://www.mindspore.cn/docs/api/zh-CN/r1.6/api_python/mindspore/mindspore.set_seed.html)." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "ExecuteTime": { "end_time": "2021-12-29T03:42:24.079033Z", "start_time": "2021-12-29T03:42:23.851418Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 1 step: 1, loss is 9.785895\n", "epoch: 1 step: 2, loss is 11.384043\n", "epoch: 1 step: 3, loss is 7.1100893\n", "epoch: 1 step: 4, loss is 8.7659445\n", "epoch: 1 step: 5, loss is 9.706842\n", "epoch: 1 step: 6, loss is 8.656888\n", "epoch: 1 step: 7, loss is 7.243322\n", "epoch: 1 step: 8, loss is 7.7284703\n", "epoch: 1 step: 9, loss is 6.5078716\n", "epoch: 1 step: 10, loss is 6.4996743\n" ] } ], "source": [ "import numpy as np\n", "\n", "import mindspore.nn as nn\n", "import mindspore.ops as ops\n", "from mindspore import Model\n", "from mindspore import dataset as ds\n", "from mindspore.nn import LossBase\n", "from mindspore.common.initializer import Normal\n", "from mindspore.train.callback import LossMonitor\n", "\n", "class LinearNet(nn.Cell):\n", "    def __init__(self):\n", "        super(LinearNet, self).__init__()\n", "        self.fc = nn.Dense(1, 1, Normal(0.02), Normal(0.02))\n", "\n", "    def construct(self, x):\n", "        return self.fc(x)\n", "\n", "class L1LossForMultiLabel(LossBase):\n", "    def __init__(self, reduction=\"mean\"):\n", "        super(L1LossForMultiLabel, self).__init__(reduction)\n", "        self.abs = ops.Abs()\n", "\n", "    def construct(self, base, target1, target2):\n", "        x1 = self.abs(base - target1)\n", "        x2 = self.abs(base - target2)\n", "        return self.get_loss(x1)/2 + self.get_loss(x2)/2\n", "\n", "class CustomWithLossCell(nn.Cell):\n", "    def __init__(self, backbone, loss_fn):\n", "        super(CustomWithLossCell, self).__init__(auto_prefix=False)\n", "        self._backbone = backbone\n", "        self._loss_fn = loss_fn\n", "\n", "    def 
construct(self, data, label1, label2):\n", "        output = self._backbone(data)\n", "        return self._loss_fn(output, label1, label2)\n", "\n", "def get_multilabel_data(num, w=2.0, b=3.0):\n", "    for _ in range(num):\n", "        x = np.random.uniform(-10.0, 10.0)\n", "        noise1 = np.random.normal(0, 1)\n", "        noise2 = np.random.normal(-1, 1)\n", "        y1 = x * w + b + noise1\n", "        y2 = x * w + b + noise2\n", "        yield np.array([x]).astype(np.float32), np.array([y1]).astype(np.float32), np.array([y2]).astype(np.float32)\n", "\n", "def create_multilabel_dataset(num_data, batch_size=16):\n", "    dataset = ds.GeneratorDataset(list(get_multilabel_data(num_data)), column_names=['data', 'label1', 'label2'])\n", "    dataset = dataset.batch(batch_size)\n", "    return dataset\n", "\n", "net = LinearNet()\n", "loss = L1LossForMultiLabel()\n", "# build loss network\n", "loss_net = CustomWithLossCell(net, loss)\n", "\n", "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)\n", "model = Model(network=loss_net, optimizer=opt)\n", "ds_train = create_multilabel_dataset(num_data=160)\n", "model.train(epoch=1, train_dataset=ds_train, callbacks=[LossMonitor()], dataset_sink_mode=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This section briefly explained how to define a loss function and train a model with Model in a multi-label dataset scenario. The same approach applies to model training in many other scenarios." ] } ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 4 }