[{"data":1,"prerenderedAt":222},["ShallowReactive",2],{"content-query-edcj15eu7i":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":216,"_id":217,"_source":218,"_file":219,"_stem":220,"_extension":221},"/technology-blogs/zh/1628","zh",false,"","YOLOv3人体目标检测模型实现（一）","利用MindSpore框架搭建YOLOv3目标检测模型，从PASCAL VOC 2012数据集中提取出的人体目标检测数据进行模型训练，得到一个人体目标检测模型。","2022-07-13","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/07/18/feabb56152b74543aa673601f8ec4244.png","technology-blogs","实践",{"type":15,"children":16,"toc":203},"root",[17,25,31,38,53,65,71,76,83,88,93,99,104,114,119,133,138,143,149,154,163,171,184,190,198],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"yolov3人体目标检测模型实现一",[23],{"type":24,"value":8},"text",{"type":18,"tag":26,"props":27,"children":28},"p",{},[29],{"type":24,"value":30},"本项目利用MindSpore框架搭建YOLOv3目标检测模型，从PASCAL VOC 2012数据集中提取出的人体目标检测数据进行模型训练，得到一个人体目标检测模型。期望通过本次项目为MindSpore生态尽自己的一份绵薄之力。",{"type":18,"tag":32,"props":33,"children":35},"h2",{"id":34},"_1环境准备",[36],{"type":24,"value":37},"1.环境准备",{"type":18,"tag":26,"props":39,"children":40},{},[41,43,51],{"type":24,"value":42},"选择MindSpore版本为1.5或1.6，硬件为GPU。可以参照 ",{"type":18,"tag":44,"props":45,"children":49},"a",{"href":46,"rel":47},"https://www.mindspore.cn/install",[48],"nofollow",[50],{"type":24,"value":46},{"type":24,"value":52}," 根据自己的本地环境进行安装。",{"type":18,"tag":26,"props":54,"children":55},{},[56,58,63],{"type":24,"value":57},"笔者使用了华为云->ModelArt->开发环境->notebook，用这个产品的好处是它的环境包括MindSpore框架都已经装好了，笔者选择的规格是 GPU: 1*V100(32GB)|CPU: 8核 64GB ，这种规格大概要花100~200元（具体我忘了，也许更少）能跑完本项目，而且速度还挺快。升级MindSpore版本同样可参考 ",{"type":18,"tag":44,"props":59,"children":61},{"href":46,"rel":60},[48],[62],{"type":24,"value":46},{"type":24,"value":64}," 
。",{"type":18,"tag":32,"props":66,"children":68},{"id":67},"_2数据集处理",[69],{"type":24,"value":70},"2.数据集处理",{"type":18,"tag":26,"props":72,"children":73},{},[74],{"type":24,"value":75},"PASCAL VOC 2012数据集包含训练集 5717 张，验证集 5823 张，共有20个检测的类别。",{"type":18,"tag":77,"props":78,"children":80},"h3",{"id":79},"_21-提取出person目标检测数据对检测框作聚类",[81],{"type":24,"value":82},"2.1 提取出\"person\"目标检测数据，对检测框作聚类",{"type":18,"tag":26,"props":84,"children":85},{},[86],{"type":24,"value":87},"我们首先要从数据集中提取带\"person\"目标的图片，然后遍历\"person\"目标检测框，以检测框长和宽为坐标选出9个聚类中心点，这9个中心点的坐标将用于作为YOLOv3的先验框大小。",{"type":18,"tag":26,"props":89,"children":90},{},[91],{"type":24,"value":92},"这部分内容的代码可以参考 附件\\choose_person 文件夹，将这个文件夹下的python代码放到voc2012数据集的 VOCtrainval_11-May-2012\\VOCdevkit 目录（该目录中有一个VOC2012文件夹，里面有ImageSets等目录），挨个运行cluster.py、rand_choose.py即可（可参考附件中的readme.txt）。在执行完两个python程序后，会得到类似于voc2012数据集 VOCtrainval_11-May-2012\\VOCdevkit\\VOC2012\\ImageSets\\Main\\ 目录下的 train.txt、val.txt 的 txt 文本文件，将它们放置于上述的 Main\\ 目录下。",{"type":18,"tag":77,"props":94,"children":96},{"id":95},"_22-数据集加载",[97],{"type":24,"value":98},"2.2 数据集加载",{"type":18,"tag":26,"props":100,"children":101},{},[102],{"type":24,"value":103},"MindSpore 为我们提供了加载 PASCAL VOC 数据集的接口 VOCDataset，可参考 MindSpore 官网中的文档使用。这里我们再为它提供一些配套的处理和封装（参见 附件\\dataset\\voc2012_dataset.py ）：",{"type":18,"tag":105,"props":106,"children":108},"pre",{"code":107},"\"\"\"读取VOC2012数据集\"\"\"\n\nimport mindspore.dataset as ds\nimport mindspore.dataset.vision.c_transforms as CV\n\nimport cv2\nimport numpy as np\n\n# 附件\\dataset\\transforms.py\nfrom transforms import reshape_fn, MultiScaleTrans\n\ndef create_voc2012_dataset(config, cv_num):\n    \"\"\"create VOC2012 dataset\"\"\"\n    \n    voc2012_dat = ds.VOCDataset(dataset_dir=config.data_path, task=\"Detection\", usage=config.data_usage, \n                         shuffle=config.data_training, num_parallel_workers=8)\n    dataset_size = voc2012_dat.get_dataset_size()\n    config.class_to_idx = 
voc2012_dat.get_class_indexing()\n    \n    cv2.setNumThreads(0)\n    if config.data_training:\n        \n        multi_scale_trans = MultiScaleTrans(config, cv_num)\n        \n        dataset_input_column_names = [\"image\", \"bbox\", \"label\", \"truncate\", \"difficult\"]\n        dataset_output_column_names = [\"image\", \"annotation\", \"bbox1\", \"bbox2\", \"bbox3\", \"gt_box1\", \"gt_box2\", \"gt_box3\"]\n        voc2012_dat = voc2012_dat.map(operations=CV.Decode(), input_columns=[\"image\"])\n        voc2012_dat = voc2012_dat.batch(config.batch_size, per_batch_map=multi_scale_trans, input_columns=dataset_input_column_names,\n                      output_columns=dataset_output_column_names, num_parallel_workers=8, drop_remainder=True)\n        \n        voc2012_dat = voc2012_dat.repeat(config.max_epoch-config.pretrained_epoch_num)\n        \n    else:\n        \n        img_id = np.array(range(0,dataset_size))\n        img_id = img_id.reshape((-1,1))\n        img_id = ds.GeneratorDataset(img_id, ['img_id'], shuffle=False)\n        voc2012_dat = voc2012_dat.zip(img_id)\n        \n        compose_map_func = (lambda image, img_id: reshape_fn(image, img_id, config))\n        voc2012_dat = voc2012_dat.map(operations=CV.Decode(), input_columns=[\"image\"], num_parallel_workers=8)\n        voc2012_dat = voc2012_dat.map(operations=compose_map_func, input_columns=[\"image\", \"img_id\"],\n                    output_columns=[\"image\", \"image_shape\", \"img_id\"],\n                    column_order=[\"image\", \"image_shape\", \"img_id\"],\n                    num_parallel_workers=8)\n        \n        hwc_to_chw = CV.HWC2CHW()\n        voc2012_dat = voc2012_dat.map(operations=hwc_to_chw, input_columns=[\"image\"], num_parallel_workers=8)\n        voc2012_dat = voc2012_dat.batch(config.batch_size, drop_remainder=True)\n        voc2012_dat = voc2012_dat.repeat(1)\n        \n    return voc2012_dat, 
dataset_size\n",[109],{"type":18,"tag":110,"props":111,"children":112},"code",{"__ignoreMap":7},[113],{"type":24,"value":107},{"type":18,"tag":26,"props":115,"children":116},{},[117],{"type":24,"value":118},"这个函数大概的意思是将 VOCDataset 读出的数据集处理成我们想要的样子，即最后返回的 voc2012_dat。dataset_size 是读出的数据集大小（图片张数）。",{"type":18,"tag":26,"props":120,"children":121},{},[122,124,131],{"type":24,"value":123},"对于训练来说（config.data_training==True），首先要将数据集处理成batch，并能以 MindSpore 的 Tensor 类型返回，这要求将一个 batch 中的图片reshape成统一大小，其中每张输入图像都对应三种尺度的输出特征图，即上方函数中的\"bbox1\"、\"bbox2\"、\"bbox3\"，函数 create_voc2012_dataset 要做的就是将数据集中的标签检测框数据根据坐标和大小映射到对应尺度的特征图中（方法是将标签检测框与选出的先验框进行IoU计算，标签检测框落入IoU得分大的对应先验框位置，详见 附件\\dataset\\transforms.py 中的 _preprocess_true_boxes 函数），最后 \"gt_box\" 以Tensor形式存放所有标签检测框，方便后续的计算。用于训练的图片还需要进行图像增强操作，包括对图像随机缩放、翻转、旋转、剪切、平移以及颜色随机变换，相应的标签检测框位置也要变换，这些操作都在 附件\\dataset\\transforms.py 文件（该文件来自于 ",{"type":18,"tag":44,"props":125,"children":128},{"href":126,"rel":127},"https://gitee.com/mindspore/models/blob/r1.5/official/cv/yolov3%5C_darknet53/src/transforms.py",[48],[129],{"type":24,"value":130},"https://gitee.com/mindspore/models/blob/r1.5/official/cv/yolov3\\_darknet53/src/transforms.py",{"type":24,"value":132}," ，我只改了少量内容，例如添加代码将VOCDataset读出的标签检测框格式[x y w h]处理成了[xmin ymin xmax ymax]）中，并最终在 MultiScaleTrans 中调用。",{"type":18,"tag":26,"props":134,"children":135},{},[136],{"type":24,"value":137},"对于测试，同样可以将图片处理成batch，也要reshape成统一大小，另外还要保留原图像的 shape （即上方函数的\"image_shape\"），方便将测试时模型推理得到的坐标映射到原图中。\"img_id\" 是从0到n-1（设数据集大小，即dataset_size为n）的数字，代表图片对应于 config.data_usage 所指向的文件（这个文件即2.1中生成的 txt 文件）的第几行，从而进一步得到对应是哪张图片。",{"type":18,"tag":26,"props":139,"children":140},{},[141],{"type":24,"value":142},"参数 config 
的具体含义和设置，以及数据集的具体使用方法，请见后面的模型训练和模型测试部分。",{"type":18,"tag":32,"props":144,"children":146},{"id":145},"_3模型搭建",[147],{"type":24,"value":148},"3.模型搭建",{"type":18,"tag":26,"props":150,"children":151},{},[152],{"type":24,"value":153},"本项目采用Darknet53作为YOLOv3的主干网络，Darknet53和YOLOv3模型的结构图如下：",{"type":18,"tag":26,"props":155,"children":156},{},[157],{"type":18,"tag":158,"props":159,"children":162},"img",{"alt":160,"src":161},"DarkNet53.png","https://bbs-img.huaweicloud.com/data/forums/attachment/forum/20227/10/1657462026681173065.png",[],{"type":18,"tag":26,"props":164,"children":165},{},[166],{"type":18,"tag":158,"props":167,"children":170},{"alt":168,"src":169},"YOLOv3.jpg","https://bbs-img.huaweicloud.com/data/forums/attachment/forum/20227/10/1657462079600992323.jpg",[],{"type":18,"tag":26,"props":172,"children":173},{},[174,176,182],{"type":24,"value":175},"下面我们开始构建模型，本项目参考了 ",{"type":18,"tag":44,"props":177,"children":180},{"href":178,"rel":179},"https://gitee.com/mindspore/models/tree/r1.5/official/cv/yolov3_darknet53",[48],[181],{"type":24,"value":178},{"type":24,"value":183}," ，寻着目录往上找还能找到许多MindSpore写的常见模型，一般大家将数据处理成他们的格式直接用他们的模型就行了。",{"type":18,"tag":77,"props":185,"children":187},{"id":186},"_31-darknet53",[188],{"type":24,"value":189},"3.1 Darknet53",{"type":18,"tag":105,"props":191,"children":193},{"code":192},"\"\"\"YOLOv3 backbone: darknet53\"\"\"\n\nimport mindspore.nn as nn\nfrom mindspore.ops import operations as P\n\ndef conv_block(in_channels,\n               out_channels,\n               kernel_size,\n               stride,\n               dilation=1):\n    \"\"\"Get a conv2d batchnorm and relu layer\"\"\"\n    pad_mode = 'same'\n    padding = 0\n\n    return nn.SequentialCell(\n        [nn.Conv2d(in_channels,\n                   out_channels,\n                   kernel_size=kernel_size,\n                   stride=stride,\n                   padding=padding,\n                   dilation=dilation,\n                   pad_mode=pad_mode),\n         
nn.BatchNorm2d(out_channels, momentum=0.1),\n         nn.ReLU()]\n    )\n    \nclass ResidualBlock(nn.Cell):\n    \"\"\"\n    DarkNet V1 residual block definition.\n\n    Args:\n        in_channels: Integer. Input channel.\n        out_channels: Integer. Output channel.\n\n    Returns:\n        Tensor, output tensor.\n    Examples:\n        ResidualBlock(3, 208)\n    \"\"\"\n    expansion = 4\n\n    def __init__(self,\n                 in_channels,\n                 out_channels):\n\n        super(ResidualBlock, self).__init__()\n        out_chls = out_channels//2\n        self.conv1 = conv_block(in_channels, out_chls, kernel_size=1, stride=1)\n        self.conv2 = conv_block(out_chls, out_channels, kernel_size=3, stride=1)\n        self.add = P.Add()\n\n    def construct(self, x):\n        identity = x\n        out = self.conv1(x)\n        out = self.conv2(out)\n        out = self.add(out, identity)\n\n        return out\n\nclass DarkNet(nn.Cell):\n    \"\"\"\n    DarkNet V1 network.\n\n    Args:\n        block: Cell. Block for network.\n        layer_nums: List. Numbers of different layers.\n        in_channels: Integer. Input channel.\n        out_channels: Integer. Output channel.\n        detect: Bool. Whether detect or not. 
Default:False.\n\n    Returns:\n        Tuple, tuple of output tensor,(f1,f2,f3,f4,f5).\n\n    Examples:\n        DarkNet(ResidualBlock,\n               [1, 2, 8, 8, 4],\n               [32, 64, 128, 256, 512],\n               [64, 128, 256, 512, 1024],\n               100)\n    \"\"\"\n    def __init__(self,\n                 block,\n                 layer_nums,\n                 in_channels,\n                 out_channels,\n                 detect=False):\n        super(DarkNet, self).__init__()\n\n        self.outchannel = out_channels[-1]\n        self.detect = detect\n\n        if not len(layer_nums) == len(in_channels) == len(out_channels) == 5:\n            raise ValueError(\"the length of layer_num, inchannel, outchannel list must be 5!\")\n        self.conv0 = conv_block(3,\n                                in_channels[0],\n                                kernel_size=3,\n                                stride=1)\n        self.conv1 = conv_block(in_channels[0],\n                                out_channels[0],\n                                kernel_size=3,\n                                stride=2)\n        self.layer1 = self._make_layer(block,\n                                       layer_nums[0],\n                                       in_channel=out_channels[0],\n                                       out_channel=out_channels[0])\n        self.conv2 = conv_block(in_channels[1],\n                                out_channels[1],\n                                kernel_size=3,\n                                stride=2)\n        self.layer2 = self._make_layer(block,\n                                       layer_nums[1],\n                                       in_channel=out_channels[1],\n                                       out_channel=out_channels[1])\n        self.conv3 = conv_block(in_channels[2],\n                                out_channels[2],\n                                kernel_size=3,\n                                stride=2)\n        
self.layer3 = self._make_layer(block,\n                                       layer_nums[2],\n                                       in_channel=out_channels[2],\n                                       out_channel=out_channels[2])\n        self.conv4 = conv_block(in_channels[3],\n                                out_channels[3],\n                                kernel_size=3,\n                                stride=2)\n        self.layer4 = self._make_layer(block,\n                                       layer_nums[3],\n                                       in_channel=out_channels[3],\n                                       out_channel=out_channels[3])\n        self.conv5 = conv_block(in_channels[4],\n                                out_channels[4],\n                                kernel_size=3,\n                                stride=2)\n        self.layer5 = self._make_layer(block,\n                                       layer_nums[4],\n                                       in_channel=out_channels[4],\n                                       out_channel=out_channels[4])\n\n    def _make_layer(self, block, layer_num, in_channel, out_channel):\n        \"\"\"\n        Make Layer for DarkNet.\n\n        :param block: Cell. DarkNet block.\n        :param layer_num: Integer. Layer number.\n        :param in_channel: Integer. Input channel.\n        :param out_channel: Integer. 
Output channel.\n\n        Examples:\n            _make_layer(ConvBlock, 1, 128, 256)\n        \"\"\"\n        layers = []\n        darkblk = block(in_channel, out_channel)\n        layers.append(darkblk)\n\n        for _ in range(1, layer_num):\n            darkblk = block(out_channel, out_channel)\n            layers.append(darkblk)\n\n        return nn.SequentialCell(layers)\n\n    def construct(self, x):\n        c1 = self.conv0(x)\n        c2 = self.conv1(c1)\n        c3 = self.layer1(c2)\n        c4 = self.conv2(c3)\n        c5 = self.layer2(c4)\n        c6 = self.conv3(c5)\n        c7 = self.layer3(c6)\n        c8 = self.conv4(c7)\n        c9 = self.layer4(c8)\n        c10 = self.conv5(c9)\n        c11 = self.layer5(c10)\n        if self.detect:\n            return c7, c9, c11\n\n        return c11\n\n    def get_out_channels(self):\n        return self.outchannel\n\ndef get_darknet53(detect=False):\n    \"\"\"\n    Get DarkNet53 neural network.\n\n    Returns:\n        Cell, cell instance of DarkNet53 neural network.\n\n    Examples:\n        darknet53()\n    \"\"\"\n    return DarkNet(ResidualBlock, [1, 2, 8, 8, 4],\n                   [32, 64, 128, 256, 512],\n                   [64, 128, 256, 512, 1024], detect)\n",[194],{"type":18,"tag":110,"props":195,"children":196},{"__ignoreMap":7},[197],{"type":24,"value":192},{"type":18,"tag":26,"props":199,"children":200},{},[201],{"type":24,"value":202},"（未完，请见 YOLOv3人体目标检测模型实现（二））",{"title":7,"searchDepth":204,"depth":204,"links":205},4,[206,208,213],{"id":34,"depth":207,"text":37},2,{"id":67,"depth":207,"text":70,"children":209},[210,212],{"id":79,"depth":211,"text":82},3,{"id":95,"depth":211,"text":98},{"id":145,"depth":207,"text":148,"children":214},[215],{"id":186,"depth":211,"text":189},"markdown","content:technology-blogs:zh:1628.md","content","technology-blogs/zh/1628.md","technology-blogs/zh/1628","md",1776506114566]
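As a supplement, the anchor clustering described in section 2.1 can be sketched in plain NumPy. This is a hypothetical re-implementation, not the attached `cluster.py`: the function name `kmeans_anchors` and its deterministic initialisation are my own, and it assumes the usual YOLO choice of 1 − IoU (between corner-aligned boxes) as the clustering distance.

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100):
    """Cluster (width, height) pairs into k anchor sizes, using 1 - IoU of
    corner-aligned boxes as the distance (the usual YOLO choice).

    Hypothetical sketch; not the attached cluster.py."""
    wh = np.asarray(wh, dtype=np.float64)
    # Deterministic init: spread the initial centres across the range of box areas.
    order = np.argsort(wh[:, 0] * wh[:, 1])
    centers = wh[order[np.linspace(0, len(wh) - 1, k).astype(int)]]
    for _ in range(iters):
        # IoU of every box against every centre, both anchored at the origin.
        inter = (np.minimum(wh[:, None, 0], centers[None, :, 0]) *
                 np.minimum(wh[:, None, 1], centers[None, :, 1]))
        union = wh[:, None, 0] * wh[:, None, 1] + centers[None, :, 0] * centers[None, :, 1] - inter
        assign = np.argmax(inter / union, axis=1)      # nearest centre = highest IoU
        new_centers = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]  # sorted by area
```

With k=9 the sorted centres map naturally onto the three detection scales, three anchors each, as described in section 2.1.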
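The two small operations mentioned in section 2.2 — converting the `[x y w h]` boxes returned by `VOCDataset` into `[xmin ymin xmax ymax]`, and assigning a ground-truth box to the anchor with the highest IoU — can be illustrated with two NumPy helpers. These are hypothetical names of my own, not functions from the attached `transforms.py`, and they assume `[x, y]` is the box's top-left corner.

```python
import numpy as np

def xywh_to_corners(boxes):
    """[x, y, w, h] (top-left corner plus size, assumed format) ->
    [xmin, ymin, xmax, ymax]."""
    boxes = np.asarray(boxes, dtype=np.float32)
    out = boxes.copy()
    out[:, 2] = boxes[:, 0] + boxes[:, 2]
    out[:, 3] = boxes[:, 1] + boxes[:, 3]
    return out

def best_anchor(box_wh, anchors):
    """Index of the anchor with the highest IoU against a (w, h) ground-truth
    box, with both centred at the origin so that only shape matters."""
    anchors = np.asarray(anchors, dtype=np.float32)
    inter = np.minimum(box_wh[0], anchors[:, 0]) * np.minimum(box_wh[1], anchors[:, 1])
    union = box_wh[0] * box_wh[1] + anchors[:, 0] * anchors[:, 1] - inter
    return int(np.argmax(inter / union))
```

The anchor index returned by `best_anchor` determines both which of the three output scales a ground-truth box lands in and which anchor slot it occupies there, which is what `_preprocess_true_boxes` does over whole label arrays.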
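Finally, a quick sanity check on the backbone's geometry in section 3.1: `conv1` through `conv5` each downsample by 2, so with `detect=True` the three returned feature maps `c7`, `c9`, `c11` have overall strides 8, 16 and 32, with channel counts taken from the `out_channels` list. The helper below is my own, assuming a square input whose side is a multiple of 32.

```python
def yolo_feature_maps(input_size=416, out_channels=(64, 128, 256, 512, 1024)):
    """(height, width, channels) of the three maps DarkNet53 returns with
    detect=True: c7, c9, c11 sit after 3, 4 and 5 stride-2 convs."""
    return [(input_size // s, input_size // s, c)
            for s, c in zip((8, 16, 32), out_channels[2:])]
```

For a 416×416 input this gives 52×52×256, 26×26×512 and 13×13×1024 — the three scales that the `"bbox1"`–`"bbox3"` label columns in section 2.2 correspond to.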