[{"data":1,"prerenderedAt":2533},["ShallowReactive",2],{"content-query-dMdHoFCJLf":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":2527,"_id":2528,"_source":2529,"_file":2530,"_stem":2531,"_extension":2532},"/news/en/2764","en",false,"","Introduction to a New Method for Edge-Cloud Collaborative Training for Privacy Protection","To address the three main issues of the FedAvg algorithm, we propose the MistNet algorithm.","2022-12-16","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/50749225e71546aeaa6f5e291e4699f4.png","news",{"type":14,"children":15,"toc":2524},"root",[16,24,30,35,40,53,58,69,80,89,94,99,104,109,117,122,127,132,137,142,150,158,163,171,176,184,189,202,207,215,220,225,233,255,267,274,279,284,292,297,305,315,323,328,333,338,343,348,353,358,363,368,373,378,383,388,393,398,403,408,413,418,423,428,432,437,442,447,452,457,462,467,472,476,481,486,491,496,501,506,511,519,524,529,534,539,544,549,554,559,564,568,573,578,583,588,593,598,603,608,613,618,623,628,633,638,643,648,653,658,663,668,673,678,683,688,693,698,703,708,713,718,723,728,733,738,743,748,752,757,762,767,772,776,780,784,789,794,799,804,809,813,818,823,828,832,837,842,846,851,856,861,866,871,876,884,889,894,899,904,909,913,918,922,927,932,937,942,947,952,957,961,966,971,976,980,985,990,995,1000,1005,1010,1015,1020,1028,1033,1038,1043,1048,1053,1057,1062,1067,1072,1077,1082,1087,1092,1097,1102,1107,1112,1117,1122,1127,1132,1137,1142,1147,1152,1157,1162,1167,1172,1177,1182,1187,1192,1197,1202,1207,1212,1217,1222,1227,1232,1237,1241,1246,1251,1256,1261,1265,1270,1275,1280,1285,1290,1294,1299,1304,1309,1314,1318,1323,1328,1333,1338,1343,1348,1353,1358,1363,1368,1373,1378,1383,1388,1393,1398,1403,1408,1413,1418,1423,1428,1433,1437,1442,1446,1450,1455,1459,1464,1469,1474,1478,1483,1488,1493,1498,1503,1508,1513,1518,1523,1528,1533,1538,1543,1548,1553,1558,1563,1568,1573,1578,1582,1587,1591,15
96,1601,1606,1611,1616,1621,1626,1631,1636,1641,1645,1650,1655,1660,1665,1670,1675,1680,1685,1690,1695,1700,1704,1709,1714,1719,1724,1729,1733,1738,1743,1748,1753,1758,1763,1768,1773,1778,1783,1788,1793,1798,1803,1808,1813,1818,1823,1828,1833,1838,1843,1848,1853,1858,1863,1868,1873,1878,1883,1888,1893,1898,1903,1908,1913,1918,1923,1928,1933,1938,1943,1948,1956,1961,1966,1971,1976,1981,1986,1991,1996,2001,2006,2011,2016,2021,2026,2031,2036,2041,2046,2051,2056,2061,2066,2070,2075,2080,2085,2090,2095,2100,2105,2110,2115,2120,2125,2130,2135,2139,2143,2147,2151,2155,2159,2164,2168,2172,2176,2181,2185,2189,2194,2199,2203,2207,2211,2215,2220,2225,2230,2234,2238,2242,2246,2250,2254,2259,2263,2268,2273,2277,2282,2287,2292,2296,2300,2305,2310,2315,2320,2325,2330,2335,2340,2344,2348,2352,2357,2361,2365,2370,2374,2379,2384,2388,2392,2396,2400,2404,2409,2413,2418,2423,2428,2433,2438,2443,2448,2453,2458,2463,2468,2473,2478,2483,2488,2493,2497,2502,2506,2514,2519],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"introduction-to-a-new-method-for-edge-cloud-collaborative-training-for-privacy-protection",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"Authors: Wang Sen, Wang Peng, Yao Xin, Cui Jinkai, Hu Qintao, Chen Renhai, Zhang Gong | Organization: Theory Lab, 2012 Laboratories",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":23,"value":34},"Paper Title",{"type":17,"tag":25,"props":36,"children":37},{},[38],{"type":23,"value":39},"MistNet: Towards Private Neural Network Training with Local Differential Privacy",{"type":17,"tag":25,"props":41,"children":42},{},[43,45],{"type":23,"value":44},"Paper URL: ",{"type":17,"tag":46,"props":47,"children":51},"a",{"href":48,"rel":49},"https://github.com/TL-System/plato/blob/main/docs/papers/MistNet.pdf",[50],"nofollow",[52],{"type":23,"value":48},{"type":17,"tag":25,"props":54,"children":55},{},[56],{"type":23,"value":57},"Code 
URLs",{"type":17,"tag":25,"props":59,"children":60},{},[61,63],{"type":23,"value":62},"Plato: ",{"type":17,"tag":46,"props":64,"children":66},{"href":65},"https://github.com/TL-System/plato",[67],{"type":23,"value":68},"https://github.com/TL-System/plato",{"type":17,"tag":25,"props":70,"children":71},{},[72,74],{"type":23,"value":73},"Sedna: ",{"type":17,"tag":46,"props":75,"children":78},{"href":76,"rel":77},"https://github.com/kubeedge/sedna",[50],[79],{"type":23,"value":76},{"type":17,"tag":25,"props":81,"children":82},{},[83],{"type":17,"tag":84,"props":85,"children":86},"strong",{},[87],{"type":23,"value":88},"01 Research Background",{"type":17,"tag":25,"props":90,"children":91},{},[92],{"type":23,"value":93},"Since Google first proposed federated learning in the edge AI field, it has been a rapidly developing topic in both academia and industry. Two major challenges in edge AI are data heterogeneity and data privacy, which can be addressed by applying federated learning to edge computing. FedAvg, an algorithm used in federated learning, selects clients to participate in training during each round. This reduces the communication pressure and avoids unreliable communication. Additionally, clients only need to upload training gradients, which helps prevent the leakage of user data. 
Nevertheless, FedAvg still faces three main bottlenecks:",{"type":17,"tag":25,"props":95,"children":96},{},[97],{"type":23,"value":98},"(1) As the size of the model increases, the volume of data transmitted surges, which can become a bottleneck that hinders system performance.",{"type":17,"tag":25,"props":100,"children":101},{},[102],{"type":23,"value":103},"(2) Research in deep learning has shown that gradients can still contain information about the native data, allowing attackers to potentially infer users' private data.",{"type":17,"tag":25,"props":105,"children":106},{},[107],{"type":23,"value":108},"(3) Edge computing capabilities vary greatly, with some devices being unable to complete the training process or slowing down the synchronization progress of federated learning due to insufficient computing power.",{"type":17,"tag":25,"props":110,"children":111},{},[112],{"type":17,"tag":84,"props":113,"children":114},{},[115],{"type":23,"value":116},"02 Paper Abstract",{"type":17,"tag":25,"props":118,"children":119},{},[120],{"type":23,"value":121},"To address the three main issues of the FedAvg algorithm, we propose the MistNet algorithm. This algorithm divides a pre-trained DNN model into two parts: a feature extractor at the edge side and a classifier on the cloud. Deep learning training rules show that new data rarely updates the parameters of the feature extractor, but does update the parameters of the classifier. As a result, we keep the edge-side parameters fixed and use the feature extractor to process input data and obtain corresponding representation data. Then we send the representation data from the client to the server, and train the classifier on the cloud. The MistNet algorithm has been optimized according to the following edge scenarios:",{"type":17,"tag":25,"props":123,"children":124},{},[125],{"type":23,"value":126},"(1) Reduces the volume of network transmission required for communication between the edge and the cloud. 
Instead of performing multiple rounds of gradient transmission between the cloud and edge, as is done in traditional federated learning, the extracted representation data is transmitted to the cloud for aggregated training. This reduces the frequency of network transmission between the cloud and edge, thereby reducing the overall volume of data transmitted for communication between the two.",{"type":17,"tag":25,"props":128,"children":129},{},[130],{"type":23,"value":131},"(2) Enhances privacy protection by quantizing, adding noise to, compressing, and perturbing the representation data. This makes it more difficult to infer the original data from the representation data on the cloud, thereby increasing the level of privacy protection for the data.",{"type":17,"tag":25,"props":133,"children":134},{},[135],{"type":23,"value":136},"(3) Reduces computing resource requirements on the edge side by segmenting the pre-trained model and using the first several layers as a feature extractor, thereby reducing computing workloads on the client. The process of extracting features on the edge can be considered as an inference process, which allows federated learning to be completed using edge-side hardware that has only inference capabilities.",{"type":17,"tag":25,"props":138,"children":139},{},[140],{"type":23,"value":141},"Experiments have shown that the MistNet algorithm can significantly reduce communication overheads and edge computing workloads compared to the FedAvg algorithm, with reductions of up to five times and ten times, respectively. 
Additionally, the training accuracy of the MistNet algorithm is better than that of FedAvg, with an improvement in convergence efficiency for automatic training in object detection tasks of up to 30%.",{"type":17,"tag":25,"props":143,"children":144},{},[145],{"type":17,"tag":84,"props":146,"children":147},{},[148],{"type":23,"value":149},"03 Algorithm Framework and Technical Key Points",{"type":17,"tag":25,"props":151,"children":152},{},[153],{"type":17,"tag":84,"props":154,"children":155},{},[156],{"type":23,"value":157},"Technical Key Point 1: Model Segmentation and Representation Migration",{"type":17,"tag":25,"props":159,"children":160},{},[161],{"type":23,"value":162},"By utilizing the migration feature of the first several layers of a deep neural network, the server can train a model using existing data from a related or similar field and extract the first several layers to use as a feature extractor. The client can then obtain the feature extractor from a secure third party or server and randomly select the feature extractor and local data for fine-tuning.",{"type":17,"tag":25,"props":164,"children":165},{},[166],{"type":17,"tag":167,"props":168,"children":170},"img",{"alt":7,"src":169},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/729d5823da0a46419aeeaae18918f8f9.png",[],{"type":17,"tag":25,"props":172,"children":173},{},[174],{"type":23,"value":175},"Figure 1: Schematic diagram of feature extraction",{"type":17,"tag":25,"props":177,"children":178},{},[179],{"type":17,"tag":84,"props":180,"children":181},{},[182],{"type":23,"value":183},"Technical Key Point 2: Quantization Solution for Representation Data",{"type":17,"tag":25,"props":185,"children":186},{},[187],{"type":23,"value":188},"The communication volume can be effectively reduced by quantizing and compressing the representation data at the middle layer. An extreme solution is to use 1-bit quantization on the output of the activation function. 
Although this causes most of the representation data content to be lost, it effectively prevents information leakage.",{"type":17,"tag":25,"props":190,"children":191},{},[192,196,198],{"type":17,"tag":167,"props":193,"children":195},{"alt":7,"src":194},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/a4dd09b6ef51494294a818140261cb1e.png",[],{"type":23,"value":197}," ",{"type":17,"tag":167,"props":199,"children":201},{"alt":7,"src":200},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/09f088b3298f49b2b964a3cb7cfa4aab.png",[],{"type":17,"tag":25,"props":203,"children":204},{},[205],{"type":23,"value":206},"Figure 4: One-click deployment of the edge-cloud collaborative training framework for privacy protection on the Sedna platform",{"type":17,"tag":25,"props":208,"children":209},{},[210],{"type":17,"tag":84,"props":211,"children":212},{},[213],{"type":23,"value":214},"Software and Hardware",{"type":17,"tag":25,"props":216,"children":217},{},[218],{"type":23,"value":219},"Hardware: Atlas 800 (9000) + Atlas 500 (3000)",{"type":17,"tag":25,"props":221,"children":222},{},[223],{"type":23,"value":224},"Software: Ubuntu 18.04.5 LTS x86_64 + EulerOS V2R8 + CANN 5.0.2 + KubeEdge 1.8.2 + Sedna 0.4.0",{"type":17,"tag":25,"props":226,"children":227},{},[228],{"type":17,"tag":84,"props":229,"children":230},{},[231],{"type":23,"value":232},"Test 
Results",{"type":17,"tag":25,"props":234,"children":235},{},[236,240,241,245,246,250,251],{"type":17,"tag":167,"props":237,"children":239},{"alt":7,"src":238},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/ddb0b6a948c9417499de7939dd58b016.png",[],{"type":23,"value":197},{"type":17,"tag":167,"props":242,"children":244},{"alt":7,"src":243},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/2d096f6d73294bc7ba9874759c47c3fc.png",[],{"type":23,"value":197},{"type":17,"tag":167,"props":247,"children":249},{"alt":7,"src":248},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/f89fb2592ccf4ed7b38590868f837252.png",[],{"type":23,"value":197},{"type":17,"tag":167,"props":252,"children":254},{"alt":7,"src":253},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/a274fe7e1b2d43e08977e5d08c27bdc1.png",[],{"type":17,"tag":25,"props":256,"children":257},{},[258,260,265],{"type":23,"value":259},"(3) The complex model has a stronger resistance to noise. For 1.3% and 5.8% feature extractors, a good balance between privacy protection and precision is achieved when ",{"type":17,"tag":84,"props":261,"children":262},{},[263],{"type":23,"value":264},"Ɛ",{"type":23,"value":266}," is 1.",{"type":17,"tag":25,"props":268,"children":269},{},[270],{"type":17,"tag":167,"props":271,"children":273},{"alt":7,"src":272},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/1102e37b87d2420d8d7d64657365c528.png",[],{"type":17,"tag":25,"props":275,"children":276},{},[277],{"type":23,"value":278},"Figure 7. Defense effect against model inversion attacks.",{"type":17,"tag":25,"props":280,"children":281},{},[282],{"type":23,"value":283},"We perform white-box tests to simulate model inversion attacks and use SSIM to verify the effect. If the SSIM is less than 0.3, the original image cannot be identified. 
As shown in the preceding figure, most feature extractors can effectively defend against model inversion attacks after using 1-bit quantization and LDP.",{"type":17,"tag":25,"props":285,"children":286},{},[287],{"type":17,"tag":84,"props":288,"children":289},{},[290],{"type":23,"value":291},"05 Code Implementation of NPU + MindSpore + YOLOv5",{"type":17,"tag":25,"props":293,"children":294},{},[295],{"type":23,"value":296},"The code mainly includes the modules for data loading, network design, data privacy protection design, loss function design, and trainer design.",{"type":17,"tag":25,"props":298,"children":299},{},[300],{"type":17,"tag":84,"props":301,"children":302},{},[303],{"type":23,"value":304},"Data loading module:",{"type":17,"tag":306,"props":307,"children":309},"pre",{"code":308},"def _has_only_empty_bbox(anno):\n    return all(any(o \u003C= 1 for o in obj[\"bbox\"][2:]) for obj in anno)\n\n\ndef _count_visible_keypoints(anno):\n    return sum(sum(1 for v in ann[\"keypoints\"][2::3] if v > 0) for ann in anno)\n\n\ndef has_valid_annotation(anno):\n    \"\"\"Check annotation file.\"\"\"\n    # if it's empty, there is no annotation\n    if not anno:\n        return False\n    # if all boxes have close to zero area, there is no annotation\n    if _has_only_empty_bbox(anno):\n        return False\n    # keypoints task have a slight different criteria for considering\n    # if an annotation is valid\n    if \"keypoints\" not in anno[0]:\n        return True\n    # for keypoint detection tasks, only consider valid images those\n    # containing at least min_keypoints_per_image\n    if _count_visible_keypoints(anno) >= min_keypoints_per_image:\n        return True\n    return False\n\n\nclass COCOYoloDataset:\n    \"\"\"YOLOV5 Dataset for COCO.\"\"\"\n    def __init__(self, root, ann_file, remove_images_without_annotations=True,\n                 filter_crowd_anno=True, is_training=True):\n        self.coco = COCO(ann_file)\n        self.root = root\n        
self.img_ids = list(sorted(self.coco.imgs.keys()))\n        self.filter_crowd_anno = filter_crowd_anno\n        self.is_training = is_training\n        self.mosaic = True\n        # filter images without any annotations\n        if remove_images_without_annotations:\n            img_ids = []\n            for img_id in self.img_ids:\n                ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=None)\n                anno = self.coco.loadAnns(ann_ids)\n                if has_valid_annotation(anno):\n                    img_ids.append(img_id)\n            self.img_ids = img_ids\n\n        self.categories = {cat[\"id\"]: cat[\"name\"] for cat in self.coco.cats.values()}\n\n        self.cat_ids_to_continuous_ids = {\n            v: i for i, v in enumerate(self.coco.getCatIds())\n        }\n        self.continuous_ids_cat_ids = {\n            v: k for k, v in self.cat_ids_to_continuous_ids.items()\n        }\n        self.count = 0\n\n    def _mosaic_preprocess(self, index, input_size):\n        labels4 = []\n        s = 384\n        self.mosaic_border = [-s // 2, -s // 2]\n        yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border]\n        indices = [index] + [random.randint(0, len(self.img_ids) - 1) for _ in range(3)]\n        for i, img_ids_index in enumerate(indices):\n            coco = self.coco\n            img_id = self.img_ids[img_ids_index]\n            img_path = coco.loadImgs(img_id)[0][\"file_name\"]\n            img = Image.open(os.path.join(self.root, img_path)).convert(\"RGB\")\n            img = np.array(img)\n            h, w = img.shape[:2]\n\n            if i == 0:  # top left\n                img4 = np.full((s * 2, s * 2, img.shape[2]), 128, dtype=np.uint8)  # base image with 4 tiles\n                x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc  # xmin, ymin, xmax, ymax (large image)\n                x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h  # xmin, ymin, xmax, ymax (small image)\n      
      elif i == 1:  # top right\n                x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc\n                x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h\n            elif i == 2:  # bottom left\n                x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)\n                x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)\n            elif i == 3:  # bottom right\n                x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)\n                x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)\n\n            img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b]  # img4[ymin:ymax, xmin:xmax]\n\n            padw = x1a - x1b\n            padh = y1a - y1b\n\n            ann_ids = coco.getAnnIds(imgIds=img_id)\n            target = coco.loadAnns(ann_ids)\n            # filter crowd annotations\n            if self.filter_crowd_anno:\n                annos = [anno for anno in target if anno[\"iscrowd\"] == 0]\n            else:\n                annos = [anno for anno in target]\n\n            target = {}\n            boxes = [anno[\"bbox\"] for anno in annos]\n            target[\"bboxes\"] = boxes\n\n            classes = [anno[\"category_id\"] for anno in annos]\n            classes = [self.cat_ids_to_continuous_ids[cl] for cl in classes]\n            target[\"labels\"] = classes\n\n            bboxes = target['bboxes']\n            labels = target['labels']\n            out_target = []\n\n            for bbox, label in zip(bboxes, labels):\n                tmp = []\n                # convert to [x_min y_min x_max y_max]\n                bbox = self._convetTopDown(bbox)\n                tmp.extend(bbox)\n                tmp.append(int(label))\n                # tmp [x_min y_min x_max y_max, label]\n                out_target.append(tmp)  # out_target indicates the actual width and height of label, which corresponds to the actual measurement values of the image.\n\n            
labels = out_target.copy()\n            labels = np.array(labels)\n            out_target = np.array(out_target)\n\n            labels[:, 0] = out_target[:, 0] + padw\n            labels[:, 1] = out_target[:, 1] + padh\n            labels[:, 2] = out_target[:, 2] + padw\n            labels[:, 3] = out_target[:, 3] + padh\n            labels4.append(labels)\n\n        if labels4:\n            labels4 = np.concatenate(labels4, 0)\n            np.clip(labels4[:, :4], 0, 2 * s, out=labels4[:, :4])  # use with random_perspective\n        flag = np.array([1])\n        return img4, labels4, input_size, flag\n\n    def __getitem__(self, index):\n        \"\"\"\n        Args:\n            index (int): Index\n\n        Returns:\n            (img, target) (tuple): target is a dictionary contains \"bbox\", \"segmentation\" or \"keypoints\",\n                generated by the image's annotation. img is a PIL image.\n        \"\"\"\n        coco = self.coco\n        img_id = self.img_ids[index]\n        img_path = coco.loadImgs(img_id)[0][\"file_name\"]\n        if not self.is_training:\n            img = Image.open(os.path.join(self.root, img_path)).convert(\"RGB\")\n            return img, img_id\n\n        input_size = [640, 640]\n        if self.mosaic and random.random() \u003C 0.5:\n            return self._mosaic_preprocess(index, input_size)\n        img = np.fromfile(os.path.join(self.root, img_path), dtype='int8')\n        ann_ids = coco.getAnnIds(imgIds=img_id)\n        target = coco.loadAnns(ann_ids)\n        # filter crowd annotations\n        if self.filter_crowd_anno:\n            annos = [anno for anno in target if anno[\"iscrowd\"] == 0]\n        else:\n            annos = [anno for anno in target]\n\n        target = {}\n        boxes = [anno[\"bbox\"] for anno in annos]\n        target[\"bboxes\"] = boxes\n\n        classes = [anno[\"category_id\"] for anno in annos]\n        classes = [self.cat_ids_to_continuous_ids[cl] for cl in classes]\n        
target[\"labels\"] = classes\n\n        bboxes = target['bboxes']\n        labels = target['labels']\n        out_target = []\n        for bbox, label in zip(bboxes, labels):\n            tmp = []\n            # convert to [x_min y_min x_max y_max]\n            bbox = self._convetTopDown(bbox)\n            tmp.extend(bbox)\n            tmp.append(int(label))\n            # tmp [x_min y_min x_max y_max, label]\n            out_target.append(tmp)\n        flag = np.array([0])\n        return img, out_target, input_size, flag\n\n    def __len__(self):\n        return len(self.img_ids)\n\n    def _convetTopDown(self, bbox):\n        x_min = bbox[0]\n        y_min = bbox[1]\n        w = bbox[2]\n        h = bbox[3]\n        return [x_min, y_min, x_min+w, y_min+h]\n\n\ndef create_yolo_dataset(image_dir, anno_path, batch_size, max_epoch, device_num, rank,\n                        config=None, is_training=True, shuffle=True):\n    \"\"\"Create dataset for YOLOV5.\"\"\"\n    cv2.setNumThreads(0)\n    de.config.set_enable_shared_mem(True)\n    if is_training:\n        filter_crowd = True\n        remove_empty_anno = True\n    else:\n        filter_crowd = False\n        remove_empty_anno = False\n\n    yolo_dataset = COCOYoloDataset(root=image_dir, ann_file=anno_path, filter_crowd_anno=filter_crowd,\n                                   remove_images_without_annotations=remove_empty_anno, is_training=is_training)\n    distributed_sampler = DistributedSampler(len(yolo_dataset), device_num, rank, shuffle=shuffle)\n    yolo_dataset.size = len(distributed_sampler)\n    hwc_to_chw = CV.HWC2CHW()\n\n    config.dataset_size = len(yolo_dataset)\n    cores = multiprocessing.cpu_count()\n    num_parallel_workers = int(cores / device_num)\n    if is_training:\n        multi_scale_trans = MultiScaleTrans(config, device_num)\n        yolo_dataset.transforms = multi_scale_trans\n\n        dataset_column_names = [\"image\", \"annotation\", \"input_size\", \"mosaic_flag\"]\n        
output_column_names = [\"image\", \"annotation\", \"bbox1\", \"bbox2\", \"bbox3\",\n                               \"gt_box1\", \"gt_box2\", \"gt_box3\"]\n        map1_out_column_names = [\"image\", \"annotation\", \"size\"]\n        map2_in_column_names = [\"annotation\", \"size\"]\n        map2_out_column_names = [\"annotation\", \"bbox1\", \"bbox2\", \"bbox3\",\n                                 \"gt_box1\", \"gt_box2\", \"gt_box3\"]\n\n        ds = de.GeneratorDataset(yolo_dataset, column_names=dataset_column_names, sampler=distributed_sampler,\n                                 python_multiprocessing=True, num_parallel_workers=min(4, num_parallel_workers))\n        ds = ds.map(operations=multi_scale_trans, input_columns=dataset_column_names,\n                    output_columns=map1_out_column_names, column_order=map1_out_column_names,\n                    num_parallel_workers=min(12, num_parallel_workers), python_multiprocessing=True)\n        ds = ds.map(operations=PreprocessTrueBox(config), input_columns=map2_in_column_names,\n                    output_columns=map2_out_column_names, column_order=output_column_names,\n                    num_parallel_workers=min(4, num_parallel_workers), python_multiprocessing=False)\n        mean = [m * 255 for m in [0.485, 0.456, 0.406]]\n        std = [s * 255 for s in [0.229, 0.224, 0.225]]\n        ds = ds.map([CV.Normalize(mean, std),\n                     hwc_to_chw], num_parallel_workers=min(4, num_parallel_workers))\n\n        def concatenate(images):\n            images = np.concatenate((images[..., ::2, ::2], images[..., 1::2, ::2],\n                                     images[..., ::2, 1::2], images[..., 1::2, 1::2]), axis=0)\n            return images\n        ds = ds.map(operations=concatenate, input_columns=\"image\", num_parallel_workers=min(4, num_parallel_workers))\n        ds = ds.batch(batch_size, num_parallel_workers=min(4, num_parallel_workers), drop_remainder=True)\n    else:\n        ds = 
de.GeneratorDataset(yolo_dataset, column_names=[\"image\", \"img_id\"],\n                                 sampler=distributed_sampler)\n        compose_map_func = (lambda image, img_id: reshape_fn(image, img_id, config))\n        ds = ds.map(operations=compose_map_func, input_columns=[\"image\", \"img_id\"],\n                    output_columns=[\"image\", \"image_shape\", \"img_id\"],\n                    column_order=[\"image\", \"image_shape\", \"img_id\"],\n                    num_parallel_workers=8)\n        ds = ds.map(operations=hwc_to_chw, input_columns=[\"image\"], num_parallel_workers=8)\n        ds = ds.batch(batch_size, drop_remainder=True)\n    ds = ds.repeat(max_epoch)\n    return ds, len(yolo_dataset)\n",[310],{"type":17,"tag":311,"props":312,"children":313},"code",{"__ignoreMap":7},[314],{"type":23,"value":308},{"type":17,"tag":25,"props":316,"children":317},{},[318],{"type":17,"tag":84,"props":319,"children":320},{},[321],{"type":23,"value":322},"Network design module:",{"type":17,"tag":25,"props":324,"children":325},{},[326],{"type":23,"value":327},"BackBone is divided into two parts, one on the client and the other on the server.",{"type":17,"tag":25,"props":329,"children":330},{},[331],{"type":23,"value":332},"class YOLOv5Backbone_from(nn.Cell):",{"type":17,"tag":25,"props":334,"children":335},{},[336],{"type":23,"value":337},"def __init__(self):",{"type":17,"tag":25,"props":339,"children":340},{},[341],{"type":23,"value":342},"super(YOLOv5Backbone_from, self).__init__()",{"type":17,"tag":25,"props":344,"children":345},{},[346],{"type":23,"value":347},"self.tenser_to_array = P.TupleToArray()",{"type":17,"tag":25,"props":349,"children":350},{},[351],{"type":23,"value":352},"self.focusv2 = Focusv2(3, 32, k=3, s=1)",{"type":17,"tag":25,"props":354,"children":355},{},[356],{"type":23,"value":357},"self.conv1 = Conv(32, 64, k=3, s=2)",{"type":17,"tag":25,"props":359,"children":360},{},[361],{"type":23,"value":362},"self.C31 = C3(64, 64, 
n=1)",{"type":17,"tag":25,"props":364,"children":365},{},[366],{"type":23,"value":367},"self.conv2 = Conv(64, 128, k=3, s=2)",{"type":17,"tag":25,"props":369,"children":370},{},[371],{"type":23,"value":372},"def construct(self, x, input_shape):",{"type":17,"tag":25,"props":374,"children":375},{},[376],{"type":23,"value":377},"\"\"\"construct method\"\"\"",{"type":17,"tag":25,"props":379,"children":380},{},[381],{"type":23,"value":382},"#img_hight = P.Shape()(x)[2] * 2",{"type":17,"tag":25,"props":384,"children":385},{},[386],{"type":23,"value":387},"#img_width = P.Shape()(x)[3] * 2",{"type":17,"tag":25,"props":389,"children":390},{},[391],{"type":23,"value":392},"input_shape = F.shape(x)[2:4]",{"type":17,"tag":25,"props":394,"children":395},{},[396],{"type":23,"value":397},"input_shape = F.cast(self.tenser_to_array(input_shape) * 2, ms.float32)",{"type":17,"tag":25,"props":399,"children":400},{},[401],{"type":23,"value":402},"fcs = self.focusv2(x)",{"type":17,"tag":25,"props":404,"children":405},{},[406],{"type":23,"value":407},"cv1 = self.conv1(fcs)",{"type":17,"tag":25,"props":409,"children":410},{},[411],{"type":23,"value":412},"bcsp1 = self.C31(cv1)",{"type":17,"tag":25,"props":414,"children":415},{},[416],{"type":23,"value":417},"cv2 = self.conv2(bcsp1)",{"type":17,"tag":25,"props":419,"children":420},{},[421],{"type":23,"value":422},"return cv2, input_shape",{"type":17,"tag":25,"props":424,"children":425},{},[426],{"type":23,"value":427},"class YOLOv5Backbone_to(nn.Cell):",{"type":17,"tag":25,"props":429,"children":430},{},[431],{"type":23,"value":337},{"type":17,"tag":25,"props":433,"children":434},{},[435],{"type":23,"value":436},"super(YOLOv5Backbone_to, self).__init__()",{"type":17,"tag":25,"props":438,"children":439},{},[440],{"type":23,"value":441},"self.C32 = C3(128, 128, n=3)",{"type":17,"tag":25,"props":443,"children":444},{},[445],{"type":23,"value":446},"self.conv3 = Conv(128, 256, k=3, 
        self.C33 = C3(256, 256, n=3)
        self.conv4 = Conv(256, 512, k=3, s=2)
        self.spp = SPP(512, 512, k=[5, 9, 13])
        self.C34 = C3(512, 512, n=1, shortcut=False)

    def construct(self, cv2):
        bcsp2 = self.C32(cv2)
        cv3 = self.conv3(bcsp2)
        bcsp3 = self.C33(cv3)
        cv4 = self.conv4(bcsp3)
        spp1 = self.spp(cv4)
        bcsp4 = self.C34(spp1)
        return bcsp2, bcsp3, bcsp4
```

Overall network architecture of the server:

```python
class YOLOV5s(nn.Cell):
    """
    YOLOV5 network.

    Args:
        is_training: Bool. Whether to train or not.

    Returns:
        Cell, cell instance of YOLOV5 neural network.

    Examples:
        YOLOV5s(True)
    """
    def __init__(self, is_training):
        super(YOLOV5s, self).__init__()
        self.config = ConfigYOLOV5()
        # YOLOv5 network
        self.feature_map = YOLOv5(backbone=YOLOv5Backbone_to(),
                                  out_channel=self.config.out_channel)
        # prediction on the default anchor boxes
        self.detect_1 = DetectionBlock('l', is_training=is_training)
        self.detect_2 = DetectionBlock('m', is_training=is_training)
        self.detect_3 = DetectionBlock('s', is_training=is_training)

    def construct(self, x, img_hight, img_width, input_shape):
        small_object_output, medium_object_output, big_object_output = self.feature_map(x, img_hight, img_width)
        output_big = self.detect_1(big_object_output, input_shape)
        output_me = self.detect_2(medium_object_output, input_shape)
        output_small = self.detect_3(small_object_output, input_shape)
        # big is the final output, which has the smallest feature map
        return output_big, output_me, output_small


class YOLOv5(nn.Cell):
    def __init__(self, backbone, out_channel):
        super(YOLOv5, self).__init__()
        self.out_channel = out_channel
        self.backbone = backbone
        self.conv1 = Conv(512, 256, k=1, s=1)  # 10
        self.C31 = C3(512, 256, n=1, shortcut=False)  # 11
        self.conv2 = Conv(256, 128, k=1, s=1)
        self.C32 = C3(256, 128, n=1, shortcut=False)  # 13
        self.conv3 = Conv(128, 128, k=3, s=2)
        self.C33 = C3(256, 256, n=1, shortcut=False)  # 15
        self.conv4 = Conv(256, 256, k=3, s=2)
        self.C34 = C3(512, 512, n=1, shortcut=False)  # 17
        self.backblock1 = YoloBlock(128, 255)
        self.backblock2 = YoloBlock(256, 255)
        self.backblock3 = YoloBlock(512, 255)
        self.concat = P.Concat(axis=1)

    def construct(self, x, img_hight, img_width):
        """
        input_shape of x is (batch_size, 3, h, w)
        feature_map1 is (batch_size, backbone_shape[2], h/8, w/8)
        feature_map2 is (batch_size, backbone_shape[3], h/16, w/16)
        feature_map3 is (batch_size, backbone_shape[4], h/32, w/32)
        """
        backbone4, backbone6, backbone9 = self.backbone(x)
        cv1 = self.conv1(backbone9)  # 10
        ups1 = P.ResizeNearestNeighbor((img_hight / 16, img_width / 16))(cv1)
        concat1 = self.concat((ups1, backbone6))
        bcsp1 = self.C31(concat1)  # 13
        cv2 = self.conv2(bcsp1)
        ups2 = P.ResizeNearestNeighbor((img_hight / 8, img_width / 8))(cv2)  # 15
        concat2 = self.concat((ups2, backbone4))
        bcsp2 = self.C32(concat2)  # 17
        cv3 = self.conv3(bcsp2)
        concat3 = self.concat((cv3, cv2))
        bcsp3 = self.C33(concat3)  # 20
        cv4 = self.conv4(bcsp3)
        concat4 = self.concat((cv4, cv1))
        bcsp4 = self.C34(concat4)  # 23
        small_object_output = self.backblock1(bcsp2)  # h/8, w/8
        medium_object_output = self.backblock2(bcsp3)  # h/16, w/16
        big_object_output = self.backblock3(bcsp4)  # h/32, w/32
        return small_object_output, medium_object_output, big_object_output
```
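As a sanity check on the shape bookkeeping in the `construct` docstring above, the three detection branches sit at strides 8, 16 and 32 of the input resolution. A minimal pure-Python sketch (`head_output_shapes` is an illustrative helper, not part of the project):

```python
def head_output_shapes(h, w):
    """Spatial sizes of the three detection branches for an h x w input,
    following the stride-8/16/32 layout in the docstring above."""
    return {"small": (h // 8, w // 8),    # small-object branch, h/8 x w/8
            "medium": (h // 16, w // 16), # medium-object branch
            "big": (h // 32, w // 32)}    # big-object branch, smallest map

# e.g. head_output_shapes(640, 640)["big"] == (20, 20)
```

This also explains the two `ResizeNearestNeighbor` calls: they upsample the stride-32 and stride-16 maps so they can be concatenated with the shallower backbone features.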
Data privacy protection design module:

```python
def encode_1b(x):
    x[(x <= 0)] = 0
    x[(x > 0)] = 1
    return x


def randomize_1b(bit_tensor, epsilon):
    """
    The default unary encoding method is symmetric.
    """
    # assert isinstance(bit_tensor, tensor), 'the type of the input data does not match the expected type (tensor)'
    return symmetric_tensor_encoding_1b(bit_tensor, epsilon)


def symmetric_tensor_encoding_1b(bit_tensor, epsilon):
    p = mnp.exp(epsilon / 2) / (mnp.exp(epsilon / 2) + 1)
    q = 1 / (mnp.exp(epsilon / 2) + 1)
    return produce_random_response_1b(bit_tensor, p, q)


def produce_random_response_1b(bit_tensor, p, q=None):
    """
    Implements randomized response as the perturbation method.
    A uniform sample is used to build the Bernoulli draw, since no
    binomial sampling primitive is available here.
    """
    q = 1 - p if q is None else q
    uniformreal = mindspore.ops.UniformReal(seed=2)
    binomial = uniformreal(bit_tensor.shape)
    zeroslike = mindspore.ops.ZerosLike()
    oneslike = mindspore.ops.OnesLike()
    p_binomial = mnp.where(binomial > q, oneslike(bit_tensor), zeroslike(bit_tensor))
    q_binomial = mnp.where(binomial <= q, oneslike(bit_tensor), zeroslike(bit_tensor))
    return mnp.where(bit_tensor == 1, p_binomial, q_binomial)
```
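The probabilities used above are the standard symmetric randomized-response pair: a 1-bit is reported truthfully with probability p = e^(ε/2)/(e^(ε/2)+1), and a 0-bit is flipped to 1 with probability q = 1/(e^(ε/2)+1), so p + q = 1 and p/q = e^(ε/2). A minimal standard-library sketch of the same scheme (`rr_probs` and `randomize_bit` are illustrative names, not project APIs):

```python
import math
import random

def rr_probs(epsilon):
    # p: probability a 1-bit stays 1; q: probability a 0-bit flips to 1
    p = math.exp(epsilon / 2) / (math.exp(epsilon / 2) + 1)
    q = 1 / (math.exp(epsilon / 2) + 1)
    return p, q

def randomize_bit(bit, epsilon, rng=random):
    # uniform draw stands in for a Bernoulli sample, as in the module above
    p, q = rr_probs(epsilon)
    prob_one = p if bit == 1 else q
    return 1 if rng.random() < prob_one else 0
```

Larger ε makes p/q larger, i.e. the reported bit tracks the true bit more faithfully at the cost of weaker privacy.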
Loss function module:

```python
class YoloWithLossCell(nn.Cell):
    """YOLOV5 loss."""
    def __init__(self, network):
        super(YoloWithLossCell, self).__init__()
        self.yolo_network = network
        self.config = ConfigYOLOV5()
        self.loss_big = YoloLossBlock('l', self.config)
        self.loss_me = YoloLossBlock('m', self.config)
        self.loss_small = YoloLossBlock('s', self.config)

    def construct(self, x, y_true_0, y_true_1, y_true_2, gt_0, gt_1, gt_2, img_hight, img_width, input_shape):
        yolo_out = self.yolo_network(x, img_hight, img_width, input_shape)
        loss_l = self.loss_big(*yolo_out[0], y_true_0, gt_0, input_shape)
        loss_m = self.loss_me(*yolo_out[1], y_true_1, gt_1, input_shape)
        loss_s = self.loss_small(*yolo_out[2], y_true_2, gt_2, input_shape)
        return loss_l + loss_m + loss_s * 0.2


class TrainingWrapper(nn.Cell):
    """Training wrapper."""
    def __init__(self, network, optimizer, sens=1.0):
        super(TrainingWrapper, self).__init__(auto_prefix=False)
        self.network = network
        self.network.set_grad()
        self.weights = optimizer.parameters
        self.optimizer = optimizer
        self.grad = C.GradOperation(get_by_list=True, sens_param=True)
        self.sens = sens
        self.reducer_flag = False
        self.grad_reducer = None
        self.parallel_mode = context.get_auto_parallel_context("parallel_mode")
        if self.parallel_mode in [ParallelMode.DATA_PARALLEL, ParallelMode.HYBRID_PARALLEL]:
            self.reducer_flag = True
        if self.reducer_flag:
            mean = context.get_auto_parallel_context("gradients_mean")
            if auto_parallel_context().get_device_num_is_set():
                degree = context.get_auto_parallel_context("device_num")
            else:
                degree = get_group_size()
            self.grad_reducer = nn.DistributedGradReducer(optimizer.parameters, mean, degree)

    def construct(self, *args):
        weights = self.weights
        loss = self.network(*args)
        sens = P.Fill()(P.DType()(loss), P.Shape()(loss), self.sens)
        grads = self.grad(self.network, weights)(*args, sens)
        if self.reducer_flag:
            grads = self.grad_reducer(grads)
        return F.depend(loss, self.optimizer(grads))


class Giou(nn.Cell):
    """Calculate GIoU."""
    def __init__(self):
        super(Giou, self).__init__()
        self.cast = P.Cast()
        self.reshape = P.Reshape()
        self.min = P.Minimum()
        self.max = P.Maximum()
        self.concat = P.Concat(axis=1)
        self.mean = P.ReduceMean()
        self.div = P.RealDiv()
        self.eps = 0.000001

    def construct(self, box_p, box_gt):
        box_p_area = (box_p[..., 2:3] - box_p[..., 0:1]) * (box_p[..., 3:4] - box_p[..., 1:2])
        box_gt_area = (box_gt[..., 2:3] - box_gt[..., 0:1]) * (box_gt[..., 3:4] - box_gt[..., 1:2])
        x_1 = self.max(box_p[..., 0:1], box_gt[..., 0:1])
        x_2 = self.min(box_p[..., 2:3], box_gt[..., 2:3])
        y_1 = self.max(box_p[..., 1:2], box_gt[..., 1:2])
        y_2 = self.min(box_p[..., 3:4], box_gt[..., 3:4])
        intersection = (y_2 - y_1) * (x_2 - x_1)
        xc_1 = self.min(box_p[..., 0:1], box_gt[..., 0:1])
        xc_2 = self.max(box_p[..., 2:3], box_gt[..., 2:3])
        yc_1 = self.min(box_p[..., 1:2], box_gt[..., 1:2])
        yc_2 = self.max(box_p[..., 3:4], box_gt[..., 3:4])
        c_area = (xc_2 - xc_1) * (yc_2 - yc_1)
        union = box_p_area + box_gt_area - intersection
        union = union + self.eps
        c_area = c_area + self.eps
        iou = self.div(self.cast(intersection, ms.float32), self.cast(union, ms.float32))
        res_mid0 = c_area - union
        res_mid1 = self.div(self.cast(res_mid0, ms.float32), self.cast(c_area, ms.float32))
        giou = iou - res_mid1
        giou = C.clip_by_value(giou, -1.0, 1.0)
        return giou
```
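The `Giou` cell implements GIoU = IoU - (C - U)/C, where C is the area of the smallest box enclosing both inputs and U is their union. A scalar pure-Python cross-check (this sketch assumes corner-format `(x1, y1, x2, y2)` boxes and adds an explicit clamp for non-overlapping boxes, which the cell handles via masking elsewhere):

```python
def giou_xyxy(bp, bg):
    # areas of the two corner-format boxes
    area_p = (bp[2] - bp[0]) * (bp[3] - bp[1])
    area_g = (bg[2] - bg[0]) * (bg[3] - bg[1])
    # intersection, clamped to zero when the boxes do not overlap
    inter_w = max(min(bp[2], bg[2]) - max(bp[0], bg[0]), 0.0)
    inter_h = max(min(bp[3], bg[3]) - max(bp[1], bg[1]), 0.0)
    inter = inter_w * inter_h
    union = area_p + area_g - inter
    # smallest enclosing box C
    c_area = (max(bp[2], bg[2]) - min(bp[0], bg[0])) * (max(bp[3], bg[3]) - min(bp[1], bg[1]))
    return inter / union - (c_area - union) / c_area
```

Unlike plain IoU, the enclosing-box term keeps the loss informative even when boxes barely overlap, which is why GIoU (clipped to [-1, 1]) is used for the box-regression term below.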
```python
class Iou(nn.Cell):
    """Calculate the iou of boxes."""
    def __init__(self):
        super(Iou, self).__init__()
        self.min = P.Minimum()
        self.max = P.Maximum()

    def construct(self, box1, box2):
        """
        box1: pred_box [batch, gx, gy, anchors, 1, 4] -> 4: [x_center, y_center, w, h]
        box2: gt_box [batch, 1, 1, 1, maxbox, 4]
        convert to topLeft and rightDown
        """
        box1_xy = box1[:, :, :, :, :, :2]
        box1_wh = box1[:, :, :, :, :, 2:4]
        box1_mins = box1_xy - box1_wh / F.scalar_to_array(2.0)  # topLeft
        box1_maxs = box1_xy + box1_wh / F.scalar_to_array(2.0)  # rightDown
        box2_xy = box2[:, :, :, :, :, :2]
        box2_wh = box2[:, :, :, :, :, 2:4]
        box2_mins = box2_xy - box2_wh / F.scalar_to_array(2.0)
        box2_maxs = box2_xy + box2_wh / F.scalar_to_array(2.0)
        intersect_mins = self.max(box1_mins, box2_mins)
        intersect_maxs = self.min(box1_maxs, box2_maxs)
        intersect_wh = self.max(intersect_maxs - intersect_mins, F.scalar_to_array(0.0))
        # P.Squeeze: for efficient slicing
        intersect_area = P.Squeeze(-1)(intersect_wh[:, :, :, :, :, 0:1]) * \
                         P.Squeeze(-1)(intersect_wh[:, :, :, :, :, 1:2])
        box1_area = P.Squeeze(-1)(box1_wh[:, :, :, :, :, 0:1]) * P.Squeeze(-1)(box1_wh[:, :, :, :, :, 1:2])
        box2_area = P.Squeeze(-1)(box2_wh[:, :, :, :, :, 0:1]) * P.Squeeze(-1)(box2_wh[:, :, :, :, :, 1:2])
        iou = intersect_area / (box1_area + box2_area - intersect_area)
        # iou : [batch, gx, gy, anchors, maxboxes]
        return iou


class YoloLossBlock(nn.Cell):
    """
    Loss block cell of YOLOV5 network.
    """
    def __init__(self, scale, config=ConfigYOLOV5()):
        super(YoloLossBlock, self).__init__()
        self.config = config
        if scale == 's':
            # anchor mask
            idx = (0, 1, 2)
        elif scale == 'm':
            idx = (3, 4, 5)
        elif scale == 'l':
            idx = (6, 7, 8)
        else:
            raise KeyError("Invalid scale value for DetectionBlock")
        self.anchors = Tensor([self.config.anchor_scales[i] for i in idx], ms.float32)
        self.ignore_threshold = Tensor(self.config.ignore_threshold, ms.float32)
        self.concat = P.Concat(axis=-1)
        self.iou = Iou()
        self.reduce_max = P.ReduceMax(keep_dims=False)
        self.confidence_loss = ConfidenceLoss()
        self.class_loss = ClassLoss()
        self.reduce_sum = P.ReduceSum()
        self.giou = Giou()

    def construct(self, prediction, pred_xy, pred_wh, y_true, gt_box, input_shape):
        """
        prediction : origin output from yolo
        pred_xy: (sigmoid(xy)+grid)/grid_size
        pred_wh: (exp(wh)*anchors)/input_shape
        y_true : after normalize
        gt_box: [batch, maxboxes, xyhw] after normalize
        """
        object_mask = y_true[:, :, :, :, 4:5]
        class_probs = y_true[:, :, :, :, 5:]
        true_boxes = y_true[:, :, :, :, :4]
        grid_shape = P.Shape()(prediction)[1:3]
        grid_shape = P.Cast()(F.tuple_to_array(grid_shape[::-1]), ms.float32)
        pred_boxes = self.concat((pred_xy, pred_wh))
        true_wh = y_true[:, :, :, :, 2:4]
        true_wh = P.Select()(P.Equal()(true_wh, 0.0),
                             P.Fill()(P.DType()(true_wh),
                                      P.Shape()(true_wh), 1.0),
                             true_wh)
        true_wh = P.Log()(true_wh / self.anchors * input_shape)
        # 2 - w*h: down-weight large boxes, since small objects need more precise regression
        box_loss_scale = 2 - y_true[:, :, :, :, 2:3] * y_true[:, :, :, :, 3:4]
        gt_shape = P.Shape()(gt_box)
        gt_box = P.Reshape()(gt_box, (gt_shape[0], 1, 1, 1, gt_shape[1], gt_shape[2]))
        # add one more dimension for broadcast
        iou = self.iou(P.ExpandDims()(pred_boxes, -2), gt_box)
        # gt_box is x,y,h,w after normalize
        # [batch, grid[0], grid[1], num_anchor, num_gt]
        best_iou = self.reduce_max(iou, -1)
        # [batch, grid[0], grid[1], num_anchor]
        # ignore_mask: IOU too small
        ignore_mask = best_iou < self.ignore_threshold
        ignore_mask = P.Cast()(ignore_mask, ms.float32)
        ignore_mask = P.ExpandDims()(ignore_mask, -1)
        # Backprop through ignore_mask would spend a lot of time in MaximumGrad and
        # MinimumGrad, so we turn off its gradient.
        ignore_mask = F.stop_gradient(ignore_mask)
        confidence_loss = self.confidence_loss(object_mask, prediction[:, :, :, :, 4:5], ignore_mask)
        class_loss = self.class_loss(object_mask, prediction[:, :, :, :, 5:], class_probs)
        object_mask_me = P.Reshape()(object_mask, (-1, 1))  # [8, 72, 72, 3, 1]
        box_loss_scale_me = P.Reshape()(box_loss_scale, (-1, 1))
        pred_boxes_me = xywh2x1y1x2y2(pred_boxes)
        pred_boxes_me = P.Reshape()(pred_boxes_me, (-1, 4))
        true_boxes_me = xywh2x1y1x2y2(true_boxes)
        true_boxes_me = P.Reshape()(true_boxes_me, (-1, 4))
        ciou = self.giou(pred_boxes_me, true_boxes_me)
        ciou_loss = object_mask_me * box_loss_scale_me * (1 - ciou)
        ciou_loss_me = self.reduce_sum(ciou_loss, ())
        loss = ciou_loss_me * 4 + confidence_loss + class_loss
        batch_size = P.Shape()(prediction)[0]
        return loss / batch_size
```
gamma=0.1):",{"type":17,"tag":25,"props":1987,"children":1988},{},[1989],{"type":23,"value":1990},"\"\"\"Warmup step learning rate.\"\"\"",{"type":17,"tag":25,"props":1992,"children":1993},{},[1994],{"type":23,"value":1995},"base_lr = lr",{"type":17,"tag":25,"props":1997,"children":1998},{},[1999],{"type":23,"value":2000},"warmup_init_lr = 0",{"type":17,"tag":25,"props":2002,"children":2003},{},[2004],{"type":23,"value":2005},"total_steps = int(max_epoch * steps_per_epoch)",{"type":17,"tag":25,"props":2007,"children":2008},{},[2009],{"type":23,"value":2010},"warmup_steps = int(warmup_epochs * steps_per_epoch)",{"type":17,"tag":25,"props":2012,"children":2013},{},[2014],{"type":23,"value":2015},"milestones = lr_epochs",{"type":17,"tag":25,"props":2017,"children":2018},{},[2019],{"type":23,"value":2020},"milestones_steps = []",{"type":17,"tag":25,"props":2022,"children":2023},{},[2024],{"type":23,"value":2025},"for milestone in milestones:",{"type":17,"tag":25,"props":2027,"children":2028},{},[2029],{"type":23,"value":2030},"milestones_step = milestone * steps_per_epoch",{"type":17,"tag":25,"props":2032,"children":2033},{},[2034],{"type":23,"value":2035},"milestones_steps.append(milestones_step)",{"type":17,"tag":25,"props":2037,"children":2038},{},[2039],{"type":23,"value":2040},"lr_each_step = []",{"type":17,"tag":25,"props":2042,"children":2043},{},[2044],{"type":23,"value":2045},"lr = base_lr",{"type":17,"tag":25,"props":2047,"children":2048},{},[2049],{"type":23,"value":2050},"milestones_steps_counter = Counter(milestones_steps)",{"type":17,"tag":25,"props":2052,"children":2053},{},[2054],{"type":23,"value":2055},"for i in range(total_steps):",{"type":17,"tag":25,"props":2057,"children":2058},{},[2059],{"type":23,"value":2060},"if i \u003C warmup_steps:",{"type":17,"tag":25,"props":2062,"children":2063},{},[2064],{"type":23,"value":2065},"lr = linear_warmup_lr(i + 1, warmup_steps, base_lr, 
warmup_init_lr)",{"type":17,"tag":25,"props":2067,"children":2068},{},[2069],{"type":23,"value":1201},{"type":17,"tag":25,"props":2071,"children":2072},{},[2073],{"type":23,"value":2074},"lr = lr * gamma**milestones_steps_counter[i]",{"type":17,"tag":25,"props":2076,"children":2077},{},[2078],{"type":23,"value":2079},"lr_each_step.append(lr)",{"type":17,"tag":25,"props":2081,"children":2082},{},[2083],{"type":23,"value":2084},"return np.array(lr_each_step).astype(np.float32)",{"type":17,"tag":25,"props":2086,"children":2087},{},[2088],{"type":23,"value":2089},"def multi_step_lr(lr, milestones, steps_per_epoch, max_epoch, gamma=0.1):",{"type":17,"tag":25,"props":2091,"children":2092},{},[2093],{"type":23,"value":2094},"return warmup_step_lr(lr, milestones, steps_per_epoch, 0, max_epoch, gamma=gamma)",{"type":17,"tag":25,"props":2096,"children":2097},{},[2098],{"type":23,"value":2099},"def step_lr(lr, epoch_size, steps_per_epoch, max_epoch, gamma=0.1):",{"type":17,"tag":25,"props":2101,"children":2102},{},[2103],{"type":23,"value":2104},"lr_epochs = []",{"type":17,"tag":25,"props":2106,"children":2107},{},[2108],{"type":23,"value":2109},"for i in range(1, max_epoch):",{"type":17,"tag":25,"props":2111,"children":2112},{},[2113],{"type":23,"value":2114},"if i % epoch_size == 0:",{"type":17,"tag":25,"props":2116,"children":2117},{},[2118],{"type":23,"value":2119},"lr_epochs.append(i)",{"type":17,"tag":25,"props":2121,"children":2122},{},[2123],{"type":23,"value":2124},"return multi_step_lr(lr, lr_epochs, steps_per_epoch, max_epoch, gamma=gamma)",{"type":17,"tag":25,"props":2126,"children":2127},{},[2128],{"type":23,"value":2129},"def warmup_cosine_annealing_lr(lr, steps_per_epoch, warmup_epochs, max_epoch, T_max, eta_min=0):",{"type":17,"tag":25,"props":2131,"children":2132},{},[2133],{"type":23,"value":2134},"\"\"\"Cosine annealing learning 
rate.\"\"\"",{"type":17,"tag":25,"props":2136,"children":2137},{},[2138],{"type":23,"value":1995},{"type":17,"tag":25,"props":2140,"children":2141},{},[2142],{"type":23,"value":2000},{"type":17,"tag":25,"props":2144,"children":2145},{},[2146],{"type":23,"value":2005},{"type":17,"tag":25,"props":2148,"children":2149},{},[2150],{"type":23,"value":2010},{"type":17,"tag":25,"props":2152,"children":2153},{},[2154],{"type":23,"value":2040},{"type":17,"tag":25,"props":2156,"children":2157},{},[2158],{"type":23,"value":2055},{"type":17,"tag":25,"props":2160,"children":2161},{},[2162],{"type":23,"value":2163},"last_epoch = i // steps_per_epoch",{"type":17,"tag":25,"props":2165,"children":2166},{},[2167],{"type":23,"value":2060},{"type":17,"tag":25,"props":2169,"children":2170},{},[2171],{"type":23,"value":2065},{"type":17,"tag":25,"props":2173,"children":2174},{},[2175],{"type":23,"value":1201},{"type":17,"tag":25,"props":2177,"children":2178},{},[2179],{"type":23,"value":2180},"lr = eta_min + (base_lr - eta_min) * (1. 
+ math.cos(math.pi*last_epoch / T_max)) / 2",{"type":17,"tag":25,"props":2182,"children":2183},{},[2184],{"type":23,"value":2079},{"type":17,"tag":25,"props":2186,"children":2187},{},[2188],{"type":23,"value":2084},{"type":17,"tag":25,"props":2190,"children":2191},{},[2192],{"type":23,"value":2193},"def warmup_cosine_annealing_lr_V2(lr, steps_per_epoch, warmup_epochs, max_epoch, T_max, eta_min=0):",{"type":17,"tag":25,"props":2195,"children":2196},{},[2197],{"type":23,"value":2198},"\"\"\"Cosine annealing learning rate V2.\"\"\"",{"type":17,"tag":25,"props":2200,"children":2201},{},[2202],{"type":23,"value":1995},{"type":17,"tag":25,"props":2204,"children":2205},{},[2206],{"type":23,"value":2000},{"type":17,"tag":25,"props":2208,"children":2209},{},[2210],{"type":23,"value":2005},{"type":17,"tag":25,"props":2212,"children":2213},{},[2214],{"type":23,"value":2010},{"type":17,"tag":25,"props":2216,"children":2217},{},[2218],{"type":23,"value":2219},"last_lr = 0",{"type":17,"tag":25,"props":2221,"children":2222},{},[2223],{"type":23,"value":2224},"last_epoch_V1 = 0",{"type":17,"tag":25,"props":2226,"children":2227},{},[2228],{"type":23,"value":2229},"T_max_V2 = int(max_epoch*1/3)",{"type":17,"tag":25,"props":2231,"children":2232},{},[2233],{"type":23,"value":2040},{"type":17,"tag":25,"props":2235,"children":2236},{},[2237],{"type":23,"value":2055},{"type":17,"tag":25,"props":2239,"children":2240},{},[2241],{"type":23,"value":2163},{"type":17,"tag":25,"props":2243,"children":2244},{},[2245],{"type":23,"value":2060},{"type":17,"tag":25,"props":2247,"children":2248},{},[2249],{"type":23,"value":2065},{"type":17,"tag":25,"props":2251,"children":2252},{},[2253],{"type":23,"value":1201},{"type":17,"tag":25,"props":2255,"children":2256},{},[2257],{"type":23,"value":2258},"if i \u003C 
total_steps*2/3:",{"type":17,"tag":25,"props":2260,"children":2261},{},[2262],{"type":23,"value":2180},{"type":17,"tag":25,"props":2264,"children":2265},{},[2266],{"type":23,"value":2267},"last_lr = lr",{"type":17,"tag":25,"props":2269,"children":2270},{},[2271],{"type":23,"value":2272},"last_epoch_V1 = last_epoch",{"type":17,"tag":25,"props":2274,"children":2275},{},[2276],{"type":23,"value":1201},{"type":17,"tag":25,"props":2278,"children":2279},{},[2280],{"type":23,"value":2281},"base_lr = last_lr",{"type":17,"tag":25,"props":2283,"children":2284},{},[2285],{"type":23,"value":2286},"last_epoch = last_epoch-last_epoch_V1",{"type":17,"tag":25,"props":2288,"children":2289},{},[2290],{"type":23,"value":2291},"lr = eta_min + (base_lr - eta_min) * (1. + math.cos(math.pi * last_epoch / T_max_V2)) / 2",{"type":17,"tag":25,"props":2293,"children":2294},{},[2295],{"type":23,"value":2079},{"type":17,"tag":25,"props":2297,"children":2298},{},[2299],{"type":23,"value":2084},{"type":17,"tag":25,"props":2301,"children":2302},{},[2303],{"type":23,"value":2304},"def warmup_cosine_annealing_lr_sample(lr, steps_per_epoch, warmup_epochs, max_epoch, T_max, eta_min=0):",{"type":17,"tag":25,"props":2306,"children":2307},{},[2308],{"type":23,"value":2309},"\"\"\"Warmup cosine annealing learning rate.\"\"\"",{"type":17,"tag":25,"props":2311,"children":2312},{},[2313],{"type":23,"value":2314},"start_sample_epoch = 60",{"type":17,"tag":25,"props":2316,"children":2317},{},[2318],{"type":23,"value":2319},"step_sample = 2",{"type":17,"tag":25,"props":2321,"children":2322},{},[2323],{"type":23,"value":2324},"tobe_sampled_epoch = 60",{"type":17,"tag":25,"props":2326,"children":2327},{},[2328],{"type":23,"value":2329},"end_sampled_epoch = start_sample_epoch + step_sample*tobe_sampled_epoch",{"type":17,"tag":25,"props":2331,"children":2332},{},[2333],{"type":23,"value":2334},"max_sampled_epoch = 
max_epoch+tobe_sampled_epoch",{"type":17,"tag":25,"props":2336,"children":2337},{},[2338],{"type":23,"value":2339},"T_max = max_sampled_epoch",{"type":17,"tag":25,"props":2341,"children":2342},{},[2343],{"type":23,"value":1995},{"type":17,"tag":25,"props":2345,"children":2346},{},[2347],{"type":23,"value":2000},{"type":17,"tag":25,"props":2349,"children":2350},{},[2351],{"type":23,"value":2005},{"type":17,"tag":25,"props":2353,"children":2354},{},[2355],{"type":23,"value":2356},"total_sampled_steps = int(max_sampled_epoch * steps_per_epoch)",{"type":17,"tag":25,"props":2358,"children":2359},{},[2360],{"type":23,"value":2010},{"type":17,"tag":25,"props":2362,"children":2363},{},[2364],{"type":23,"value":2040},{"type":17,"tag":25,"props":2366,"children":2367},{},[2368],{"type":23,"value":2369},"for i in range(total_sampled_steps):",{"type":17,"tag":25,"props":2371,"children":2372},{},[2373],{"type":23,"value":2163},{"type":17,"tag":25,"props":2375,"children":2376},{},[2377],{"type":23,"value":2378},"if last_epoch in range(start_sample_epoch, end_sampled_epoch, step_sample):",{"type":17,"tag":25,"props":2380,"children":2381},{},[2382],{"type":23,"value":2383},"continue",{"type":17,"tag":25,"props":2385,"children":2386},{},[2387],{"type":23,"value":2060},{"type":17,"tag":25,"props":2389,"children":2390},{},[2391],{"type":23,"value":2065},{"type":17,"tag":25,"props":2393,"children":2394},{},[2395],{"type":23,"value":1201},{"type":17,"tag":25,"props":2397,"children":2398},{},[2399],{"type":23,"value":2180},{"type":17,"tag":25,"props":2401,"children":2402},{},[2403],{"type":23,"value":2079},{"type":17,"tag":25,"props":2405,"children":2406},{},[2407],{"type":23,"value":2408},"assert total_steps == len(lr_each_step)",{"type":17,"tag":25,"props":2410,"children":2411},{},[2412],{"type":23,"value":2084},{"type":17,"tag":25,"props":2414,"children":2415},{},[2416],{"type":23,"value":2417},"def 
get_lr(args):",{"type":17,"tag":25,"props":2419,"children":2420},{},[2421],{"type":23,"value":2422},"\"\"\"generate learning rate.\"\"\"",{"type":17,"tag":25,"props":2424,"children":2425},{},[2426],{"type":23,"value":2427},"if args.lr_scheduler == 'exponential':",{"type":17,"tag":25,"props":2429,"children":2430},{},[2431],{"type":23,"value":2432},"lr = warmup_step_lr(args.lr,",{"type":17,"tag":25,"props":2434,"children":2435},{},[2436],{"type":23,"value":2437},"args.lr_epochs,",{"type":17,"tag":25,"props":2439,"children":2440},{},[2441],{"type":23,"value":2442},"args.steps_per_epoch,",{"type":17,"tag":25,"props":2444,"children":2445},{},[2446],{"type":23,"value":2447},"args.warmup_epochs,",{"type":17,"tag":25,"props":2449,"children":2450},{},[2451],{"type":23,"value":2452},"args.max_epoch,",{"type":17,"tag":25,"props":2454,"children":2455},{},[2456],{"type":23,"value":2457},"gamma=args.lr_gamma,",{"type":17,"tag":25,"props":2459,"children":2460},{},[2461],{"type":23,"value":2462},")",{"type":17,"tag":25,"props":2464,"children":2465},{},[2466],{"type":23,"value":2467},"elif args.lr_scheduler == 'cosine_annealing':",{"type":17,"tag":25,"props":2469,"children":2470},{},[2471],{"type":23,"value":2472},"lr = warmup_cosine_annealing_lr(args.lr, args.steps_per_epoch, args.warmup_epochs, args.max_epoch, args.T_max, args.eta_min)",{"type":17,"tag":25,"props":2474,"children":2475},{},[2476],{"type":23,"value":2477},"elif args.lr_scheduler == 'cosine_annealing_V2':",{"type":17,"tag":25,"props":2479,"children":2480},{},[2481],{"type":23,"value":2482},"lr = warmup_cosine_annealing_lr_V2(args.lr, args.steps_per_epoch, args.warmup_epochs, args.max_epoch, args.T_max, args.eta_min)",{"type":17,"tag":25,"props":2484,"children":2485},{},[2486],{"type":23,"value":2487},"elif args.lr_scheduler == 'cosine_annealing_sample':",{"type":17,"tag":25,"props":2489,"children":2490},{},[2491],{"type":23,"value":2492},"lr = warmup_cosine_annealing_lr_sample(args.lr, args.steps_per_epoch, 
args.warmup_epochs, args.max_epoch, args.T_max, args.eta_min)",{"type":17,"tag":25,"props":2494,"children":2495},{},[2496],{"type":23,"value":1201},{"type":17,"tag":25,"props":2498,"children":2499},{},[2500],{"type":23,"value":2501},"raise NotImplementedError(args.lr_scheduler)",{"type":17,"tag":25,"props":2503,"children":2504},{},[2505],{"type":23,"value":1980},{"type":17,"tag":25,"props":2507,"children":2508},{},[2509],{"type":17,"tag":84,"props":2510,"children":2511},{},[2512],{"type":23,"value":2513},"06 Conclusion and Outlook",{"type":17,"tag":25,"props":2515,"children":2516},{},[2517],{"type":23,"value":2518},"In this paper, a novel edge-cloud collaborative training method for privacy protection is proposed. Unlike previous methods that require frequent communication between edge devices and the cloud, MistNet only needs to upload intermediate features from the edge to the cloud once during training, significantly reducing the communication volume transmitted between the edge and cloud. By quantizing, adding noise to, compressing, and perturbing the representation data, the method presented in this paper makes it more difficult to infer the original data from the representation data on the cloud, thereby increasing the level of privacy protection for the data.",{"type":17,"tag":25,"props":2520,"children":2521},{},[2522],{"type":23,"value":2523},"In addition, the first several layers of the model are used as a feature extractor after the pre-trained model is segmented, thereby reducing computing workloads on the client. The MistNet algorithm further alleviates the defects of federated learning algorithms such as FedAvg. 
Nevertheless, new federated learning-based algorithms that combine low communication volume, strong privacy protection, and minimal edge computing workloads are certainly worth further exploration and research.",{"title":7,"searchDepth":2525,"depth":2525,"links":2526},4,[],"markdown","content:news:en:2764.md","content","news/en/2764.md","news/en/2764","md",1776506045882]