[{"data":1,"prerenderedAt":869},["ShallowReactive",2],{"content-query-1RvcH7trtN":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":863,"_id":864,"_source":865,"_file":866,"_stem":867,"_extension":868},"/technology-blogs/zh/1982","zh",false,"","一种在大规模细粒度图像检索中学习属性感知哈希编码的方法","通过MindSpore框架完成细粒度哈希检索的任务。具体而言，在大规模图像检索要求下，如果通过实值检索，计算复杂度较高，而如果使用简短的二值哈希编码替代原有的实值编码进行图像检索，检索效率就可以大大提升。而现有的哈希方法中，其哈希值没有实际意义，因此希望通过属性提取的方式使得生成的哈希编码具有实际的特征含义。","2022-12-01","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/12/07/1a32a2242fc944edb327f17db573838c.png","technology-blogs","开发者分享",{"type":15,"children":16,"toc":860},"root",[17,24,30,39,44,55,63,68,73,81,92,97,102,117,135,147,152,164,183,188,217,225,291,299,310,331,368,474,479,484,504,509,514,526,531,536,548,556,561,566,576,581,588,593,606,614,619,624,632,637,642,647,652,657,662,667,672,677,682,687,692,697,702,707,712,717,722,727,732,737,742,747,752,757,762,767,772,777,782,787,792,797,802,807,812,817,822,827,832,837,842,847,855],{"type":18,"tag":19,"props":20,"children":21},"element","h1",{"id":8},[22],{"type":23,"value":8},"text",{"type":18,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"A2-NET: Learning Attribute-Aware Hash Codes for Large-Scale Fine-Grained Image Retrieval (NeurIPS 
2021)",{"type":18,"tag":25,"props":31,"children":32},{},[33],{"type":18,"tag":34,"props":35,"children":38},"img",{"alt":36,"src":37},"1.png","https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201144615.92912056214652269012796449394024:50531206012831:2400:0E029C022556A5AC05BC8A6851A29A56FD4B65763A225D3141CC92567AD2C71D.png",[],{"type":18,"tag":25,"props":40,"children":41},{},[42],{"type":23,"value":43},"工作内容：通过MindSpore框架完成细粒度哈希检索的任务。具体而言，在大规模图像检索要求下，如果通过实值检索，计算复杂度较高，而如果使用简短的二值哈希编码替代原有的实值编码进行图像检索，检索效率就可以大大提升。而现有的哈希方法中，其哈希值没有实际意义，因此希望通过属性提取的方式使得生成的哈希编码具有实际的特征含义。",{"type":18,"tag":25,"props":45,"children":46},{},[47,53],{"type":18,"tag":48,"props":49,"children":50},"strong",{},[51],{"type":23,"value":52},"文章摘要",{"type":23,"value":54},"：我们的工作重点是处理大规模的细粒度图像检索，根据查询中的细粒度细节对描述兴趣概念(即相同的子类别标签)的图像进行排名。对于这样的实际任务，我们希望能够减轻细粒度特性(小的类间变化和大的类内变化)以及细粒度数据的爆炸性增长所带来的挑战。在本文中，我们提出了一个属性感知哈希网络(A2-NET)来生成属性感知哈希码，不仅使检索过程更加高效，而且建立哈希码与视觉属性之间的显式对应关系。具体地说，基于注意力捕获的视觉表示，我们开发了一个重建任务的编码器-解码器结构网络，可以无监督地从外观特定的视觉表示中提取高级属性特定的向量，而不需要属性注释。网络还在这些属性向量上设置了特征去相关约束，以增强它们的表示能力。最后，由保留原始相似性的属性向量生成所需的哈希码。在五个基准细粒度数据集上的定性实验表明，我们的方法优于其他方法。更重要的是，定量结果表明，所获得的哈希码能够较强地对应细粒度对象的某些关键属性。",{"type":18,"tag":25,"props":56,"children":57},{},[58],{"type":18,"tag":48,"props":59,"children":60},{},[61],{"type":23,"value":62},"研究背景：",{"type":18,"tag":25,"props":64,"children":65},{},[66],{"type":23,"value":67},"细粒度图像检索作为细粒度图像分析的重要组成部分，近年来得到了越来越多的关注。细粒度图像识别是计算机视觉和模式识别领域的基础研究课题，旨在研究对某一传统语义类别下细粒度级别的不同子类类别进行视觉识别任务，如不同子类的狗、不同子类的鸟、不同车型的汽车等……细粒度图像识别被计算机视觉国际权威学者、ICCV Helmholtz奖及Marr奖获得者Serge Belongie教授称为“视觉感知嵌入的基石性工作”。由于细粒度图像中的物体对象在类间差异中只有细微的视觉差异，却又在姿态、规模等类内差异上有较大的变化，因此有较大的检索难度。",{"type":18,"tag":25,"props":69,"children":70},{},[71],{"type":23,"value":72},"哈希学习是通过机器学习的方法，将数据映射成二进制串的形式，能显著减少数据的存储和通信开销，从而有效提高学习系统的效率。哈希学习的目的是学到数据的二进制哈希码表示，使得哈希码尽可能地保留原空间中的近邻关系，即保相似性。具体来说，每个数据点会被一个紧凑的二进制串编码，在原空间中相似的两个点应当被映射到哈希码空间中相似的两个点。哈希方法大致分为两类，即数据无关方法和数据依赖方法。在数据无关的哈希方法中，模型中的哈希函数通常随机生成，且独立于任何训练数据，但检索性能的提高需要用哈希码的长度换取。数据依赖的哈希方法试图从一些训练数据中学习哈希函数，称为学习哈希算法。与数据无关的方法相比，学习哈希算法可以用更短的哈希码实现更高的准确性。因此，在实际应用中学习哈希算法比数据无关方法更流行。随着深度学习的兴起，一些学习哈希方法将深度特征学习集成到哈希框架中，获得了很好的性能。在以往的工作中，针对大规模图像检索，已经提出了许多深度哈希方法。与深度无监督哈希方法相比，深度监督哈希方法能够充分挖掘语义信息，获得更高的检索精度。",{"type":18,"tag":25,"props":74,"children":75},{},[76],{"type":18,"tag":48,"props":77,"children":78},{},[79],{"type":23,"value":80},"方法流程概述：",{"type":18,"tag":25,"props":82,"children":83},{},[84],{"type":18,"tag":48,"props":85,"children":86},{},[87],{"type":18,"tag":34,"props":88,"children":91},{"alt":89,"src":90},"2.png","https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201144748.33079513121303415513501155488574:50531206012831:2400:2A1BAF58C1A5EDF5241F89AC61372E4994219EE4AB52F817A41447B03EF84F22.png",[],{"type":18,"tag":25,"props":93,"children":94},{},[95],{"type":23,"value":96},"步骤1，通过卷积神经网络提取图像中的全局特征与局部特征信息；",{"type":18,"tag":25,"props":98,"children":99},{},[100],{"type":23,"value":101},"注意力在人类的感知中起着非常重要的作用，它让我们着重关注一样事物或是场景的显著特征，因此我们在卷积神经网络中引入注意力机制，来获取图像的全局与局部特征以更好地表达每一幅图像的显著特征。具体而言，首先需要通过卷积神经网络提取输入图像_I_i_ 
的深度特征：",{"type":18,"tag":25,"props":103,"children":104},{},[105,107,115],{"type":23,"value":106},"![D}L9EGADA{G)G]_~]ZSA7O9.png](",{"type":18,"tag":108,"props":109,"children":113},"a",{"href":110,"rel":111},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150141.34122638836982039754409940316252:50531206012831:2400:1A59EA6E99B751380981DC7F7A3482561C7F30508D606400E5482AEA4B40762B.png",[112],"nofollow",[114],{"type":23,"value":110},{"type":23,"value":116},")",{"type":18,"tag":25,"props":118,"children":119},{},[120,122,133],{"type":23,"value":121},"在公式（1）中得到的深度特征_T__i_ 的基础上，引入C个局部注意力引导模块，记为_A^{",{"type":18,"tag":123,"props":124,"children":125},"em",{},[126,131],{"type":18,"tag":123,"props":127,"children":128},{},[129],{"type":23,"value":130},"c}",{"type":23,"value":132}," ，再引入一个全局注意力引导模块，记为_A",{"type":23,"value":134},"_'_ ，图像局部特征输出为：",{"type":18,"tag":25,"props":136,"children":137},{},[138,140],{"type":23,"value":139},"![$}_F]N}0Q4J",{"type":18,"tag":108,"props":141,"children":144},{"href":142,"rel":143},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150218.47050364410394287268169911835782:50531206012831:2400:4320C6A76A01D1FA2209310D4E6B282C95CE872B1B2A4A9FA6D5F4B7CC7A0DA7.png",[112],[145],{"type":23,"value":146},"Y3MD_R~@0$J.png",{"type":18,"tag":25,"props":148,"children":149},{},[150],{"type":23,"value":151},"图像的全局特征输出为：",{"type":18,"tag":25,"props":153,"children":154},{},[155,157],{"type":23,"value":156},"![27V8NKNJUHBM93ZL",{"type":18,"tag":108,"props":158,"children":161},{"href":159,"rel":160},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150247.31337230761524655114308789180136:50531206012831:2400:40B46BF6C6D4A7F552C0C61E24A6CF82957D430FB540459C0ED59339C64911A9.png",[112],[162],{"type":23,"value":163},"%P{T{V.png",{"type":18,"tag":25
,"props":165,"children":166},{},[167,169,174,176,181],{"type":23,"value":168},"通过对这些特征输出进行全局平均池化后得到图像的全局特征向量_x__'_ _{i}与局部深度特征向量_x___i^_",{"type":18,"tag":123,"props":170,"children":171},{},[172],{"type":23,"value":173},"c",{"type":23,"value":175}," ，依次拼接后得到图像的整体特征向量记作_x__",{"type":18,"tag":123,"props":177,"children":178},{},[179],{"type":23,"value":180},"i",{"type":23,"value":182},"。",{"type":18,"tag":25,"props":184,"children":185},{},[186],{"type":23,"value":187},"步骤2，构建哈希学习模块，将高维度的图像特征信息提取到低维度的哈希空间并构建哈希特征解码器，通过无监督的方式引导哈希学习过程中的属性特征提取方式；",{"type":18,"tag":25,"props":189,"children":190},{},[191,193,197,199,203,205,209,211,215],{"type":23,"value":192},"哈希学习模块通过一个变换矩阵_W_ 将步骤1中得到的深度特征向量_x__",{"type":18,"tag":123,"props":194,"children":195},{},[196],{"type":23,"value":180},{"type":23,"value":198}," 映射到k维的哈希空间中，记作_v__",{"type":18,"tag":123,"props":200,"children":201},{},[202],{"type":23,"value":180},{"type":23,"value":204}," 。图像的二进制哈希编码_u__",{"type":18,"tag":123,"props":206,"children":207},{},[208],{"type":23,"value":180},{"type":23,"value":210}," 由_v__",{"type":18,"tag":123,"props":212,"children":213},{},[214],{"type":23,"value":180},{"type":23,"value":216}," 通过两次激活得到：",{"type":18,"tag":25,"props":218,"children":219},{},[220],{"type":18,"tag":34,"props":221,"children":224},{"alt":222,"src":223},"CUG~SXAH9W54QVDY8SV`2@4.png","https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150448.74489899100539395363546861889707:50531206012831:2400:03116450CF968818FEEC2E20C4A077400AE817B9BFF7A86DBD122DCC6576BC95.png",[],{"type":18,"tag":25,"props":226,"children":227},{},[228,230,239,241,245,247,256,258,263,265,275,277,282,284,289],{"type":23,"value":229},"其中_v_",{"type":18,"tag":123,"props":231,"children":232},{},[233,237],{"type":18,"tag":123,"props":234,"children":235},{},[236],{"type":23,"value":180},{"type":23,"value":238}," 是k维的近似二进制编码，它是通过变换矩阵_W",{"type":23,"value":240}," 
得到的高度浓缩的图像特征表达向量，_u__",{"type":18,"tag":123,"props":242,"children":243},{},[244],{"type":23,"value":180},{"type":23,"value":246}," 则是最终得到的图像的二进制编码，即可以通过k位的比特信息表达整张图像的信息，大大压缩了检索空间。第一次激活tanh用以约束_v_",{"type":18,"tag":123,"props":248,"children":249},{},[250,254],{"type":18,"tag":123,"props":251,"children":252},{},[253],{"type":23,"value":180},{"type":23,"value":255}," 的数值空间并使梯度可进行反向传播，第二次激活则将特征向量约束为汉明编码，以加快图像检索速度。假设有n个查询点",{"type":23,"value":257},"{",{"type":18,"tag":48,"props":259,"children":260},{},[261],{"type":23,"value":262},"q__i_",{"type":23,"value":264},"}_{__i=1}^",{"type":18,"tag":123,"props":266,"children":267},{},[268,273],{"type":18,"tag":123,"props":269,"children":270},{},[271],{"type":23,"value":272},"n",{"type":23,"value":274}," 以及m个数据库点",{"type":23,"value":276},"{__v_",{"type":18,"tag":48,"props":278,"children":279},{},[280],{"type":23,"value":281},"j",{"type":23,"value":283},"}_{__j=1}^_",{"type":18,"tag":123,"props":285,"children":286},{},[287],{"type":23,"value":288},"m",{"type":23,"value":290}," ，那么哈希编码的损失可以记为：",{"type":18,"tag":25,"props":292,"children":293},{},[294],{"type":18,"tag":34,"props":295,"children":298},{"alt":296,"src":297},"7SMDH9$MR8C(V9}T%J{Y426.png","https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150953.27333571297838057822013686285357:50531206012831:2400:17DDBBE4F16F0B86D24604CEC94316F0EC173B9CE6F32EDF839DEFE6179C607B.png",[],{"type":18,"tag":25,"props":300,"children":301},{},[302,304,308],{"type":23,"value":303},"特征解码器通过重构经过tanh激活后的哈希空间特征_v__",{"type":18,"tag":123,"props":305,"children":306},{},[307],{"type":23,"value":180},{"type":23,"value":309}," 
，将属性特征复原并约束特征损失，记作：",{"type":18,"tag":25,"props":311,"children":312},{},[313,315,322,324],{"type":23,"value":314},"![G54",{"type":18,"tag":316,"props":317,"children":319},"code",{"className":318},[],[320],{"type":23,"value":321},"KNBQZA",{"type":23,"value":323},"27",{"type":18,"tag":108,"props":325,"children":328},{"href":326,"rel":327},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150658.28564922621771332866954763431210:50531206012831:2400:264BAC5165E4C102B446220CF0B16884F6812C953DB8752C43475F2EE32EE8CD.png",[112],[329],{"type":23,"value":330},"R6597CYUB.png",{"type":18,"tag":25,"props":332,"children":333},{},[334,336,340,342,346,348,353,355,360,361,366],{"type":23,"value":335},"其中_u_",{"type":18,"tag":48,"props":337,"children":338},{},[339],{"type":23,"value":180},{"type":23,"value":341},",__z__",{"type":18,"tag":123,"props":343,"children":344},{},[345],{"type":23,"value":281},{"type":23,"value":347},"∈{-1, +1}^{",{"type":18,"tag":123,"props":349,"children":350},{},[351],{"type":23,"value":352},"k}",{"type":23,"value":354}," ，",{"type":18,"tag":123,"props":356,"children":357},{},[358],{"type":23,"value":359},"S",{"type":23,"value":347},{"type":18,"tag":123,"props":362,"children":363},{},[364],{"type":23,"value":365},"n×m}",{"type":23,"value":367}," 。",{"type":18,"tag":25,"props":369,"children":370},{},[371,373,378,380,385,387,392,394,399,401,406,408,413,415,437,439,443,444,449,450,455,456,461,462,467,468,472],{"type":23,"value":372},"其中，",{"type":18,"tag":123,"props":374,"children":375},{},[376],{"type":23,"value":377},"X",{"type":23,"value":379},"={",{"type":18,"tag":123,"props":381,"children":382},{},[383],{"type":23,"value":384},"x__1",{"type":23,"value":386},"_;_ ",{"type":18,"tag":123,"props":388,"children":389},{},[390],{"type":23,"value":391},"x__2",{"type":23,"value":393},"; ...; 
",{"type":18,"tag":123,"props":395,"children":396},{},[397],{"type":23,"value":398},"x__n",{"type":23,"value":400},"}∈R^{",{"type":18,"tag":123,"props":402,"children":403},{},[404],{"type":23,"value":405},"d",{"type":23,"value":407},"×",{"type":18,"tag":123,"props":409,"children":410},{},[411],{"type":23,"value":412},"n}",{"type":23,"value":414}," ，d表示每一个特征向量_x__i_ 的维度；",{"type":18,"tag":123,"props":416,"children":417},{},[418,420,430,432],{"type":23,"value":419},"W^",{"type":18,"tag":123,"props":421,"children":422},{},[423,428],{"type":18,"tag":123,"props":424,"children":425},{},[426],{"type":23,"value":427},"T",{"type":23,"value":429}," 代表重构矩阵，为哈希变换矩阵_W",{"type":23,"value":431}," 的转置；",{"type":18,"tag":123,"props":433,"children":434},{},[435],{"type":23,"value":436},"V",{"type":23,"value":438},"'__=tanh(V)_ ，",{"type":18,"tag":123,"props":440,"children":441},{},[442],{"type":23,"value":436},{"type":23,"value":379},{"type":18,"tag":123,"props":445,"children":446},{},[447],{"type":23,"value":448},"v__1",{"type":23,"value":386},{"type":18,"tag":123,"props":451,"children":452},{},[453],{"type":23,"value":454},"v__2",{"type":23,"value":393},{"type":18,"tag":123,"props":457,"children":458},{},[459],{"type":23,"value":460},"v__n",{"type":23,"value":400},{"type":18,"tag":123,"props":463,"children":464},{},[465],{"type":23,"value":466},"k",{"type":23,"value":407},{"type":18,"tag":123,"props":469,"children":470},{},[471],{"type":23,"value":412},{"type":23,"value":473}," ，通过无监督的编码重构，可以引导哈希学习保留相对完整且重要的整体图像特征信息，使得每一维哈希空间中的信息特征进行重组后可以更全面的表达原图像中蕴含的信息。",{"type":18,"tag":25,"props":475,"children":476},{},[477],{"type":23,"value":478},"步骤3，增强步骤2中哈希模块学习得到的每个维度属性的鉴别能力，去除每个维度属性特征之间的冗余相关性。",{"type":18,"tag":25,"props":480,"children":481},{},[482],{"type":23,"value":483},"对步骤2中经过哈希变换矩阵并经过tanh激活得到的特征向量_v__'_ 
_{i}构建自正交损失，记为：",{"type":18,"tag":25,"props":485,"children":486},{},[487,489,495,497],{"type":23,"value":488},"![3U",{"type":18,"tag":490,"props":491,"children":492},"span",{},[493],{"type":23,"value":494},"@WR$~(2MHIN@$R",{"type":23,"value":496},"Q",{"type":18,"tag":108,"props":498,"children":501},{"href":499,"rel":500},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150834.26566498255902106327523503349107:50531206012831:2400:E404BAD533446FF25DE9585770F3DD93152EE5BEE85BFAD0BB20492C699263DC.png",[112],[502],{"type":23,"value":503},"W%U.png",{"type":18,"tag":25,"props":505,"children":506},{},[507],{"type":23,"value":508},"其中_I_ 为单位矩阵，这样可以消除每个维度空间学习到的属性特征的冗余相关性，使得每个维度的属性特征都有自己独特且完整的表达含义，即每一个哈希维度都可以表示一种深度的属性特征信息。",{"type":18,"tag":25,"props":510,"children":511},{},[512],{"type":23,"value":513},"整体的约束损失可以记为：",{"type":18,"tag":25,"props":515,"children":516},{},[517,519,525],{"type":23,"value":518},"![5LXSE@EYQ6WC9X]%D]Q3HC9.png](",{"type":18,"tag":108,"props":520,"children":523},{"href":521,"rel":522},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150903.87923041672466710755477159576243:50531206012831:2400:B7AE3C65E59ED5721B54750685131D6F84BE6AF8F597F3119F0F19E43877E0A5.png",[112],[524],{"type":23,"value":521},{"type":23,"value":116},{"type":18,"tag":25,"props":527,"children":528},{},[529],{"type":23,"value":530},"其中_α_ 与β 
为引入的超参数，用于对齐量纲。",{"type":18,"tag":25,"props":532,"children":533},{},[534],{"type":23,"value":535},"输入图像的二进制哈希编码输出可以记为：",{"type":18,"tag":25,"props":537,"children":538},{},[539,541],{"type":23,"value":540},"![_Z",{"type":18,"tag":108,"props":542,"children":545},{"href":543,"rel":544},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201150931.26403370971068927662265717941111:50531206012831:2400:D5B92487AA59FB52A36DCCEF2ABCCD9B14B6C034C50FE43130CC7A817152CFE5.png",[112],[546],{"type":23,"value":547},"X6IPZNGG3B)NL7LK{C6U.png",{"type":18,"tag":25,"props":549,"children":550},{},[551],{"type":18,"tag":48,"props":552,"children":553},{},[554],{"type":23,"value":555},"MindSpore实现",{"type":18,"tag":25,"props":557,"children":558},{},[559],{"type":23,"value":560},"框架安装：",{"type":18,"tag":25,"props":562,"children":563},{},[564],{"type":23,"value":565},"首先下载安装完整版的CUDA",{"type":18,"tag":25,"props":567,"children":568},{},[569],{"type":18,"tag":108,"props":570,"children":573},{"href":571,"rel":572},"https://developer.nvidia.com/cuda-11.0-update1-download-archive?target%5C_os=Linux&target%5C_arch=x86%5C_64&target%5C_distro=Ubuntu&target%5C_version=2004&target%5C_type=runfilelocal",[112],[574],{"type":23,"value":575},"https://developer.nvidia.com/cuda-11.0-update1-download-archive?target\\_os=Linux&target\\_arch=x86\\_64&target\\_distro=Ubuntu&target\\_version=2004&target\\_type=runfilelocal",{"type":18,"tag":25,"props":577,"children":578},{},[579],{"type":23,"value":580},"运行：sudo sh cuda_11.0.3_450.51.06_linux.run",{"type":18,"tag":25,"props":582,"children":583},{},[584],{"type":18,"tag":34,"props":585,"children":587},{"alt":7,"src":586},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201145157.49564555406724785877785538416180:50531206012831:2400:9ACBB28B12C7B927012CDA8385B89EDC98269C5CF60EE75A91FE2909EF4A55F1.jpg",[],{"type":18,"tag":25,"props":589,"children":590},{},[591],{"type":23,"value":592},"然后安装完整版的cuDNN：",{"type":18,"tag":25,"props":594,"children":595},{},[596,598,604],{"type":23,"value":597},"至官网 ",{"type":18,"tag":108,"props":599,"children":602},{"href":600,"rel":601},"https://developer.nvidia.com/rdp/cudnn-download",[112],[603],{"type":23,"value":600},{"type":23,"value":605}," 找到Download cuDNN v8.1.0 (January 26th, 2021), for CUDA 11.0,11.1 and 11.2",{"type":18,"tag":25,"props":607,"children":608},{},[609],{"type":18,"tag":34,"props":610,"children":613},{"alt":611,"src":612},"cke_64385.jpeg","https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201145320.18790089640456879544428755743170:50531206012831:2400:D2A4A011A26CB9F10F2A870AF96E91E7F6CAECA1778D0D4F8314B113916468A7.jpeg",[],{"type":18,"tag":25,"props":615,"children":616},{},[617],{"type":23,"value":618},"然后运行sudo dpkg -i 
加上3个deb的文件名（分别运行即可）",{"type":18,"tag":25,"props":620,"children":621},{},[622],{"type":23,"value":623},"最后到MindSpore找到对应版本pip安装即可：",{"type":18,"tag":25,"props":625,"children":626},{},[627],{"type":18,"tag":34,"props":628,"children":631},{"alt":629,"src":630},"cke_71920.png","https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/b34/d51/b6b/37a742cae9b34d51b6bf7b00bff11321.20221201145348.49870735684748971849869825498724:50531206012831:2400:26DCE86A4B4792BA3B0CFAB0AA45D227406C36C6A323B21D34BCAEE5C450D62B.png",[],{"type":18,"tag":25,"props":633,"children":634},{},[635],{"type":23,"value":636},"模型主体代码MindSpore实现：",{"type":18,"tag":25,"props":638,"children":639},{},[640],{"type":23,"value":641},"class A_2_net(nn.Cell):",{"type":18,"tag":25,"props":643,"children":644},{},[645],{"type":23,"value":646},"def __init__(self, code_length=12, num_classes=200, att_size=4, feat_size=2048, pretrained=False,",{"type":18,"tag":25,"props":648,"children":649},{},[650],{"type":23,"value":651},"finetune=False):",{"type":18,"tag":25,"props":653,"children":654},{},[655],{"type":23,"value":656},"super(A_2_net, self).__init__()",{"type":18,"tag":25,"props":658,"children":659},{},[660],{"type":23,"value":661},"self.backbone = A_2_net_backbone(pretrained=pretrained)",{"type":18,"tag":25,"props":663,"children":664},{},[665],{"type":23,"value":666},"self.refine_global = A_2_net_refine(is_local=False, pretrained=pretrained)",{"type":18,"tag":25,"props":668,"children":669},{},[670],{"type":23,"value":671},"self.refine_local = A_2_net_refine(pretrained=pretrained)",{"type":18,"tag":25,"props":673,"children":674},{},[675],{"type":23,"value":676},"self.attention = A_2_net_attention(att_size)",{"type":18,"tag":25,"props":678,"children":679},{},[680],{"type":23,"value":681},"self.finetune = finetune",{"type":18,"tag":25,"props":683,"children":684},{},[685],{"type":23,"value":686},"self.hash_layer_active = 
nn.Tanh()",{"type":18,"tag":25,"props":688,"children":689},{},[690],{"type":23,"value":691},"self.unsqueeze = ops.ExpandDims()",{"type":18,"tag":25,"props":693,"children":694},{},[695],{"type":23,"value":696},"self.mul = ops.Mul()",{"type":18,"tag":25,"props":698,"children":699},{},[700],{"type":23,"value":701},"self.normlize = ops.L2Normalize()",{"type":18,"tag":25,"props":703,"children":704},{},[705],{"type":23,"value":706},"self.concat = ops.Concat(1)",{"type":18,"tag":25,"props":708,"children":709},{},[710],{"type":23,"value":711},"self.linear = ops.MatMul()",{"type":18,"tag":25,"props":713,"children":714},{},[715],{"type":23,"value":716},"self.W = ms.Parameter(Tensor(np.random.uniform(-1, 1, (code_length, (att_size + 1) * feat_size)), ms.float32), name=\"w\", requires_grad=True)",{"type":18,"tag":25,"props":718,"children":719},{},[720],{"type":23,"value":721},"def construct(self, x):",{"type":18,"tag":25,"props":723,"children":724},{},[725],{"type":23,"value":726},"out = self.backbone(x)",{"type":18,"tag":25,"props":728,"children":729},{},[730],{"type":23,"value":731},"batch_size, channels, h, w = out.shape",{"type":18,"tag":25,"props":733,"children":734},{},[735],{"type":23,"value":736},"global_f = self.refine_global(out)",{"type":18,"tag":25,"props":738,"children":739},{},[740],{"type":23,"value":741},"att_map = self.attention(out)",{"type":18,"tag":25,"props":743,"children":744},{},[745],{"type":23,"value":746},"att_size = att_map.shape[1]",{"type":18,"tag":25,"props":748,"children":749},{},[750],{"type":23,"value":751},"att_map_rep = self.unsqueeze(att_map, 2)",{"type":18,"tag":25,"props":753,"children":754},{},[755],{"type":23,"value":756},"att_map_rep = att_map_rep.repeat(channels, axis=2)",{"type":18,"tag":25,"props":758,"children":759},{},[760],{"type":23,"value":761},"out_rep = self.unsqueeze(out, 1)",{"type":18,"tag":25,"props":763,"children":764},{},[765],{"type":23,"value":766},"out_rep = out_rep.repeat(att_size, 
axis=1)",{"type":18,"tag":25,"props":768,"children":769},{},[770],{"type":23,"value":771},"out_local = self.mul(att_map_rep, out_rep).view(batch_size * att_size, channels, h, w)",{"type":18,"tag":25,"props":773,"children":774},{},[775],{"type":23,"value":776},"local_f, avg_local_f = self.refine_local(out_local)",{"type":18,"tag":25,"props":778,"children":779},{},[780],{"type":23,"value":781},"_, channels, h, w = local_f.shape",{"type":18,"tag":25,"props":783,"children":784},{},[785],{"type":23,"value":786},"local_f = local_f.view(batch_size, att_size, channels, h, w)",{"type":18,"tag":25,"props":788,"children":789},{},[790],{"type":23,"value":791},"avg_local_f = avg_local_f.view(batch_size, att_size, channels)",{"type":18,"tag":25,"props":793,"children":794},{},[795],{"type":23,"value":796},"global_f = global_f.view(batch_size, channels)",{"type":18,"tag":25,"props":798,"children":799},{},[800],{"type":23,"value":801},"avg_local_f = self.normlize(avg_local_f)",{"type":18,"tag":25,"props":803,"children":804},{},[805],{"type":23,"value":806},"global_f = self.normlize(global_f)",{"type":18,"tag":25,"props":808,"children":809},{},[810],{"type":23,"value":811},"all_f = self.concat((avg_local_f.view(batch_size, -1), global_f))",{"type":18,"tag":25,"props":813,"children":814},{},[815],{"type":23,"value":816},"deep_S = self.linear(all_f, self.W.T)",{"type":18,"tag":25,"props":818,"children":819},{},[820],{"type":23,"value":821},"binary_like_code = self.hash_layer_active(deep_S)",{"type":18,"tag":25,"props":823,"children":824},{},[825],{"type":23,"value":826},"if self.finetune:",{"type":18,"tag":25,"props":828,"children":829},{},[830],{"type":23,"value":831},"after_f = self.linear(binary_like_code, self.W)",{"type":18,"tag":25,"props":833,"children":834},{},[835],{"type":23,"value":836},"return binary_like_code, all_f, 
after_f",{"type":18,"tag":25,"props":838,"children":839},{},[840],{"type":23,"value":841},"else:",{"type":18,"tag":25,"props":843,"children":844},{},[845],{"type":23,"value":846},"return binary_like_code",{"type":18,"tag":25,"props":848,"children":849},{},[850],{"type":18,"tag":48,"props":851,"children":852},{},[853],{"type":23,"value":854},"总结与展望：",{"type":18,"tag":25,"props":856,"children":857},{},[858],{"type":23,"value":859},"本文提出了一种基于属性感知的哈希网络，即A2-NET，用于处理大规模的细粒度图像检索任务。A2-NET的设计目标是高效且可解释的。由于是第一次对哈希可解释方向进行探索，该文也存在一定的局限性。在未来工作中，由于视觉属性在描述已知和未知实体方面都很有用，因此我们希望进一步研究识别未观测到的子类别，即基于属性感知哈希码的零样本细粒度识别。在使用MindSpore框架进行模型训练时，需要注意网络中的Dropout层不会在验证阶段自动失效，从而导致结果降低，需要预先进行设置。MindSpore框架在模型训练阶段中有更高的运行速度，可以更好地支撑大规模数据集的训练。",{"title":7,"searchDepth":861,"depth":861,"links":862},4,[],"markdown","content:technology-blogs:zh:1982.md","content","technology-blogs/zh/1982.md","technology-blogs/zh/1982","md",1776506117418]