---
title: "MindSpore AI for Science Series (37): An Analysis of the Paper Learning Mesh-Based Simulation"
description: "Mesh-based simulation is central to modeling complex physical systems. Because high-dimensional scientific simulations are very expensive to run, mesh resolution and numerical integration methods must trade accuracy against efficiency, which makes it especially important to use AI to obtain flow-field information quickly from a geometry."
date: 2023-08-29
cover: https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/15/ab39fa3da2954a7482042bdeef857f52.png
type: technology-blogs
category: 大V博文
---

# MindSpore AI for Science Series (37): An Analysis of the Paper Learning Mesh-Based Simulation

## I. Background

Mesh-based simulation is central to modeling complex physical systems. Because high-dimensional scientific simulations are very expensive to run, mesh resolution and the numerical integration method must trade accuracy against efficiency, which makes it especially important to use AI to obtain flow-field information quickly from a geometry. Current high-performance computing relies on two kinds of meshes, structured and unstructured, to compute complex scenarios; in practical engineering, as geometries grow more complex, unstructured meshes have gradually become the mainstream choice in numerical computation. Because the interior points of an unstructured mesh do not share identical neighboring cells and the cells can take many shapes, such a mesh cannot be converted into a regular tensor representation; a CNN therefore cannot map geometry to the flow field, and a GNN is needed to process this kind of unstructured data. DeepMind proposed MeshGraphNets [1] to solve fast flow-field prediction on unstructured meshes.

## II. Network Architecture

![image.png](https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064647.10547533717829171858840200918098:50540829073314:2400:ECFC012CD93211B86BCA7CAE5CB501EDA15BD9544086BFC2AA6DD7C071A4C8FB.png)

Figure 1. The MeshGraphNets architecture

The model is trained with an Encoder-Processor-Decoder architecture and can be iterated repeatedly at inference time to generate long trajectories. The Encoder takes the simulation mesh M^t = (V, E^M) at time t, with node set V and mesh-edge set E^M, converts it into a graph, and adds extra world-space edges. The Processor updates the embeddings of all nodes and edges through several rounds of message passing along mesh edges and world edges. The Decoder extracts features from each node and applies them to the mesh to produce the next state M^(t+1).

## 1. Encoder

The Encoder encodes the current mesh M^t into a multigraph G = (V, E^M, E^W). The mesh points become the graph nodes V, and the connections between mesh points become the mesh edges E^M; these are used to compute the internal dynamics of the mesh. For Lagrangian systems, world edges E^W are added to learn external dynamics such as (self-)collision and contact. World edges E^W are created by spatial proximity: given a fixed radius r^W on the order of the smallest mesh-edge length, a world edge is added between node i and node j whenever |x_i - x_j| < r^W, excluding node pairs already connected in the mesh.

Features are then encoded into the nodes and edges of the graph. To achieve spatial equivariance, positional features are provided as relative edge features. The relative displacement vector in mesh space u_ij = u_i - u_j and its norm |u_ij| are encoded into the mesh edge e_ij^M ∈ E^M; the relative world-space displacement vector x_ij and its norm |x_ij| are encoded into both e_ij^M ∈ E^M and e_ij^W ∈ E^W; all remaining dynamical features q_i are encoded into the node features v_i.

Finally, the features above are encoded into 128-dimensional latent vectors by MLP encoders.

![image.png](https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064711.09132942756443938127824429185509:50540829073314:2400:D4B2CB373DF031FD09D8643F4700E7BBD0BADB7669366FAF6CE0AC17ED958FE9.png)

Figure 2. Mesh space and world space

## 2. Processor

The Processor consists of L identical message-passing blocks, which generalize GraphNet blocks to multiple edge sets. Each block has its own set of network parameters and updates e_ij^M, e_ij^W, and v_i to new values, which can be written as:

![image.png](https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064734.25566545410159946860687068852989:50540829073314:2400:07C12C21A2F47D05E1DF8523121941810B772333C3B1F216A6BEDE915F2CFFCC.png)

Equation 1

where f^M, f^W, and f^V are MLPs with residual connections.

## 3. Decoder and state updater

To predict the state at time t+1 from the input at time t, the decoder uses an MLP to transform the final processed latent node features v_i into one or more output features p_i.

The output features p_i can be interpreted as (higher-order) derivatives of q_i and are integrated with a forward-Euler integrator with Δt = 1 to compute the next-step dynamical quantity q_i^(t+1). For first-order systems, p_i is integrated once: q_i^(t+1) = p_i + q_i^t; for second-order systems it is integrated twice: q_i^(t+1) = p_i + 2q_i^t - q_i^(t-1). Additional output features p_i are used directly to predict auxiliary quantities such as pressure and stress. Finally, the output mesh nodes V are updated with q_i^(t+1) to produce M^(t+1).

The dynamics model is trained by supervising the output features p_i of each node, with an L2 loss computed between the decoder output p_i and the ground truth p̂_i.

## III. Experimental Results

The paper evaluates the model on systems with different underlying PDEs, covering cloth, structural mechanics, incompressible fluids, and compressible fluids. In the AIRFOIL dataset the mesh-edge lengths range from 2×10^-4 m to 3.5 m, and meshes whose resolution changes dynamically over the course of a trajectory are also simulated.

Table 1. Performance and accuracy of MeshGraphNets versus traditional methods on different datasets

![image.png](https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064755.26277722383114546843921729176861:50540829073314:2400:8589EBC5084BDB067958B03C26C966025D0050F2A70BD686EFC1A3342838F069.png)

The table above compares the performance and accuracy of MeshGraphNets against traditional methods on different datasets. On every dataset the speedup is on the order of hundreds of times, but as the number of simulation steps grows, error accumulates and the results drift, so accuracy degrades. This problem is addressed in DeepMind's ICLR 2022 paper "Predicting Physics in Mesh-reduced Space with Temporal Attention" [2], whose approach will be analyzed in a later post.

MeshGraphNets also generalizes well outside the training distribution, including to different underlying system parameters, mesh shapes, and mesh sizes, because architectures that use relative encodings on graphs have proven highly conducive to generalization. On the airfoil dataset, the model was verified at steeper angles of attack (-35° to 35° vs. -25° to 25° in the training set) and higher Mach numbers (0.7 to 0.9 vs. 0.2 to 0.7 in the training set); the predictions remain plausible, and the RMSE rises only from 11.5 (training range) to 12.4 (steeper angles) and 13.1 (higher Mach numbers). Similarly, a model was trained on a variant of the FLAGDYNAMIC dataset in which wind speed and direction vary between trajectories but stay constant within each trajectory; at inference time, the wind speed and direction can be changed freely. This shows that the physics the model has learned can extrapolate to parameter ranges it was never trained on.

## IV. Summary

MeshGraphNets is a general mesh-based method that can model a wide range of physical systems accurately and efficiently, generalizes well, and can be scaled up at inference time. Compared with traditional solvers it enables much more efficient simulation, and because it is differentiable it is useful for design-optimization and optimal-control tasks. MeshGraphNets will encourage researchers to propose better and newer network architectures for fast flow-field prediction on structured and unstructured meshes, laying a solid foundation for further reducing accumulated error and improving generalization.

## References

[1] Pfaff T, Fortunato M, Sanchez-Gonzalez A, et al. Learning mesh-based simulation with graph networks[J]. arXiv preprint arXiv:2010.03409, 2020. [https://arxiv.org/abs/2010.03409](https://link.zhihu.com/?target=https%3A//arxiv.org/abs/2010.03409)

[2] Han X, Gao H, Pfaff T, et al. Predicting physics in mesh-reduced space with temporal attention[J]. arXiv preprint arXiv:2201.09113, 2022. [https://arxiv.org/abs/2201.09113](https://link.zhihu.com/?target=https%3A//arxiv.org/abs/2201.09113)