[{"data":1,"prerenderedAt":274},["ShallowReactive",2],{"content-query-5MXwcP3v2i":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":268,"_id":269,"_source":270,"_file":271,"_stem":272,"_extension":273},"/technology-blogs/zh/2722","zh",false,"","MindSpore AI科学计算系列（36）：MPI高阶特性和科学计算案例","MPI(Message Passing Interface)即消息传递接口，是消息传递函数库的标准规范，由MPI论坛开发，是科学计算和高性能计算最常用最重要的并行分布式接口之一。","2023-08-23","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/15/3b5eb3c609884efaaca214aa0332b749.png","technology-blogs","大V博文",{"type":15,"children":16,"toc":265},"root",[17,25,35,40,45,53,58,67,72,79,84,92,97,102,109,114,119,126,131,136,143,148,156,164,169,177,182,187,195,200,205,210,224,236,241,253],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"mindspore-ai科学计算系列36mpi高阶特性和科学计算案例",[23],{"type":24,"value":8},"text",{"type":18,"tag":26,"props":27,"children":28},"p",{},[29],{"type":18,"tag":30,"props":31,"children":32},"strong",{},[33],{"type":24,"value":34},"一．MPI基本情况",{"type":18,"tag":26,"props":36,"children":37},{},[38],{"type":24,"value":39},"MPI(Message Passing Interface)即消息传递接口，是消息传递函数库的标准规范，由MPI论坛开发，是科学计算和高性能计算最常用最重要的并行分布式接口之一。MPI属于OSI参考模型的第五层或更高，通常底层通过传输层的Sockets或TCP实现。",{"type":18,"tag":26,"props":41,"children":42},{},[43],{"type":24,"value":44},"MPI从1994年提出1.0版本，引进了基本的消息传递概念，到1997年演进到2.0版本，增加了单边通信和并行 I/O 特性，至此定义了MPI的基本接口功能集。从3.0 版本开始逐渐加入众多提升性能和扩展使用场景的接口，如 非阻塞集合通信、近邻集合通信、共享内存扩展、近邻集合通信、MPI_T接口扩充等。近年发布的4.0和4.1版本， 特性进一步增强，如支持混合编程模型的扩展、容错性、持久化集合，性能断言和提示、RMA/单边通信等，足以支撑大型分布式超级应用。",{"type":18,"tag":26,"props":46,"children":47},{},[48],{"type":18,"tag":30,"props":49,"children":50},{},[51],{"type":24,"value":52},"二．实现版本和主要特性支持",{"type":18,"tag":26,"props":54,"children":55},{},[56],{"type":24,"value":57},"MPI比较流行的实现版本为MPICH、Open MPI及MVAPICH等，此外各个厂商也会在自有芯片或系统的基础上加入特定优化而制定版本，如Intel、IBM、Microsoft等。目前较为成熟的MPI 3.1有较多厂商的支持和实现，而MPI 4.0目前支持较好的只有MPICH、Open MPI，但其主要功能也都有覆盖。",{"type":18,"tag":26,"props":59,"children":60},{},[61],{"type":18,"tag":62,"props":63,"children":66},"img",{"alt":64,"src":65},"image.png","https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064425.61624259774266170888103211805866:50540829072914:2400:3B25E7BD19139D85AAFC865CAC4AB9A6ABE5D26EF92CC6748D4C7ED8767ED9D9.png",[],{"type":18,"tag":26,"props":68,"children":69},{},[70],{"type":24,"value":71},"图1 MPI 3.1实现软件概况",{"type":18,"tag":26,"props":73,"children":74},{},[75],{"type":18,"tag":62,"props":76,"children":78},{"alt":64,"src":77},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064442.89585333681533497368573645762409:50540829072914:2400:1C491CF1DD316D9184D9B97DD61A0B0F2D95D8D09DBA6D9F6E09D67DE304D74D.png",[],{"type":18,"tag":26,"props":80,"children":81},{},[82],{"type":24,"value":83},"图2 MPI 4.0实现软件概况",{"type":18,"tag":26,"props":85,"children":86},{},[87],{"type":18,"tag":30,"props":88,"children":89},{},[90],{"type":24,"value":91},"三．MPI 高阶特性",{"type":18,"tag":26,"props":93,"children":94},{},[95],{"type":24,"value":96},"(1)**共享内存。**MPI 1和MPI 2 不支持直接共享内存，只能通过消息传递的方式读取或更新，但本质上进程之间内存空间不共享，不能以常规方式加载和存储。这些进程间的显式消息传递和远端内存访问操作都需要额外的内存复制，这降低了内存性能并增加了内存消耗。MPI 3开始将共享内存窗口的部分内存空间暴露给其他进程，这种可移植的内存共享机制允许在统一的编程模式下进行常规的MPI操作和共享内存操作，避免使用了外部共享内存编程模型带来的问题。",{"type":18,"tag":26,"props":98,"children":99},{},[100],{"type":24,"value":101},"(2)**混合编程。**为了应对大量CPU、CPU核、GPU等加速硬件的混合架构， MPI从3.0开始新增混合编程特性，使其可以更好地处理节点和数据并行编程模型。比较常见的为MPI + OpenMP，可以进行节点级和线程级的混合并行，此外和pthreads混合也是一种常见方式。MPI还提供了一定级别的线程安全编程接口MPI_Thread_init，分别支持MPI_THREAD_SINGLE(单线程场景)、MPI_THREAD_FUNNELLED(循环场景)和MPI_THREAD_MULTIPLE(完全多线程场景) 。",{"type":18,"tag":26,"props":103,"children":104},{},[105],{"type":18,"tag":62,"props":106,"children":108},{"alt":64,"src":107},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064503.99042126627652851227017037832431:50540829072914:2400:4B4374D96DB51BC66E2ED9FAF642214F3F36B15B20B50E5A66EA30691613AECF.png",[],{"type":18,"tag":26,"props":110,"children":111},{},[112],{"type":24,"value":113},"图3 MPI + OpenMP混合编程模型示意图",{"type":18,"tag":26,"props":115,"children":116},{},[117],{"type":24,"value":118},"(3)**容错性。**MPI 4提供的关键特性，主要提供可移植的应用容错和快速恢复机制。在每个迭代，MPI和应用的状态数据均会被记录Checkpoint，当出现失败时支持从最近的Checkpoint恢复",{"type":18,"tag":26,"props":120,"children":121},{},[122],{"type":18,"tag":62,"props":123,"children":125},{"alt":64,"src":124},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064521.61469264154905717749726968154791:50540829072914:2400:6051818EA82E1C152F9D7460EB0B1A07CFA1377BE276BE1041CF87667C73FB6A.png",[],{"type":18,"tag":26,"props":127,"children":128},{},[129],{"type":24,"value":130},"图4 MPI 4 容错机制示意图",{"type":18,"tag":26,"props":132,"children":133},{},[134],{"type":24,"value":135},"(4) **RMA/单边通信。**单边通信是指将数据交换和同步进行解耦，交换数据时不需要远端进程进行同步。这使得一些不规则的通信模式(Communication Pattern)更容易实现，不需要额外的步骤来确定需要进行多少次 Send/Recv。如果系统硬件支持远端内存访问(Remote Memory Access, RMA)，则可以获得比 Send/Recv更好的性能。单边通信特性从MPI 3开始提供，并在MPI 4进一步优化增强。",{"type":18,"tag":26,"props":137,"children":138},{},[139],{"type":18,"tag":62,"props":140,"children":142},{"alt":64,"src":141},"https://fileserver.developer.huaweicloud.com/FileServer/getFile/cmtybbs/e64/154/b38/90a1d5d431e64154b387b3660e356ff5.20230830064537.77962643486378517449376954429213:50540829072914:2400:7534FB6D16C23EC60FC9E94B9C0CDB9F13E9B8693D56343F4580E3A81B319D61.png",[],{"type":18,"tag":26,"props":144,"children":145},{},[146],{"type":24,"value":147},"图5 双边通信VS单边通信示意图",{"type":18,"tag":26,"props":149,"children":150},{},[151],{"type":18,"tag":30,"props":152,"children":153},{},[154],{"type":24,"value":155},"四．MPI在科学计算的案例",{"type":18,"tag":26,"props":157,"children":158},{},[159],{"type":18,"tag":30,"props":160,"children":161},{},[162],{"type":24,"value":163},"(1)气象模拟",{"type":18,"tag":26,"props":165,"children":166},{},[167],{"type":24,"value":168},"在天气模拟中，大气层可以看作一个三维球面，模拟时需要计算全球各个网格点温度、湿度、风速、降水、气压等，使用MPI可以将全球的网格点划分成若干个子区域，然后将子区域划分到不同的计算节点上，进行并行计算和模拟。由于各个网格在模拟过程中，边界互相影响，所以需要将边界一定范围内的数据进行信息交换，在每个模拟时间步上。通过利用MPI进行并行计算和数据通信，可以实现天气模拟的加速，预报的准确性和效率。但根据实践，使用MPI的基本操作并不足以使模拟系统达到高性能，如通信的正交性和适当的缓存使用之间往往会产生不平衡，需要研究者自行设计进程分配策略。针对模拟中边界数据量大的情形，则需要使用单边异步和RMA共享内存等来进行计算和通信的互相掩盖。",{"type":18,"tag":26,"props":170,"children":171},{},[172],{"type":18,"tag":30,"props":173,"children":174},{},[175],{"type":24,"value":176},"(２)分子动力学模拟",{"type":18,"tag":26,"props":178,"children":179},{},[180],{"type":24,"value":181},"分子动力学通过模拟分子体系的运动，不仅可以得到原子的运动轨迹，观察原子运动过程中各种微观细节，还可以由体系的不同构成状态中抽取样本进一步计算热力学性质等其它宏观性质。分子动力学模拟的准确度取决于原子在运动过程中受到环境以及自身体系其它原子的相互作用，这种相互可以用量子力学的理论描述，也可以采用经验性的方法计算，还可以使用近年流行的AI4SCI方法使用神经网络进行推理计算，根据相互作用的近似程度不同，研究范围从几个原子到几千几万个原子甚至上亿个原子不等。",{"type":18,"tag":26,"props":183,"children":184},{},[185],{"type":24,"value":186},"对于大体系分子模拟，往往根据几何空间将其划分成若干个网格区域，并假设受力仅受在当前网格区域和周围邻域影响，这样便可将庞大的体系按照网格域将计算分解到不同的计算节点。在每个时间步骤或每几个时间步骤，由于原子运动等，导致网格区域内所属原子发生变化及邻域发生变动，需要使用MPI通信交换数据，由于邻域会被周围的多个区域共享或被不规则划分共享，导致数据同步需要MPI的机制精确地进行，防止脏读脏写等。",{"type":18,"tag":26,"props":188,"children":189},{},[190],{"type":18,"tag":30,"props":191,"children":192},{},[193],{"type":24,"value":194},"(3)流体力学仿真",{"type":18,"tag":26,"props":196,"children":197},{},[198],{"type":24,"value":199},"流体力学仿真原理是基于一系列的控制方程，由于N-S等方程组的求解极为困难，因此在实际应用中常常采用有限体积或有限元的数值解法来求解。具体就是通过将流体分割成很多很小的体积单元，即网格，然后对每一个网格进行力学特性的计算和模拟。",{"type":18,"tag":26,"props":201,"children":202},{},[203],{"type":24,"value":204},"流体力学仿真种常用到的几种并行方式包括数据并行，任务并行和混合并行。数据并行如OpenMP，计算域不分区但内存共享，各线程计算同一个分区上的不同数据，该方式实现简单，但可扩展度小。任务并行如MPI，计算域分区但内存不共享，各进程独立计算对应分区，分区间进行MPI通信，该方式适合大规模扩展但实现较为复杂，对软件架构要求也较高。混合并行则是MPI + OpenMP的方式，节点内用OpenMP跨节点则选用MPI，在保证通信效率的同时也适合大规模扩展，该方式还可进一步分为粗粒度混合并行和细粒度混合并行，具体可参考国产CFD软件风雷相关文档。",{"type":18,"tag":26,"props":206,"children":207},{},[208],{"type":24,"value":209},"参考文献",{"type":18,"tag":26,"props":211,"children":212},{},[213,215],{"type":24,"value":214},"[1] ",{"type":18,"tag":216,"props":217,"children":221},"a",{"href":218,"rel":219},"https://link.zhihu.com/?target=https%3A//www.mpi-forum.org/",[220],"nofollow",[222],{"type":24,"value":223},"https://www.mpi-forum.org",{"type":18,"tag":26,"props":225,"children":226},{},[227,229],{"type":24,"value":228},"[2] ",{"type":18,"tag":216,"props":230,"children":233},{"href":231,"rel":232},"https://link.zhihu.com/?target=https%3A//github.com/mpiwg-ft/ft-issues/blob/master/slides/MPI_Stages.pdf",[220],[234],{"type":24,"value":235},"https://github.com/mpiwg-ft/ft-issues/blob/master/slides/MPI_Stages.pdf",{"type":18,"tag":26,"props":237,"children":238},{},[239],{"type":24,"value":240},"[3] HE Qiang, LI Yongjian, HUANG Weifeng, LI Decai, HU Yang, WANG Yuming. Parallel simulations of large-scale particle-fluid two-phase flows with the lattice Boltzmann method based on an MPI+OpenMP mixed programming model. Journal of Tsinghua University (Science and Technology), 2019, 59(10): 847-853.",{"type":18,"tag":26,"props":242,"children":243},{},[244,246],{"type":24,"value":245},"[4] ",{"type":18,"tag":216,"props":247,"children":250},{"href":248,"rel":249},"https://link.zhihu.com/?target=https%3A//spcl.inf.ethz.ch/Publications/.pdf/hoefler-hpcac19-fompi-spin.pdf",[220],[251],{"type":24,"value":252},"https://spcl.inf.ethz.ch/Publications/.pdf/hoefler-hpcac19-fompi-spin.pdf",{"type":18,"tag":26,"props":254,"children":255},{},[256,258],{"type":24,"value":257},"[5] ",{"type":18,"tag":216,"props":259,"children":262},{"href":260,"rel":261},"https://link.zhihu.com/?target=http%3A//www.cardc.cn/nnw/software/phenglei%23intro",[220],[263],{"type":24,"value":264},"http://www.cardc.cn/nnw/software/p",{"title":7,"searchDepth":266,"depth":266,"links":267},4,[],"markdown","content:technology-blogs:zh:2722.md","content","technology-blogs/zh/2722.md","technology-blogs/zh/2722","md",1776506122998]