[{"data":1,"prerenderedAt":381},["ShallowReactive",2],{"content-query-aXKak3HiwH":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":375,"_id":376,"_source":377,"_file":378,"_stem":379,"_extension":380},"/news/zh/451","zh",false,"","MindSpore开源框架加持，如何「炼出」首个千亿参数、TB级内存的中文预训练语言模型？","千亿参数量的中文大规模预训练语言模型时代到来。","2021-04-27","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/15d5a3ab15a141c8a7cd020aa6e95136.png","news",{"type":14,"children":15,"toc":372},"root",[16,24,30,35,39,44,49,54,59,64,69,74,79,84,89,94,99,104,109,114,119,124,129,134,139,147,152,157,162,167,172,177,182,187,194,199,204,211,216,221,228,233,238,245,250,255,262,267,274,283,288,293,298,305,310,317,322,330,335,340,345,350,355],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"mindspore开源框架加持如何炼出首个千亿参数tb级内存的中文预训练语言模型",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"机器之心报道",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":23,"value":34},"作者：思",{"type":17,"tag":25,"props":36,"children":37},{},[38],{"type":23,"value":9},{"type":17,"tag":25,"props":40,"children":41},{},[42],{"type":23,"value":43},"近段时间，中文大规模预训练语言模型圈有些热闹。26 亿参数量的「悟道 · 文源」， 270 亿参数量的 PLUG，以及昨天华为云发布的千亿级别「盘古」NLP 大模型，预训练语言模型已经成长到仅加载就需要 TB 级的内存或显存。",{"type":17,"tag":25,"props":45,"children":46},{},[47],{"type":23,"value":48},"我们可以直观地想到，「盘古」效果理应更好，但计算量需求也更大，训练起来更困难。",{"type":17,"tag":25,"props":50,"children":51},{},[52],{"type":23,"value":53},"然而「盘古」实际上是这样一次探索：开源框架 MindSpore，昇腾基础软硬件平台，加上超大规模中文预训练模型，意味着基础设施已然完善了。",{"type":17,"tag":25,"props":55,"children":56},{},[57],{"type":23,"value":58},"这项工作由华为以及北京大学相关技术团队联手完成，在昇腾基础软硬件平台，以及 MindSpore 
框架自动并行等黑科技的帮助下，训练出当前最大的中文预训练模型。",{"type":17,"tag":25,"props":60,"children":61},{},[62],{"type":23,"value":63},"那么量级不断拔高的盘古大模型是如何训练出来的？",{"type":17,"tag":25,"props":65,"children":66},{},[67],{"type":23,"value":68},"接下来，让我们细致解读下「盘古」背后的关键技术。",{"type":17,"tag":25,"props":70,"children":71},{},[72],{"type":23,"value":73},"千亿参数，TB 级内存的模型",{"type":17,"tag":25,"props":75,"children":76},{},[77],{"type":23,"value":78},"以盘古 2000 亿为例，如果我们训练时权重都用标准的 FP32 数据格式，那么算下来，权重占的空间就达到了 750GB，训练过程中内存开销还会数倍上升。这 750GB 参数，不是放在硬盘上，也不是加载到内存中，而是需要移到昇腾 Atlas 训练服务器 HBM（High Bandwidth Memory，高带宽存储器）内存中，以利用昇腾 Atlas 训练服务器训练模型。",{"type":17,"tag":25,"props":80,"children":81},{},[82],{"type":23,"value":83},"模型大，意味着数据也大，而且都需要是高质量数据。为了满足数据需求，研发团队从互联网爬取了 80 TB 文本，并最后清洗为 1 TB 的中文数据集。",{"type":17,"tag":25,"props":85,"children":86},{},[87],{"type":23,"value":88},"这样的模型与数据，已经不是我们几台服务器能加载上的了，更不用说进行训练。好在研发团队会提供 API，一般算法工程师直接调用接口就能试试效果。",{"type":17,"tag":25,"props":90,"children":91},{},[92],{"type":23,"value":93},"可以说，目前盘古是业界首创的千亿规模中文预训练模型，其中最高参数量达 2000 亿。",{"type":17,"tag":25,"props":95,"children":96},{},[97],{"type":23,"value":98},"超大规模自动并行，算法工程师的福音",{"type":17,"tag":25,"props":100,"children":101},{},[102],{"type":23,"value":103},"先考虑一个问题，你想到如何训练这样的大模型了吗？",{"type":17,"tag":25,"props":105,"children":106},{},[107],{"type":23,"value":108},"如果给你足够的计算力，你能想到如何训练这么大的模型吗？我们最常用的分布式训练方式是数据并行，但单独这么做肯定是不行的，因为没有哪个计算硬件能放下 750GB 的参数。那么再加上模型并行呢？又产生了新问题：我们该如何拆分如此巨大的「盘古」？硬件产品（如 NPU、GPU 等）之间的梯度流、数据流通信又是什么样的？",{"type":17,"tag":25,"props":110,"children":111},{},[112],{"type":23,"value":113},"显然训练如此庞大的模型，远比我们想象中的复杂，需要大量的工程化操作，并保证这些操作不会或极少影响到模型最终收敛效果。",{"type":17,"tag":25,"props":115,"children":116},{},[117],{"type":23,"value":118},"难道盘古真的靠手动并行优化？",{"type":17,"tag":25,"props":120,"children":121},{},[122],{"type":23,"value":123},"如果手动来写分布式训练逻辑，那么需要综合考虑计算量与类型、集群带宽、拓扑结构、样本数量等等一大堆复杂的东西，然后再设计出性能比较优秀的并行切分策略，并编写大量并行切分和节点间的通信代码。如果系统环境变了，还要重新设计并修改算法，想想就觉得头大。",{"type":17,"tag":25,"props":125,"children":126},{},[127],{"type":23,"value":128},"倘若我们用 
TensorFlow 或其他类似框架，MirroredStrategy 这一系列自带的分布式策略完全用不上，看起来自行写并行策略是必不可少的。然而，盘古真正的训练是一种软硬件协同的方式：MindSpore 计算框架、CANN 异构计算架构与昇腾基础软硬件平台构成了整套基础设施。其中，MindSpore 提供了至关重要的自动并行能力。",{"type":17,"tag":25,"props":130,"children":131},{},[132],{"type":23,"value":133},"融合 5 大维度，强大的自动并行",{"type":17,"tag":25,"props":135,"children":136},{},[137],{"type":23,"value":138},"MindSpore 自动并行提供了 5 维的并行方式：数据并行、算子级模型并行、Pipeline 模型并行、优化器模型并行和重计算，并且在图编译阶段，有机融合了这 5 个维度的并行。这 5 维并行方式组合起来构成了盘古的并行策略。",{"type":17,"tag":25,"props":140,"children":141},{},[142],{"type":17,"tag":143,"props":144,"children":146},"img",{"alt":7,"src":145},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/73dd39a7cec247ccb72c8ddb5694f17d.jpg",[],{"type":17,"tag":25,"props":148,"children":149},{},[150],{"type":23,"value":151},"a. 数据并行",{"type":17,"tag":25,"props":153,"children":154},{},[155],{"type":23,"value":156},"数据并行是最基本、应用最广的并行方式，其将训练数据（mini-batch）切分，每台设备取得其中一份；每台设备拥有完整的模型。在训练时，每台设备经过梯度计算后，需要经过设备间的梯度同步，然后才能进行模型参数的更新。",{"type":17,"tag":25,"props":158,"children":159},{},[160],{"type":23,"value":161},"b. 
算子级模型并行",{"type":17,"tag":25,"props":163,"children":164},{},[165],{"type":23,"value":166},"算子级模型并行是对模型网络中的每个算子涉及到的张量进行切分。MindSpore 对每个算子都独立建模，每个算子可以拥有不同的切分策略。",{"type":17,"tag":25,"props":168,"children":169},{},[170],{"type":23,"value":171},"以矩阵乘算子 MatMul(x, w) 为例，x 是训练数据，w 是模型参数，两者都是二维矩阵。并行策略 ((4, 1), (1, 1)) 表示将 x 按行切 4 份，保持 w 不切。如果一共有 4 台设备，那么每台设备拥有一份 x 的切片和完整的 w。",{"type":17,"tag":25,"props":173,"children":174},{},[175],{"type":23,"value":176},"c. Pipeline 模型并行",{"type":17,"tag":25,"props":178,"children":179},{},[180],{"type":23,"value":181},"Pipeline 模型并行将模型按层分成多个 stage，再把各个 stage 映射到多台设备上。为了提高设备资源的利用率，又将 mini-batch 划分成多个 micro-batch，这样就能够使得不同设备在同一时刻处理不同 micro-batch 的数据。",{"type":17,"tag":25,"props":183,"children":184},{},[185],{"type":23,"value":186},"一种 Pipeline 并行方式（GPipe）要求反向计算要等所有设备的正向计算完成后才开始，而反向计算可能依赖于正向的输出，导致每个卡正向计算过程中累积的 activation 内存与 micro-batch 数量成正比，从而限制了 micro-batch 的数量。MindSpore 的 Pipeline 并行中，将反向提前，每个 micro-batch 计算完成后，就开始计算反向，有效降低 activation 存储时间，从而提升整体并行效率。",{"type":17,"tag":25,"props":188,"children":189},{},[190],{"type":17,"tag":143,"props":191,"children":193},{"alt":7,"src":192},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/369aaf3e78114966b2eaa974e6bcca66.jpg",[],{"type":17,"tag":25,"props":195,"children":196},{},[197],{"type":23,"value":198},"d. 优化器模型并行",{"type":17,"tag":25,"props":200,"children":201},{},[202],{"type":23,"value":203},"优化器模型并行将优化器涉及到的参数和梯度切分到多台设备上。以 Adam 优化器为例，其内部可能有多份与权重同等大小的「动量」需要参与计算。在数据并行的情况下，每个卡都拥有完整的「动量」，它们在每个卡上都重复计算，造成了内存及计算的浪费。通过引入优化器并行，每个卡只保存权重及「动量」的切片，能降低每个卡的静态内存及提升计算效率。",{"type":17,"tag":25,"props":205,"children":206},{},[207],{"type":17,"tag":143,"props":208,"children":210},{"alt":7,"src":209},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/8e3dd610d90242ed974fb1c09a25a79c.png",[],{"type":17,"tag":25,"props":212,"children":213},{},[214],{"type":23,"value":215},"e. 
重计算",{"type":17,"tag":25,"props":217,"children":218},{},[219],{"type":23,"value":220},"重计算 (Rematerialization) 针对正向算子的输出累积保存在内存中，导致内存峰值过大的问题，舍弃了部分正向算子的输出，而是在反向阶段用到时再重新计算一遍。这样做有效地降低了训练过程中的内存使用峰值。如下图所示，第一个内存峰值通过重计算消除，第二个内存峰值可以通过前面讲到的优化器并行消除。",{"type":17,"tag":25,"props":222,"children":223},{},[224],{"type":17,"tag":143,"props":225,"children":227},{"alt":7,"src":226},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/32f8998bda3445579d19aef72f9ff446.png",[],{"type":17,"tag":25,"props":229,"children":230},{},[231],{"type":23,"value":232},"有了这 5 维的并行维度后，如何将其组合起来作用于盘古，并且如何将切分后的模型分片分配到每台设备上仍然是难题。MindSpore 自动并行把这 5 个维度的并行有机组合起来，实现了非常高效的大模型分布式训练能力。",{"type":17,"tag":25,"props":234,"children":235},{},[236],{"type":23,"value":237},"下图 (b) 是一个典型的树形硬件拓扑结构，其带宽随着树深度的增加而降低，并且会产生一些流量冲突。为了利用此特征，MindSpore 的目标是最大化计算通信比：将通信量大的并行方式（算子级并行）放置在服务器内部的多卡之间；将通信量较小的（Pipeline 并行）放置在同一机架内的服务器间；将数据并行（优化器并行）的部分放置在不同机架间，因为该通信可以和计算同时执行（overlap），对带宽要求较低。",{"type":17,"tag":25,"props":239,"children":240},{},[241],{"type":17,"tag":143,"props":242,"children":244},{"alt":7,"src":243},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/75fbf7c2b76b4951b38beb398e5270e8.jpg",[],{"type":17,"tag":25,"props":246,"children":247},{},[248],{"type":23,"value":249},"在盘古 2000 亿模型中，MindSpore 将 64 层（layer）划分为 16 个 stage，每个 stage 包含 4 层。在每层中，利用算子级并行的方式对张量进行切分。",{"type":17,"tag":25,"props":251,"children":252},{},[253],{"type":23,"value":254},"如下图中的 Q、K、V 的参数在实际中（按列）被切了 8 份，输入张量（按行）被切了 16 份，输出张量因此被切了 128 份（8×16）。重计算配置是配置在每层内的，也就是重计算引入的多余的计算量不会超过一层的计算量。总计，MindSpore 使用了 2048 块昇腾处理器来训练盘古。",{"type":17,"tag":25,"props":256,"children":257},{},[258],{"type":17,"tag":143,"props":259,"children":261},{"alt":7,"src":260},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/3eddd655f23347149c8cc5f94dfccf38.png",[],{"type":17,"tag":25,"props":263,"children":264},{},[265],{"type":23,"value":266},"MindSpore 
对外屏蔽了复杂并行实现的细节，使得用户开发分布式模型就像编写单机模型脚本那样简单。用户在单机脚本的基础上，仅通过少量配置就能实现多维度的混合并行。下图是简化版的盘古脚本，其中红色加粗字体表示的是在 MindSpore 中的并行策略。将红色加粗字体去掉，则是单机脚本。",{"type":17,"tag":25,"props":268,"children":269},{},[270],{"type":17,"tag":143,"props":271,"children":273},{"alt":7,"src":272},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/7bb9c4ff3d5042ee868eb4b1db7e7a10.jpg",[],{"type":17,"tag":25,"props":275,"children":276},{},[277],{"type":17,"tag":278,"props":279,"children":280},"strong",{},[281],{"type":23,"value":282},"图算跨层联合优化，发挥硬件极致性能",{"type":17,"tag":25,"props":284,"children":285},{},[286],{"type":23,"value":287},"除了跨节点间的大规模自动并行外，在单卡节点内，MindSpore 通过图层和算子层的跨层协同优化，来进一步发挥昇腾算力。",{"type":17,"tag":25,"props":289,"children":290},{},[291],{"type":23,"value":292},"在传统的 NN 网络中，不同算子承载的计算量和计算复杂度各不相同。如 LayerNorm 由 11 个基本算子组成，而 Add 则只有 1 个基本算子。这种基于用户角度的算子定义，通常是无法充分发挥硬件资源计算能力的：计算量过大、过复杂的算子，通常很难生成切分较好的高性能算子，从而降低设备利用率；而计算量过小的算子，由于计算无法有效隐藏数据搬移开销，也可能会造成计算的空等时延，从而降低设备利用率。",{"type":17,"tag":25,"props":294,"children":295},{},[296],{"type":23,"value":297},"为了提升硬件利用率，MindSpore 使用了图算融合优化技术，通过图层和算子层联合优化，将「用户使用角度的易用性算子」进行重组融合，然后转换为「硬件执行角度的高性能算子」，从而充分提升硬件资源利用率，进而提升整网执行性能。具体优化流程如下图所示：",{"type":17,"tag":25,"props":299,"children":300},{},[301],{"type":17,"tag":143,"props":302,"children":304},{"alt":7,"src":303},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/b2822964352b40759345cf2b6d7b0541.png",[],{"type":17,"tag":25,"props":306,"children":307},{},[308],{"type":23,"value":309},"以 LayerNorm 算子为例，通过算子拆分和重组，11 个小算子重组为 1 个单算子和 2 个融合算子。这些重组后的算子可以生成更加高性能的算子，从而大大降低了整体网络运行时间。",{"type":17,"tag":25,"props":311,"children":312},{},[313],{"type":17,"tag":143,"props":314,"children":316},{"alt":7,"src":315},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/27/d8c20a4ec1064181aad9494f562bb9cb.png",[],{"type":17,"tag":25,"props":318,"children":319},{},[320],{"type":23,"value":321},"在盘古模型中，图算融合帮助整体训练时间减少了 20% 以上。除此之外，对于其它 NLP、CV 
等任务，图算融合在优化性能方面都有不错的表现。",{"type":17,"tag":25,"props":323,"children":324},{},[325],{"type":17,"tag":278,"props":326,"children":327},{},[328],{"type":23,"value":329},"总结：超大模型训练能力的充分体现",{"type":17,"tag":25,"props":331,"children":332},{},[333],{"type":23,"value":334},"即使给我们足够的算力，超大模型的训练还是异常复杂，远比想象中的困难。对于我们一般算法工程师来说，针对某个任务，上亿参数量已经算大的了，但是并不会感到训练上会有什么困难，因为各个深度学习框架直接调用数据并行接口就能搞定。",{"type":17,"tag":25,"props":336,"children":337},{},[338],{"type":23,"value":339},"但是如果模型继续增大到百亿级、千亿级甚至万亿级，并行与优化策略的复杂度猛然上升，算法工程师一点点地编写与优化代码可太难了。MindSpore 通过自动并行，把计算逻辑和并行逻辑解耦，单卡串行代码自动实现分布式并行，从而使得算法科学家将精力都解放到模型本身上。",{"type":17,"tag":25,"props":341,"children":342},{},[343],{"type":23,"value":344},"为了从预训练获取更多的知识，GPT-3 与盘古这样的模型会越来越大，毕竟到现在我们还没看到大模型预训练效果的极限在哪。届时，这类模型对基础设施的需求会更大，并行与优化策略也会更加复杂。只有拥有足够优秀的基础设施，大规模预训练的效果才会更好，从而在知识问答、知识检索、知识推理、阅读理解等场景发挥更大作用，实现智慧客服、营销、文案生成等商业价值。",{"type":17,"tag":25,"props":346,"children":347},{},[348],{"type":23,"value":349},"大规模计算集群及软硬件协同优化，这次在盘古的训练上得到了充分体现。正如开发团队所言，「基于 MindSpore 和昇腾基础软硬件平台在千亿参数模型上的实践也是一次探索，大模型的分布式训练、超参调优、数据集组成、模型结构适应性等都存在太多的未知。现在，盘古模型效果很好，刷新了 CLUE 榜单第一，这意味着第一次基于国内软硬件协同优化，以及超大规模分布式训练，结果是令人振奋的，我们自己也具有了足够强的基础设施。」",{"type":17,"tag":25,"props":351,"children":352},{},[353],{"type":23,"value":354},"当然，也诚如以上所言，盘古只是对超大规模分布式训练、超大规模中文预训练模型的一次探索，未来还需要更多的研究工作者投入到通用智能与大规模分布式计算的研究工作中。",{"type":17,"tag":25,"props":356,"children":357},{},[358],{"type":17,"tag":359,"props":360,"children":361},"em",{},[362,364],{"type":23,"value":363},"2 千亿参数中文预训练语言模型盘古的文档代码地址：",{"type":17,"tag":365,"props":366,"children":370},"a",{"href":367,"rel":368},"https://gitee.com/mindspore/mindspore/tree/r1.2/model_zoo/official/nlp/pangu_alpha",[369],"nofollow",[371],{"type":23,"value":367},{"title":7,"searchDepth":373,"depth":373,"links":374},4,[],"markdown","content:news:zh:451.md","content","news/zh/451.md","news/zh/451","md",1776506092460]