[{"data":1,"prerenderedAt":256},["ShallowReactive",2],{"content-query-TTiWv1888Z":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":250,"_id":251,"_source":252,"_file":253,"_stem":254,"_extension":255},"/technology-blogs/zh/2026-2-4","zh",false,"","HyperParallel声明式并行能力解读","本文将拆解其核心能力，解读高效解决方案。","2026-2-4","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/11/28/8e0e0150508a4c5ba4287fa3bec8ea3f.png","technology-blogs","技术解读",{"type":15,"children":16,"toc":247},"root",[17,26,32,52,57,62,67,78,83,89,94,99,106,111,116,123,129,134,141,146,152,157,162,169,174,181,187,192,197,202,207,212,217,222,232,237,242],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"_01-张量排布表达用layout解锁分布式张量的灵活姿态",[23],{"type":24,"value":25},"text","01 张量排布表达：用Layout解锁分布式张量的“灵活姿态”",{"type":18,"tag":27,"props":28,"children":29},"p",{},[30],{"type":24,"value":31},"我们使用Layout来表达张量的分布式排布，内部包含3个成员：",{"type":18,"tag":33,"props":34,"children":35},"ul",{},[36,42,47],{"type":18,"tag":37,"props":38,"children":39},"li",{},[40],{"type":24,"value":41},"device_matrix: 描述集群中卡号的逻辑排布方式",{"type":18,"tag":37,"props":43,"children":44},{},[45],{"type":24,"value":46},"alias_name：device_matrix中每个轴的别名",{"type":18,"tag":37,"props":48,"children":49},{},[50],{"type":24,"value":51},"rank_list（可选）：device_matrix对应的设备列表",{"type":18,"tag":27,"props":53,"children":54},{},[55],{"type":24,"value":56},"比如：layout = Layout((4, 2), (\"dp\", \"mp\"), (0, 1, 2, 3, 4, 5, 6, 7))",{"type":18,"tag":27,"props":58,"children":59},{},[60],{"type":24,"value":61},"HyperParallel引入DTensor结构管理分布式张量的状态。通过声明式接口DTensor.from_local方法表达从local tensor创建一个DTensor。",{"type":18,"tag":27,"props":63,"children":64},{},[65],{"type":24,"value":66},"不同切分形态之间可以通过重排进行转换，示意如下：",{"type":18,"tag":68,"props":69,"children":71},"div",{"style":70},"text-align: 
center;",[72],{"type":18,"tag":73,"props":74,"children":77},"img",{"src":75,"style":76,"alt":7},"/category/information/technology-blogs/banner/2026-2-4/1.jpg","display: block;margin: 0 auto;max-width:70%",[],{"type":18,"tag":27,"props":79,"children":80},{},[81],{"type":24,"value":82},"DTensor支持通过DTensor.redistribute接口与期望的分布式排布Layout，显式触发张量重排布流程。",{"type":18,"tag":19,"props":84,"children":86},{"id":85},"_02-算子级并行hyperparallel的核心引擎让多卡协同更高效",[87],{"type":24,"value":88},"02 算子级并行：HyperParallel的“核心引擎”，让多卡协同更高效",{"type":18,"tag":27,"props":90,"children":91},{},[92],{"type":24,"value":93},"算子级并行即将算子的输入张量切分到多张卡上，使得每张卡计算一个分片，其汇总的结果是整个算子的计算结果。神经网络结构可以简单划分为三个部分：正向、反向、优化器。",{"type":18,"tag":27,"props":95,"children":96},{},[97],{"type":24,"value":98},"为了能够简化分布式训练当中针对并行场景的配置与适配，HyperParallel提供了声明式接口Cell.shard，使用该接口时，为Cell的输入，输出，以及Cell内的参数配置张量的分布式排布方式。HyperParallel将屏蔽分布式任务的细节，自动处理Cell内部参数的切分以及DTensor的重排布。相较于Megatron模型逻辑与分布式逻辑耦合的写法，采用HyperParallel声明式编程方式能极大简化开发的复杂度。",{"type":18,"tag":68,"props":100,"children":101},{"style":70},[102],{"type":18,"tag":73,"props":103,"children":105},{"src":104,"style":76,"alt":7},"/category/information/technology-blogs/banner/2026-2-4/2.jpg",[],{"type":18,"tag":27,"props":107,"children":108},{},[109],{"type":24,"value":110},"HyperParallel库提供接口parallelize_value_and_grad正确处理声明式并行场景下反向梯度的运算，解决反向sens值处理难题，保障训练精度。",{"type":18,"tag":27,"props":112,"children":113},{},[114],{"type":24,"value":115},"整体设计流程图如下：",{"type":18,"tag":68,"props":117,"children":118},{"style":70},[119],{"type":18,"tag":73,"props":120,"children":122},{"src":121,"style":76,"alt":7},"/category/information/technology-blogs/banner/2026-2-4/3.jpg",[],{"type":18,"tag":19,"props":124,"children":126},{"id":125},"_03-自定义并行灵活补位适配个性化并行需求",[127],{"type":24,"value":128},"03 
自定义并行：灵活补位，适配个性化并行需求",{"type":18,"tag":27,"props":130,"children":131},{},[132],{"type":24,"value":133},"自定义并行是对算子级并行能力的补充。在算子级并行中，框架自动推导算子切分后每个设备为保持数学等价性所需执行的算子序列，并生成相关的集合通信操作。而自定义并行则支持用户手写并行逻辑接入DTensor并行流程。HyperParallel提供声明式接口hyper_parallel.custom_shard，允许用户提供一个自定义函数，并描述其输入/输出张量的切分形态。自定义函数站在local tensor视角进行编程，内部可以显式调用集合通信接口，其输入/输出则通过切分形态的描述与前后的算子级并行进行衔接。",{"type":18,"tag":68,"props":135,"children":136},{"style":70},[137],{"type":18,"tag":73,"props":138,"children":140},{"src":139,"style":76,"alt":7},"/category/information/technology-blogs/banner/2026-2-4/4.jpg",[],{"type":18,"tag":27,"props":142,"children":143},{},[144],{"type":24,"value":145},"如图所示，框架将输入的global tensor转换成local tensor后进入自定义函数，自定义函数体内以local tensor视角进行编程并可以显式调用集合通信，最后框架再将自定义函数输出的local tensor转换成global tensor。",{"type":18,"tag":19,"props":147,"children":149},{"id":148},"_04-自定义并行平衡通信与内存破解冗余痛点",[150],{"type":24,"value":151},"04 HSDP：平衡通信与内存，破解冗余痛点",{"type":18,"tag":27,"props":153,"children":154},{},[155],{"type":24,"value":156},"数据并行是常用的分布式方式，核心是拆分训练数据至各节点独立计算，再通过AllReduce同步梯度。但传统方式存在模型参数与优化器状态的存储、计算冗余，制约效率。",{"type":18,"tag":27,"props":158,"children":159},{},[160],{"type":24,"value":161},"HyperParallel提供HSDP能力：通过声明式接口hyper_parallel.hsdp，将shard_level配置为level1、level2、level3，即可达到对标ZeRO-1、ZeRO-2、ZeRO-3级别的优化能力。",{"type":18,"tag":68,"props":163,"children":164},{"style":70},[165],{"type":18,"tag":73,"props":166,"children":168},{"src":167,"style":76,"alt":7},"/category/information/technology-blogs/banner/2026-2-4/5.jpg",[],{"type":18,"tag":27,"props":170,"children":171},{},[172],{"type":24,"value":173},"使用时可参考如下配置：",{"type":18,"tag":68,"props":175,"children":176},{"style":70},[177],{"type":18,"tag":73,"props":178,"children":180},{"src":179,"style":76,"alt":7},"/category/information/technology-blogs/banner/2026-2-4/6.jpg",[],{"type":18,"tag":19,"props":182,"children":184},{"id":183},"_05-流水线并行分阶段并行释放多设备协同潜力",[185],{"type":24,"value":186},"05 
流水线并行：分阶段并行，释放多设备协同潜力",{"type":18,"tag":27,"props":188,"children":189},{},[190],{"type":24,"value":191},"Pipeline（流水线）并行是将神经网络的算子按照一定规则切分为多个Stage，并将Stage映射到不同的设备上，这样可以使不同的设备计算神经网络的不同部分，从而降低单卡的显存负载，使得大模型能够运行起来。",{"type":18,"tag":27,"props":193,"children":194},{},[195],{"type":24,"value":196},"当前的流水线并行处理流程可分为以下四个步骤：",{"type":18,"tag":27,"props":198,"children":199},{},[200],{"type":24,"value":201},"1. 模型切分（Split Model）：将完整模型根据一定规则进行切分，裁剪为当前Stage需要计算的子模型。",{"type":18,"tag":27,"props":203,"children":204},{},[205],{"type":24,"value":206},"2. Stage实例化：根据切分完的模型和相关的模型参数构造Stage实例。",{"type":18,"tag":27,"props":208,"children":209},{},[210],{"type":24,"value":211},"3. Schedule实例化：指定具体的Schedule算法，进行对应Schedule的实例化。",{"type":18,"tag":27,"props":213,"children":214},{},[215],{"type":24,"value":216},"4. 执行调度（Schedule.step()）：根据不同的调度算法，执行对应的调度策略。",{"type":18,"tag":27,"props":218,"children":219},{},[220],{"type":24,"value":221},"在声明式编程方式下，模型切分的规则由开发者设置。具体地，开发者需要确定每个Stage需要计算的子模型，接着对这部分子模型进行实例化。参考脚本如下所示：",{"type":18,"tag":223,"props":224,"children":226},"pre",{"code":225},"class Transformer(nn.Cell):\n    def __init__(self, num_layers):\n        super().__init__()\n        self.embedding = nn.Embedding(...)\n        self.layers = nn.CellDict()\n        for layer_id in range(num_layers):\n            self.layers[str(layer_id)] = TransformerBlock(...)\n\n        self.output = nn.Linear(...)\n\n    def construct(self, input_ids):\n        h = self.embedding(input_ids) if self.embedding else input_ids\n        for layer in self.layers.values():\n            h = layer(h)\n\n        output = self.output(h) if self.output else h\n        return output\n\n\ndef model_split_manual(model, stage_index):\n    \"\"\"假设model共有8层，共2个stage，每个stage有4层\"\"\"\n    if stage_index == 0:\n        # stage0保留Embedding层与0-3层，删去4-7层与输出层\n        for i in range(4, 8):\n            del model.layers[str(i)]\n        model.output = None\n    else:\n        # stage1保留输出层与4-7层，删去0-3层与Embedding层\n        for i in range(4):\n            del model.layers[str(i)]\n        model.embedding = None\n",[227],{"type":18,"tag":228,"props":229,"children":230},"code",{"__ignoreMap":7},[231],{"type":24,"value":225},{"type":18,"tag":27,"props":233,"children":234},{},[235],{"type":24,"value":236},"可以看到，模型在构造后会根据不同的stage_index进行切分：stage0包含Embedding层以及0-3层TransformerBlock，stage1包含Output层以及4-7层TransformerBlock。",{"type":18,"tag":27,"props":238,"children":239},{},[240],{"type":24,"value":241},"模型切分完成之后，需要进行PipelineStage类型的实例化。其内部完成了P2P通信算子的构造、P2P通信buffer的创建和析构、正反向函数的构建以及正反向cache的自动管理，方便开发者专注于流水线并行调度逻辑的开发。流水线并行调度通常包括MicroBatch的切分、MicroBatch的执行，以及输出output的处理。为了方便用户自定义执行序调度，PipelineSchedule提供了一套自定义调度的接口。",{"type":18,"tag":27,"props":243,"children":244},{},[245],{"type":24,"value":246},"目前，HyperParallel已内置GPipe、1F1B、VPP等常用调度，也支持用户利用调度接口进行自定义调度。",{"title":7,"searchDepth":248,"depth":248,"links":249},4,[],"markdown","content:technology-blogs:zh:2026-2-4.md","content","technology-blogs/zh/2026-2-4.md","technology-blogs/zh/2026-2-4","md",1776506119647]