[{"data":1,"prerenderedAt":248},["ShallowReactive",2],{"content-query-DGb0Amp1RB":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":242,"_id":243,"_source":244,"_file":245,"_stem":246,"_extension":247},"/news/zh/642","zh",false,"","中科院自动化所基于MindSpore推出全球首个三模态预训练模型","全球首个图文音（视觉-文本-语音）三模态预训练模型！","2021-07-08","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/07/08/90ab0670c33b4c7aaafc269741b48057.png","news",{"type":14,"children":15,"toc":239},"root",[16,24,30,35,40,45,50,58,65,70,75,80,91,101,111,116,121,126,131,136,141,146,156,166,171,186,198,203,208,213,218],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"中科院自动化所基于mindspore推出全球首个三模态预训练模型",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"日前，中国科学院自动化所（简称“自动化所”）基于全场景AI计算框架MindSpore训练完成全球首个图文音（视觉-文本-语音）三模态预训练模型（OPT-Omni-Perception pre-Trainer），该模型同时具备跨模态理解与跨模态生成能力，标志着预训练模型工作获得突破性进展。",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":23,"value":34},"自GPT/Bert模型提出后，预训练模型迎来了爆发式发展，其具有在无监督情况下自动学习不同任务、并快速迁移到不同领域数据的强大能力，而多模态预训练模型被广泛认为是从限定领域的弱人工智能迈向通用人工智能的路径探索。然而，互联网音视频数据呈高速增长，占比超过80%，纯文本的预训练模型只涵盖了互联网数据中的较少部分，更丰富的语音、图像、视频等数据并未被充分利用与学习，且人类的信息获取、环境感知、知识学习与表达，都是通过多模态信息方式来执行的。OpenAI 联合创始人、首席科学家 Ilya Sutskever 
在推特上发文表示，“人工智能的长期目标是构建多模态神经网络，即AI能够学习不同模态之间的概念，从而更好地理解世界”。为实现更加通用的人工智能模型，预训练模型必然由单模态往多模态方向发展，将文本、语音、图像、视频等多模态内容联合起来进行学习。自动化所瞄准这一方向，成功构建视觉-文本-语音三模态预训练模型。",{"type":17,"tag":25,"props":36,"children":37},{},[38],{"type":23,"value":39},"目前已有的多模态预训练模型通常仅考虑两个模态（如图像和文本，或者视频和文本），忽视了周围环境中普遍存在的语音信息，并且模型极少兼具理解与生成能力，难以在生成任务与理解类任务中同时取得良好表现。针对这些问题，自动化所此次提出的视觉-文本-语音三模态预训练模型采用分别基于词条级别(Token-level)、模态级别(Modality-level)以及样本级别(Sample-level)的多层次、多任务自监督学习框架，更关注图-文-音三模态数据之间的关联特性以及跨模态转换问题，为更广泛、更多样的下游任务提供模型基础支撑。该模型不仅可实现跨模态理解（比如图像识别、语音识别等任务），也能完成跨模态生成（比如从文本生成图像、从图像生成文本、语音生成图像等任务）。灵活的自监督学习框架可同时支持三种或任两种模态弱关联数据进行预训练，有效降低了多模态数据收集与清洗成本。",{"type":17,"tag":18,"props":41,"children":43},{"id":42},"三模态预训练模型基本原理",[44],{"type":23,"value":42},{"type":17,"tag":25,"props":46,"children":47},{},[48],{"type":23,"value":49},"自动化所首次提出了视觉-文本-语音三模态预训练模型，实现了三模态间相互转换和生成。其核心原理是视觉、文本、语音不同模态通过各自编码器映射到统一语义空间，然后通过多头自注意力机制（Multi-head Self-attention）学习模态之间的语义关联以及特征对齐，形成多模态统一知识表示；再利用编码后的多模态特征，通过解码器分别生成文本、图像和语音。三模态相互转换与生成示意如图1所示：",{"type":17,"tag":25,"props":51,"children":52},{},[53],{"type":17,"tag":54,"props":55,"children":57},"img",{"alt":7,"src":56},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/07/08/a21cc2daab0f4f56b80644f28f3f9c21.jpg",[],{"type":17,"tag":25,"props":59,"children":60},{},[61],{"type":17,"tag":54,"props":62,"children":64},{"alt":7,"src":63},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/07/08/b2d0360c80834def8db915f7d3ed47ae.jpg",[],{"type":17,"tag":25,"props":66,"children":67},{},[68],{"type":23,"value":69},"图文音三模态相互转换与生成",{"type":17,"tag":18,"props":71,"children":73},{"id":72},"多层次多任务自监督预训练学习",[74],{"type":23,"value":72},{"type":17,"tag":25,"props":76,"children":77},{},[78],{"type":23,"value":79},"自动化所提出的三模态预训练模型由单模态编码器、跨模态编码器和跨模态解码器构成。针对图文音三模态数据，我们提出三级预训练自监督学习方式：词条级别（Token-level）、模态级别（Modality-level）以及样本级别（Sample-level）
。具体包括：",{"type":17,"tag":25,"props":81,"children":82},{},[83,89],{"type":17,"tag":84,"props":85,"children":86},"strong",{},[87],{"type":23,"value":88},"（1）词条级别(Token-level)学习",{"type":23,"value":90},"：（a）文本掩码建模(Masked Language Modeling)：随机掩盖一些文本单词，需要模型根据上下文预测被掩盖的单词是什么；（b）视觉掩码建模(Masked Vision Modeling)：随机掩盖一些图像区域，让模型预测被掩盖的区域；（c）语音掩码建模(Masked Audio Modeling)：随机掩盖一些语音词条(token)，模型需要预测被掩盖的词条(token)是什么。",{"type":17,"tag":25,"props":92,"children":93},{},[94,99],{"type":17,"tag":84,"props":95,"children":96},{},[97],{"type":23,"value":98},"（2）模态级别(Modality-level)学习",{"type":23,"value":100},"：包括文本重构和图像重构两个任务，分别学习重构输入文本和图像。团队引入模态级别掩码(Modality-Level Masking)机制随机地掩盖一个模态信息，使得模型需要根据其他模态信息对当前模态进行重构，从而能够进行下游的跨模态生成任务。这个机制也带来另一个好处：它使模型不仅能够处理三模态输入，也能处理两模态输入，从而适应下游的两模态任务。",{"type":17,"tag":25,"props":102,"children":103},{},[104,109],{"type":17,"tag":84,"props":105,"children":106},{},[107],{"type":23,"value":108},"（3）样本级别(Sample-level)学习",{"type":23,"value":110},"：该预训练任务通过对每个样本随机地替换三种模态信息中的一种或两种，让模型来预测被替换的是哪些模态。",{"type":17,"tag":18,"props":112,"children":114},{"id":113},"多维度自动混合并行极简训练",[115],{"type":23,"value":113},{"type":17,"tag":25,"props":117,"children":118},{},[119],{"type":23,"value":120},"训练多模态大模型时，用户需综合考虑模型参数量、计算量、计算类型、集群带宽拓扑和样本数量等因素才能设计出性能较优的并行切分策略；在实现模型算法之外，还需要编写大量并行切分和通信代码。",{"type":17,"tag":25,"props":122,"children":123},{},[124],{"type":23,"value":125},"MindSpore是业界首个支持全自动并行的AI计算框架，从如下维度进行多模态模型的加速训练：（1）MindSpore同时使用数据并行、算子级模型并行、Pipeline模型并行、优化器模型并行、异构并行、重计算、高效内存复用等多维度、全种类的分布式并行策略；（2）依托多种类、多维度的并行策略，原创集群拓扑感知的多维度自动混合并行，实现超大模型自动切分，取得了比人工编写切分策略更优的结果，显著提升集群加速能力；（3）基于多维混合自动并行技术，原创新的DNN分布式并行编程范式，实现一行代码完成串行算法到并行算法的切换，使得开发者可以专注算法的研究。",{"type":17,"tag":25,"props":127,"children":128},{},[129],{"type":23,"value":130},"基于上述优势，MindSpore为复杂的多模态大模型提供了极好的训练加速能力，同时也极大减少了系统性能优化的代价，大大缩短了代码开发、调试和训练的周期。",{"type":17,"tag":18,"props":132,"children":134},{"id":133},"实验结果",[135],{"type":23,"value":133},{"type":17,"tag":25,"props":137,"children":138},{},[139],{"type":23,"value":140},"自动化所主要采用Open 
Images数据集作为预训练数据，该数据集包含图像、文本与音频数据。此外还额外使用了两模态数据，如Conceptual Caption图文数据集、Visual Genome图文数据集等。当加入额外的两模态数据时，这些两模态与三模态数据被随机混合进行训练。",{"type":17,"tag":25,"props":142,"children":143},{},[144],{"type":23,"value":145},"自动化所主要进行了以下两方面的实验验证：",{"type":17,"tag":25,"props":147,"children":148},{},[149,154],{"type":17,"tag":84,"props":150,"children":151},{},[152],{"type":23,"value":153},"（1）图文音三模态关联编码与相互生成性能",{"type":23,"value":155},"：分别在多模态融合的图像分类、任意两模态的相互检索以及语音识别任务中，与常规全监督方法进行了性能比较，均取得了显著的性能提升。其中在多模态融合的图像分类任务中，与常规全监督的Resnet101网络模型相比，性能提升5%；加入语音模态信息能够明显提升以文搜图的性能，验证了联合建模视觉-文本-语音三模态信息的必要性。",{"type":17,"tag":25,"props":157,"children":158},{},[159,164],{"type":17,"tag":84,"props":160,"children":161},{},[162],{"type":23,"value":163},"（2）多模态下游任务性能",{"type":23,"value":165},"：分别在跨模态检索、视觉问答与图像语义描述任务中，与",{"type":17,"tag":25,"props":167,"children":168},{},[169],{"type":23,"value":170},"当前最新的图文两模态预训练模型进行了性能比较，在补充图文两模态数据参与预训练后，取得了具有竞争力甚至更好的实验性能。",{"type":17,"tag":25,"props":172,"children":173},{},[174,178,180,184],{"type":17,"tag":54,"props":175,"children":177},{"alt":7,"src":176},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/07/08/a90d9adfd00f49a9b407c5606a47e8db.jpg",[],{"type":23,"value":179}," ",{"type":17,"tag":54,"props":181,"children":183},{"alt":7,"src":182},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/07/08/880b7c6a84954110b857b49b07acfd0b.jpg",[],{"type":23,"value":185}," 
以图生音示例（短视频）",{"type":17,"tag":25,"props":187,"children":188},{},[189,193,194],{"type":17,"tag":54,"props":190,"children":192},{"alt":7,"src":191},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/07/08/3a781f0d62164b09b83f69cddca55175.jpg",[],{"type":23,"value":179},{"type":17,"tag":54,"props":195,"children":197},{"alt":7,"src":196},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/07/08/42d98e6d512c4961a620829ec702e4d4.jpg",[],{"type":17,"tag":25,"props":199,"children":200},{},[201],{"type":23,"value":202},"以音生图示例（短视频）",{"type":17,"tag":18,"props":204,"children":206},{"id":205},"总结",[207],{"type":23,"value":205},{"type":17,"tag":25,"props":209,"children":210},{},[211],{"type":23,"value":212},"三模态预训练模型的提出将改变当前单一模型对应单一任务的人工智能研发范式，三模态图文音的统一语义表达将大幅提升文本、语音、图像和视频等领域的基础任务性能，并在多模态内容的理解、搜索、推荐和问答，语音识别和合成，人机交互和无人驾驶等商业应用中具有潜力巨大的市场价值。",{"type":17,"tag":25,"props":214,"children":215},{},[216],{"type":23,"value":217},"“大数据+大模型+多模态”多任务统一学习将引领技术发展的潮流，中科院自动化所所长徐波将在2021世界人工智能大会（WAIC）昇腾人工智能高峰论坛上介绍跨模态通用人工智能平台，更多信息敬请关注。",{"type":17,"tag":25,"props":219,"children":220},{},[221,226,228,237],{"type":17,"tag":84,"props":222,"children":223},{},[224],{"type":23,"value":225},"文章转自",{"type":23,"value":227},"：",{"type":17,"tag":229,"props":230,"children":234},"a",{"href":231,"rel":232},"https://www.toutiao.com/i6982017549063979558/?tt_from=weixin&utm_campaign=client_share&wxshare_count=7&timestamp=1625632239&app=news_article&utm_source=weixin&utm_medium=toutiao_android&use_new_=1&req_id=202107071230380101351571951E02DF98&share_token=963cfed7-6279-464f-8f8f-a5863944df95&group_id=6982017549063979558",[233],"nofollow",[235],{"type":23,"value":236},"https://www.toutiao.com/i6982017549063979558/?tt_from=weixin&utm_campaign=client_share&wxshare_count=7&timestamp=1625632239&app=news_article&utm_source=weixin&utm_medium=toutiao_android&use_new_style=1&req_id=202107071230380101351571951E02DF98&share_token=963cfed7-6279-464f-8f8f-a5863944df95&group_id=698201754906
9558",{"type":23,"value":238},"",{"title":7,"searchDepth":240,"depth":240,"links":241},4,[],"markdown","content:news:zh:642.md","content","news/zh/642.md","news/zh/642","md",1776506093150]