[{"data":1,"prerenderedAt":867},["ShallowReactive",2],{"content-query-YvCJKCfuKh":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":861,"_id":862,"_source":863,"_file":864,"_stem":865,"_extension":866},"/technology-blogs/zh/3539","zh",false,"","开源之夏系列 | 基于MindSpore的BitsAndBytes量化框架实现","开源之夏，是由中国科学院软件研究所发起，专为高校学生精心打造的活动。旨在鼓励广大学子积极参与开源软件的开发与维护，推动优秀开源软件社区的蓬勃发展。","2024-12-19","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2025/01/08/c9f493629ce84ec991f153305d2fd435.png","technology-blogs","实践",{"type":15,"children":16,"toc":858},"root",[17,25,30,35,43,52,57,62,75,80,85,90,98,103,111,116,124,137,145,153,170,175,182,187,192,197,202,207,215,223,228,235,240,248,256,283,291,296,301,308,313,320,325,330,337,342,347,354,359,364,369,374,379,387,392,397,405,413,418,425,430,435,442,447,452,457,465,470,475,483,488,493,500,505,510,517,522,527,534,539,544,549,554,561,566,571,578,583,588,596,604,609,616,621,626,634,642,647,654,659,666,671,676,681,688,693,698,703,710,715,720,728,741,746,751,756,770,778,783,788,793,798,803,808,813,818,823,828,833,838,843,848,853],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"开源之夏系列-基于mindspore的bitsandbytes量化框架实现",[23],{"type":24,"value":8},"text",{"type":18,"tag":26,"props":27,"children":28},"p",{},[29],{"type":24,"value":9},{"type":18,"tag":26,"props":31,"children":32},{},[33],{"type":24,"value":34},"目前，开源之夏2024已圆满结项！在本届开源之夏中，不少开发者跟随昇思MindSpore一起，在开源的世界里畅游，成功完成项目任务。在此，昇思 MindSpore 开源社区邀请了开源之夏的开发者们，分享他们在本次活动中的宝贵经验与心得。我们希望通过这些精彩的项目经历和实战技巧，能够激发更多创意火花，帮助大家提升技术能力。本文为昇思MindSpore 开源之夏项目经验分享系列第2篇。",{"type":18,"tag":26,"props":36,"children":37},{},[38],{"type":18,"tag":39,"props":40,"children":42},"img",{"alt":7,"src":41},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/1299d20ab1244bdaa7314eba162da949.png",[],{"type":18,"tag":26,"props":44,"children":45},{},[46],{"type":18,"tag":47,"props":48,"children":49},"strong",{},[50],{"type":24,"value":51},"项目基本介绍",{"type":18,"tag":26,"props":53,"children":54},{},[55],{"type":24,"value":56},"1、项目名称：基于MindSpore的BitsAndBytes量化框架实现",{"type":18,"tag":26,"props":58,"children":59},{},[60],{"type":24,"value":61},"2、项目导师：Candyhong",{"type":18,"tag":26,"props":63,"children":64},{},[65,67],{"type":24,"value":66},"3、项目链接：",{"type":18,"tag":68,"props":69,"children":73},"a",{"href":70,"rel":71},"https://summer-ospp.ac.cn/org/prodetail/24c6d0486?list=org&navpage=org",[72],"nofollow",[74],{"type":24,"value":70},{"type":18,"tag":26,"props":76,"children":77},{},[78],{"type":24,"value":79},"4、项目描述：在大模型时代，算法对计算机存储和算力的要求与日俱增，导致模型部署的成本也相应地成倍增加。量化、剪枝、蒸馏、神经架构搜索等方法是模型轻量化的常用方法，目的都是为了降低计算成本，提升计算性能，其中模型量化技术把模型中的高精度运算（比如FP32）替换为低精度运算（如INT8、INT4、FP4、NF4等），并通过插入反量化节点、量化感知训练等方法使量化过程中的精度损失尽可能更少，大大提升了特别是在端侧的显存压力，提高了模型推理的性能。",{"type":18,"tag":26,"props":81,"children":82},{},[83],{"type":24,"value":84},"BitsAndBytes library（以下简称为“bnb”）是一个十分经典且常用的量化库，很早就被Hugging Face的Transformers套件所集成。它是一个封装CUDA自定义函数的轻量级Python wrapper，特别是8位优化器，矩阵乘法（LLM.int8()）以及8位和4位量化函数。该库包括用于8位和4位操作的量化原语，bitsandbytes.nn.Linear8bitLt和bitsandbytes.nn.Linear4bit以及bitsandbytes.optim优化器模块。",{"type":18,"tag":26,"props":86,"children":87},{},[88],{"type":24,"value":89},"项目要求基于昇思MindSpore自定义算子的GPU版本量化算子开发以及BitsAndBytes库对标量化能力实现。促进MindSpore NLP的量化特性的支持，提高套件的易用性，形成一个面向MindSpore NLP模型推理的量化库MindSpore 
The project calls for developing GPU quantization operators based on MindSpore's Custom operator mechanism and reproducing the quantization capabilities of the BitsAndBytes library. The goal is to bring quantization support to MindSpore NLP, improve the toolkit's usability, and produce a quantization library for MindSpore NLP model inference, MindSpore BNB, so that LLM inference becomes practical even on consumer GPUs.

**Why I Chose This Project**

As an undergraduate I learned about the MindSpore framework through Huawei's "Intelligent Base" (智能基座) program. In October 2022 I became the lead of the Intelligent Base association at Chongqing University and gained a deeper view of Huawei's ICT ecosystem. Later, through the MindSpore open source internship, I joined the MindSpore community, contributing mainly to the MindSpore NLP toolkit, and became a core member of the MindSpore NLP SIG. Under the guidance of Lu Yufeng (吕昱峰), the maintainer of MindSpore NLP, I completed top-down migrations of several LLMs, model fine-tuning work, and the porting and tuning of the Flash Attention operator for MindSpore NLP's GPU backend. MindSpore NLP also needed its GPU backend extended with quantization capabilities; since my Flash Attention work had already given me experience developing MindSpore Custom operators, and my current research relates to model quantization, I applied for this OSPP project.

**Project Approach**

The goal is to build a bnb-based quantization library usable from MindSpore, designed as a quantization interface integrated into MindSpore NLP, so that a loaded model can be post-training quantized (PTQ) directly to improve inference performance. Because bnb is tightly coupled to PyTorch, the migration has to start from the CUDA operators and work upward layer by layer, resolving the runtime incompatibilities caused by the two frameworks' differing designs. The core of the work is to migrate the many CUDA operators implemented in bnb into the MindSpore NLP toolkit through MindSpore's Custom operator template; these operators form the core of the MindSpore BNB quantization library.

**Project Analysis**

To keep the migration on track and its results trustworthy, three things come before any coding: 1. Read the paper behind bnb's quantization method (https://arxiv.org/abs/2208.07339) to understand the underlying principle, so it is clear what is being built and which parts of the code matter most. 2. Survey MindSpore's current support for quantization-related operations, since the bnb port has to build on it. 3. Compile bnb from source locally to understand the build process, the code structure, and the toolchain it uses.

**01 Technical Principles**

The quantization method in bnb focuses on handling outliers: activations often contain values with clearly larger magnitudes, and these outliers tend to concentrate in a small number of feature dimensions, the so-called outlier features. Take the matrix product of an activation $X \in \mathbb{R}^{s \times h}$ and a weight $W \in \mathbb{R}^{h \times o}$ as an example: the feature dimension is $h$. Whether quantization is per-token (for the activation $X$: one scaling factor per row) or per-channel (for the weight $W$: one scaling factor per column), it is strongly distorted by these outliers. Since only a few features contain outliers, the idea of LLM.int8() is to pull those features out and compute them separately, quantizing only the remaining features.

LLM.int8() is a quantization method built on mixed-precision decomposition. It first decomposes the matrices, applying 8-bit (vector-wise) quantization to the vast majority of weights and activations, while the few dimensions holding outlier features are kept in 16-bit and multiplied at high precision. The computation is illustrated below:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/9a2b857f7c0b4677bccf92bd825c5a4a.png)

Figure 1: LLM.int8() computation diagram [1]

The computation proceeds in three steps (see the sketch after this list):

1. From the input hidden states, extract the outlier features by column.

2. Run the outlier features through an FP16 matrix multiplication; quantize the non-outlier features and run an INT8 matrix multiplication.

3. Dequantize the non-outlier result and add it to the outlier result to obtain the final FP16 output.
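A minimal NumPy sketch of this decomposition, assuming the fixed outlier threshold of 6.0 used in the LLM.int8() paper and plain absmax quantization:

```python
import numpy as np

def llm_int8_matmul(x, w, threshold=6.0):
    """Mixed-precision decomposition sketch: x is (s, h), w is (h, o)."""
    # 1. Feature columns of x holding any value above the threshold are outliers.
    outlier_cols = np.any(np.abs(x) > threshold, axis=0)

    # 2a. High-precision matmul for the outlier features only.
    out_hi = x[:, outlier_cols] @ w[outlier_cols, :]

    # 2b. Absmax-quantize the rest: per-row scales for x, per-column scales for w.
    x_sub, w_sub = x[:, ~outlier_cols], w[~outlier_cols, :]
    sx = np.maximum(np.abs(x_sub).max(axis=1, keepdims=True), 1e-8) / 127.0  # (s, 1)
    sw = np.maximum(np.abs(w_sub).max(axis=0, keepdims=True), 1e-8) / 127.0  # (1, o)
    x8 = np.round(x_sub / sx).astype(np.int8)
    w8 = np.round(w_sub / sw).astype(np.int8)

    # 3. INT8 matmul (accumulated in int32), dequantize, add the fp16 part.
    out_lo = (x8.astype(np.int32) @ w8.astype(np.int32)) * (sx * sw)
    return out_hi + out_lo
```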
**02 MindSpore Low-Precision Quantization Support**

As the official documentation shows, MindSpore does not yet offer 4-bit data types, so this project focuses on migrating bnb's 8-bit quantization capability to MindSpore. The CUDA operators in bnb are connected to the MindSpore framework through Custom operators; on top of them, Linear8bitLt is implemented, along with a matching Linear-replacement strategy in the MindSpore NLP toolkit, so that large models can be compressed and served with this quantization method simply and conveniently, improving MindSpore's operator compatibility on GPU.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/b90abc97558c4653890b43df792a6319.png)

Figure 2: mindspore.dtype [2]

**03 Analyzing the bnb Project**

Before migrating anything, the first step is to work out the exact flow of bnb's quantization logic from model level down to operator level, along with the deployment details. The detailed build steps for bnb are documented on the Hugging Face site: https://huggingface.co/docs/bitsandbytes/main/en/installation. Two extra things need attention: check whether the project's dependency versions conflict with MindSpore NLP's requirements, and drop every PyTorch-related dependency. In practice, the latest bnb version at development time, 0.43.2.dev0 (the latest source build is now 0.44.2.dev0), requires Python >= 3.10; otherwise dependency installation fails. The core build process carries over to the MindSpore BNB build script almost unchanged, only the installed dependencies differ; see https://github.com/hypertseng/mindbnb/blob/main/scripts/build.sh and https://github.com/hypertseng/mindbnb/blob/main/requirements-dev.txt.

**Implementation Approach**

With bnb installed and deployed, the way to understand the quantization process is to start from the transformers from_pretrained interface and step through it with breakpoints, observing how bnb quantizes the weights while the pretrained model is loaded. The process breaks down into three parts:

1. Model loading: the replace_with_bnb_linear function replaces the model's linear layers with bnb's new low-bit linear layer (Linear8bitLt). To keep inference results stable, the lm_head layer is kept in high precision. The figures below show where the function is called; transformers consolidates quantization strategies and layer-replacement methods into a dedicated quantizer module, of which bnb is one available method. (A sketch of this replacement logic follows Figure 4.)

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/6204d800da014a7eb660dd01335ad120.png)

Figure 3: The replace_with_bnb_linear function

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/96e85f59fcda44e1bac43de4c3f69cf1.png)

Figure 4: The _replace_with_bnb_linear function
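A simplified sketch of what such a replacement pass could look like on the MindSpore side, assuming a Linear8bitLt cell that mirrors nn.Dense's constructor (the mindbnb import path is hypothetical):

```python
import mindspore.nn as nn
from mindbnb import Linear8bitLt  # hypothetical import path

def replace_with_bnb_linear(model: nn.Cell, skip=("lm_head",)):
    """Recursively swap nn.Dense layers for 8-bit ones; keep lm_head in high precision."""
    for name, cell in model.name_cells().items():
        if name in skip:
            continue
        if isinstance(cell, nn.Dense):
            new_cell = Linear8bitLt(
                cell.in_channels, cell.out_channels,
                has_bias=cell.has_bias,   # assumption: same signature as nn.Dense
            )
            setattr(model, name, new_cell)   # re-registers the child cell
        else:
            replace_with_bnb_linear(cell, skip)
    return model
```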
",{"type":18,"tag":68,"props":278,"children":281},{"href":279,"rel":280},"https://github.com/hypertseng/mindbnb/blob/main/requirements-dev.txt",[72],[282],{"type":24,"value":279},{"type":18,"tag":26,"props":284,"children":285},{},[286],{"type":18,"tag":47,"props":287,"children":288},{},[289],{"type":24,"value":290},"项目实现思路",{"type":18,"tag":26,"props":292,"children":293},{},[294],{"type":24,"value":295},"安装部署好bnb之后，为了探究量化的过程，需要从transformers的from_pretrained接口出发，可以打断点调试，观察bnb是如何在加载预训练模型的过程中完成对权重的量化的。主要分为以下三部分：",{"type":18,"tag":26,"props":297,"children":298},{},[299],{"type":24,"value":300},"1、加载模型，在replace_with_bnb_linear函数中将模型中的linear层替换为bnb中实现的新的低比特linear层(Linear8bitLt)，为了使模型推理结果更稳定，这里会保持lm_head层为高精度。下图是具体的函数调用位置，transformers将量化策略、layer替换方法等集成到了一个单独的quantizer模块中，bnb为可用的量化方法中的一种。",{"type":18,"tag":26,"props":302,"children":303},{},[304],{"type":18,"tag":39,"props":305,"children":307},{"alt":7,"src":306},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/6204d800da014a7eb660dd01335ad120.png",[],{"type":18,"tag":26,"props":309,"children":310},{},[311],{"type":24,"value":312},"图 3 replace_with_bnb_linear函数",{"type":18,"tag":26,"props":314,"children":315},{},[316],{"type":18,"tag":39,"props":317,"children":319},{"alt":7,"src":318},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/96e85f59fcda44e1bac43de4c3f69cf1.png",[],{"type":18,"tag":26,"props":321,"children":322},{},[323],{"type":24,"value":324},"图 4 _replace_with_bnb_linear函数",{"type":18,"tag":26,"props":326,"children":327},{},[328],{"type":24,"value":329},"2、在加载预训练权重时，对高精度权重进行量化，将量化后INT8的权重给到module，bnb新实现了一个Int8Params类，重载了to方法，于是在to(device)时，to方法中会调用.cuda()函数，并在里面实现量化权重的计算。",{"type":18,"tag":26,"props":331,"children":332},{},[333],{"type":18,"tag":39,"props":334,"children":336},{"alt":7,"src":335},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/f0800d113aa946feae036a726d0ba4cb.png",[],{"type":18,"tag":26,"props":338,"children":339},{},[340],{"type":24,"value":341},"图 5 cuda函数中实现量化计算",{"type":18,"tag":26,"props":343,"children":344},{},[345],{"type":24,"value":346},"3、推理时使用量化算子进行高效的低精度计算，如下：",{"type":18,"tag":26,"props":348,"children":349},{},[350],{"type":18,"tag":39,"props":351,"children":353},{"alt":7,"src":352},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/b783a855c22f42b290b4d1622094ad66.png",[],{"type":18,"tag":26,"props":355,"children":356},{},[357],{"type":24,"value":358},"图 6 double_quant量化",{"type":18,"tag":26,"props":360,"children":361},{},[362],{"type":24,"value":363},"这里以cdouble_rowcol_quant为例，从Linear8bitLt layer到这个算子的函数调用栈为：Linear8bitLt=>bnb.matmul=>MatMul8bitLt=>double_quant=>lib.cdouble_rowcol_quant，这是一种由上至下的执行路径，这样的执行路径还有多条，但在开发过程中可以先解决其中一条路径，总结出方法经验，再采用DFS式的开发逐个击破。",{"type":18,"tag":26,"props":365,"children":366},{},[367],{"type":24,"value":368},"以上就是项目迁移和开发的基本思路。在实际开发过程中还有许多需要处理的问题。最重要最关键的是算子的接入问题。",{"type":18,"tag":26,"props":370,"children":371},{},[372],{"type":24,"value":373},"由于bnb中本身包含了大量量化过程中会使用的高度优化的CUDA核函数，并把它们封装成算子，又由于算子被进一步封装为python文件调用的接口，写在pythonInterface.cpp里，bnb项目构建时会将众多算子根据接口文件中的实现打包，根据当前是否使用GPU和系统安装的CUDA工具链的版本，预编译生成一个动态链接库文件。原本bnb是通过ctypes库来加载dll动态链接库，便可直接访问pythonInterface.cpp中定义的c++函数。问题就在于如何尽可能复用底层CUDA算子代码，并在Python侧提供一种高效的算子调用方式。",{"type":18,"tag":26,"props":375,"children":376},{},[377],{"type":24,"value":378},"接上CUDA算子之后，再逐层向上修改因为框架差异而导致的不兼容代码，比如大量关于device的操作、关于GPU设备信息的获取操作、因算子调用方式不同导致参数传递不匹配等种种问题，修改多个层次的代码，从算子到量化算法，到低精度乘法layer，再到低精度的Linear 
Once the CUDA operators are connected, the next step is to work upward fixing the code that breaks because of framework differences: the many device-related operations, the queries for GPU device information, parameter-passing mismatches caused by the differing operator-invocation mechanisms, and so on. After modifying code at every level, from the operators, to the quantization algorithm, to the low-precision matmul layer, to the low-precision Linear layer, and finally defining a suitable layer-replacement method, the complete LLM quantization flow is essentially in place.

**Final Solution**

In the MindSpore BNB implementation, the principle for most operators is to reuse whatever works as-is and avoid touching this complex code wherever possible. For operator integration, the chosen route is the Custom operator of the AOT type: the path to the .so file is passed into the Custom template, and the Custom interface loads the C++ functions from the dynamic library, after which the quantization operators can be called like ordinary Python functions.

Many compatibility problems came up during development; a few representative ones are listed here together with the MindSpore BNB solutions.

**01 How can a MindSpore Tensor be passed to a CUDA operator correctly?**

Take the get_colrow_absmax function as an example. It invokes the operator as follows:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/7d6407f2ae014198936c076763f82a3a.png)

Figure 7: The get_colrow_absmax function

Here lib is obtained by loading the .so file produced by precompiling the CUDA operators, and ptrA, ptrRowStats, ptrColStats, and ptrNnzrows are pointers obtained through the get_ptr function, which needs to extract the data pointer from a Tensor, as shown below:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/dd3491e9a20048e29da57d061617f002.png)

Figure 8: The get_ptr function

The cget_col_row_stats called in Figure 7 is a C++-to-Python interface function whose main job is to pass parameters to the CUDA function and convert data types; it calls the getColRowStats operator, which in turn launches the corresponding kernel.

The biggest problem is that MindSpore Tensors simply do not expose a data pointer; there is no data_ptr()-style interface. Two lines of attack:

**1. Solve it in Python**

a) Get the pointer to the MindSpore Tensor's data directly? (Not currently possible.)

b) Go through NumPy: call asnumpy() to get a NumPy array, take the NumPy array's pointer, pass that pointer into the operator, and after the computation run the reverse conversion to get a MindSpore Tensor back. (Costs speed and memory.)

**2. Solve it in C++**

MindSpore's Custom operator scheme passes all parameters as void * in an array; inside the operator, static_cast converts them to the concrete kernel input types. For this problem, MindSpore Tensors can therefore be passed to the Custom operator directly from Python, with the type conversion happening inside the Custom operator.
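On the Python side, registering such an AOT Custom operator and calling it with Tensors looks roughly like this (a sketch: the .so path, symbol name, and the single illustrative output are assumptions, not the project's exact interface):

```python
import numpy as np
import mindspore as ms
from mindspore import ops, Tensor

ms.set_context(device_target="GPU")

# AOT Custom operator: "path/to/library.so:exported_symbol". The symbol must
# follow MindSpore's AOT convention, receiving inputs/outputs as void* and
# casting internally, which is exactly what the custom_* wrappers do.
get_col_row_stats = ops.Custom(
    "./libmindbnb.so:custom_cget_col_row_stats",   # illustrative path/symbol
    out_shape=lambda a_shape: (a_shape[0],),        # simplified: per-row stats only
    out_dtype=ms.float32,
    func_type="aot",
)

a = Tensor(np.random.randn(4, 8), ms.float16)
row_stats = get_col_row_stats(a)   # Tensors go in directly; no pointers needed
```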
Following this approach, the code in pythonInterface.cpp is rewritten in MindSpore Custom-operator style, from the operator interfaces (such as the cget_col_row_stats function) all the way down to ops.cu (the file defining the CUDA operators), and the way the program loads the dynamic library lib changes as well. (The workload is enormous and full of details, and the changes can easily trigger other unforeseen problems.)

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/894c63f10b9c4084a4f8094478967b81.png)

Figure 9: Files in the project related to the operators and their interfaces

Trying solution 1 first, get_ptr was modified as follows:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/32613a54bc454aaeaa60f529ea666807.png)

Figure 10: The modified get_ptr function

The operator then failed with a memory access error:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/fa39c2c53c934c69abd0ebf45d674bfe.png)

Figure 11: Memory access error

The next attempt converted the Tensor to a NumPy array outside the function, leaving get_ptr to do only the NumPy-array-to-C-pointer conversion. That no longer raised an error, but after the operator ran, the array values were unchanged.

The likely cause is that the pointer obtained through NumPy points to a copied array, so the original array is never modified; and if the NumPy array has already been destroyed or freed by the time its pointer ptr is handed to the CUDA operation, the result can be an illegal memory access.
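The pitfall can be reproduced in isolation; a sketch of the problematic pattern (whether and when asnumpy() copies is exactly the point at issue here):

```python
import ctypes
import numpy as np
from mindspore import Tensor

t = Tensor(np.zeros(4, np.float32))

def get_ptr(tensor):
    arr = tensor.asnumpy()                      # host-side array; lifetime tied to `arr`
    return arr.ctypes.data_as(ctypes.c_void_p)  # pointer into that host copy

ptr = get_ptr(t)
# `arr` went out of scope above, so `ptr` may dangle -> illegal access; and even
# while alive it points at host memory (a copy), so a CUDA kernel writing through
# it can never update the original tensor's device buffer.
```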
The final solution still uses the Custom operator. Because the Custom operator definition template passes Tensors as void *, one more layer of abstraction is wrapped around the original ops operators, following the Custom operator implementation conventions. The figure below shows custom_cget_col_row_stats, the Custom operator implemented for getColRowStats:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/4c37337ecb834684b48c2586abf2e460.png)

Figure 12: Example Custom operator implementation

For comparison, the original cget_col_row_stats executes like this:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/fc1885d743614d54b5b1bc98930b941c.png)

Figure 13: Example bnb operator interface function

Inside the custom operator, the input Tensors are received successfully and all parameters travel as void *, with data-type handling done inside the function. With the earlier problem solved, the cget_col_row_stats-style interfaces were uniformly reimplemented and replaced by custom_cget_col_row_stats-style interfaces, and every custom operator remains accessible by loading the dynamic library.

**02 The cublas Context problem**

The igemmlt function obtains the corresponding Context object for a device. Context is a class defined in the CUDA operator header ops.cuh whose main purpose is to create a cublas handle in its constructor; but in MindSpore, devices are scheduled by the framework, and there is no "device" concept exposed.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/92dee023a49a471eb011132368ee9ef7.png)

Figure 14: igemmlt obtaining the cublas Context

The solution is to handle the handle at the operator-interface layer: create it in memory when first needed, and let every function that needs a handle share that one Context object.

**03 Performance problems**

After essentially all the coding was done and accuracy was verified against the PyTorch implementation, a sizable performance gap remained. Below are pie charts of operator and kernel time from profiling with MindSpore Insight.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/ecca2f5ec1a845779954bb135665557f.png)

Figure 15: Operator time breakdown

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/f581dc237ba0425aa2c8afab016966c4.png)

Figure 16: Kernel time breakdown

Tracing the full computation showed long idle gaps between operator executions; three root causes were eventually identified:

1. During development, torch.empty() had been replaced with np.empty(), which executes poorly here; it was later replaced with the efficient implementation below:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/35a193ad5233418396b33bb927f0c150.png)

Figure 17: Efficient empty-tensor implementation

2. To obtain GPU information, nvidia-smi was being run repeatedly through subprocess, which is slow. The fix is to run it once and record the information in memory, removing the cost of repeatedly spawning nvidia-smi. (A caching sketch follows Figure 18.)

3. Regularly recurring asnumpy calls were stalling the pipeline. A careful search showed the program never calls asnumpy directly; the cause, shown below, is that the custom operator returns a Tensor, so has_error is a single-element Tensor, and comparing its value directly triggers an implicit asnumpy call. asnumpy copies the value from the GPU back to the CPU, forcing the pipeline to wait. The fix is to add a has_error parameter to the custom operator and, inside the operator, assign the CUDA operator's return value to has_error, so that both sides of the comparison have the same data type and the implicit asnumpy call disappears.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/12/20/d5d94729a8924155be3cdbe3e9c5ec53.png)

Figure 18: Implicit asnumpy call
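The second fix amounts to memoizing the query; a minimal sketch:

```python
import subprocess
from functools import lru_cache

@lru_cache(maxsize=None)
def gpu_info() -> str:
    """Run nvidia-smi once; every later call returns the cached result."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()
```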
With these fixes, performance close to the PyTorch implementation is achievable.

**Project Summary**

This project delivered a BitsAndBytes quantization library for MindSpore. The work covered writing custom operators, migrating the framework-dependent code, and implementing an easy-to-use quantization interface, with accuracy verified against the reference at the operator, layer, and model levels. After loading a model, a user only needs to pass it to the quant_8bit() function I implemented to get efficient 8-bit quantization. The source code, build scripts, and demo scripts are all open source at https://github.com/hypertseng/mindbnb.
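Intended usage is then a one-liner after loading; a sketch (the checkpoint name and import paths are illustrative, see the repository's demo scripts for the exact API):

```python
from mindnlp.transformers import AutoModelForCausalLM
from mindbnb import quant_8bit   # illustrative import path

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = quant_8bit(model)        # swaps eligible Dense layers for Linear8bitLt
# ...run inference as usual; weights are now stored and multiplied in INT8.
```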
I am very glad to have taken part in OSPP 2024; it let me keep building open source development experience while working on a project I genuinely enjoy. Thanks to MindSpore and the Institute of Software, Chinese Academy of Sciences, for this valuable hands-on platform!

References:

[1] Dettmers T, Lewis M, Belkada Y, et al. LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. arXiv, 2022.

[2] MindSpore. mindspore.dtype. https://www.mindspore.cn/docs/zh-CN/master/api_python/mindspore/mindspore.dtype.html#mindspore.dtype

**Interview**

**MindSpore:** Please briefly introduce yourself and your open source experience.

**Zeng Zixuan:** Hello everyone, I am Zeng Zixuan, a first-year master's student at the Institute of Software, Chinese Academy of Sciences. I previously led the MindSpore Intelligent Base association at Chongqing University, and my open source work has mainly gone into the MindSpore NLP toolkit in the MindSpore community.

**MindSpore:** How did you first hear about MindSpore, and why did you choose it?

**Zeng Zixuan:** The first time was in 2021, when I was a freshman. That May, the Intelligent Base program held the "DevRun Intelligent Base Kunpeng & Ascend Campus Tour" at Chongqing University, which introduced me to the Chinese technology ecosystem represented by Kunpeng and Ascend, and to the MindSpore framework for the first time. Later I met MindSpore again in Intelligent Base partner courses such as Fundamentals of Machine Learning, Deep Learning and Big Data, and Natural Language Processing. In course assignments I followed the community's technical documentation for my first hands-on practice with MindSpore and found it easy to pick up, powerful, and convenient for AI application development.

Later, organizing several MindSpore-related activities in the Intelligent Base club gave me a broader and deeper understanding of the open source community. The MindSpore community is very active, with open source activities at many levels of difficulty, such as MSG meetups, the open source internship, OSPP, and various competitions, which offer both project experience and tangible rewards. So I joined the MindSpore NLP open source activities and, with MindSpore NLP as my platform, went on to take part in the MindSpore open source internship and OSPP.

**MindSpore:** At the MindSpore Summit on December 14 you were named an Outstanding Developer, which reflects your many contributions to MindSpore and its open source community. Could you briefly describe them?

**Zeng Zixuan:** All of my work in the MindSpore community has been closely tied to the MindSpore NLP toolkit. As a core member of the MindSpore NLP SIG, I completed the migration of four large models, developed a Falcon fine-tuning case study, and integrated and tuned the Flash Attention operator for MindSpore NLP's GPU backend, achieving more than a 2x inference speedup. For OSPP this year I migrated the bitsandbytes quantization library to MindSpore NLP, which reduces the memory needed for model inference, improves computational performance, and makes edge-side deployment more feasible.

**MindSpore:** How have your contributions tied into your work and studies? Anything particularly memorable?

**Zeng Zixuan:** I folded my open source internship work into my undergraduate thesis, combining model migration, model application, and optimization into one project, and the thesis received an excellent grade. The quantization-library migration completed during OSPP also relates to my current research direction. Overall, my open source experience and my day-to-day study reinforce each other: practice sharpens my engineering skills and my sense of where industry demand is heading, which in turn shapes my research direction, and the skills I picked up in the MindSpore community's open source activities will serve me well in future work and research.

What impressed me most happened during the OSPP project. The bitsandbytes library is highly coupled internally, and at first I could not find a good solution to the operator-integration problem. After working out the project's own build and execution logic, and then discussing with a MindSpore evangelist and the engineer responsible for MindSpore's Custom operator interface, we arrived at the basic solution; I then smoothly ported one core top-to-bottom execution path, which laid the foundation for completing the project.

**MindSpore:** Any deep impressions, reflections, or gains from participating in the MindSpore open source community?

**Zeng Zixuan:** The gains have been substantial; in summary, three things. First, growth in practical ability, especially in analyzing and debugging large codebases and in solving concrete technical problems. Second, a broader horizon: in the community I learned a great deal of cutting-edge AI knowledge and the latest industry trends, which shows me what everyone is working on and guides what I should do next. Third, opportunities and resources: the community has given me many chances to practice; the open source internship and OSPP are excellent platforms, and being invited to the summit benefited my growth beyond just the technical level. The development experience and honors earned in the community also helped a great deal when applying for internships and postgraduate recommendation.

**MindSpore:** Anything in the MindSpore open source community you would especially recommend?

**Zeng Zixuan:** Let me put in a word for MindSpore NLP. It is an excellent open source NLP development toolkit with a rich model zoo and interfaces aligned with Hugging Face Transformers, so it is quick to learn and easy to use, with plenty of application cases and hands-on tutorials. Developers interested in NLP and large model technology are welcome to join the MindSpore NLP SIG, grow with it, and work toward becoming a leading SIG member and an outstanding NLP engineer. The community also runs the MindSpore open technical course, whose content is rich, current, and comprehensive, a good way to learn about large models, with many worked cases of developing them on MindSpore; combined with the community's open source activities and competitions, you can learn by doing and grow remarkably fast.

**MindSpore:** As someone who has been through it all, is there anything you would say to your past self, to younger students, or to developers just joining MindSpore?

**Zeng Zixuan:** The MindSpore open source community is a great platform that brings together developers interested in AI, where people discuss technical problems, compete and build projects together, and communicate well; you can find the resources you need there, including free compute. For younger friends, especially those with less accumulated background in the field, I suggest joining the community, being bold, and learning while practicing: start with lower-barrier competitions or activities; once you are comfortable with basic MindSpore development, apply for the open source internship and pick tasks that interest you; and once you can independently develop a project submodule, apply for an OSPP project. This is an efficient growth path. Along the way you will meet like-minded friends, build connections in the technical community, and gain things more precious than material rewards. That has been my path, and I share it as encouragement. Stay proactive and pursue your goals bravely; the MindSpore community always welcomes you.