[{"data":1,"prerenderedAt":886},["ShallowReactive",2],{"content-query-twdT02hOgg":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":880,"_id":881,"_source":882,"_file":883,"_stem":884,"_extension":885},"/technology-blogs/zh/3606","zh",false,"","什么是大模型解码策略？基于MindSpore NLP的Llama3分布式推理Decoding策略实践","作者：鲍迪       来源：昇思学习打卡营第五期·NLP特辑","2025-02-12","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2025/02/14/ebd8321b0ba24f12ac4641b486a25fe7.png","technology-blogs","开发者分享",{"type":15,"children":16,"toc":869},"root",[17,25,35,40,48,56,61,69,74,82,87,92,97,102,107,115,120,125,130,135,143,148,159,164,176,181,189,194,201,206,216,224,236,241,248,258,263,273,281,286,291,296,301,309,316,327,331,339,347,352,360,368,373,381,389,397,401,413,418,463,468,476,481,486,494,502,507,515,520,528,532,537,545,553,564,569,574,579,587,592,597,602,607,612,620,630,638,648,656,664,668,673,681,686,691,699,710,718,723,728,736,740,745,753,761,767,772,777,785,790,795,800,805,810,815,820,825,833,837,841,849,857],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"什么是大模型解码策略基于mindspore-nlp的llama3分布式推理decoding策略实践",[23],{"type":24,"value":8},"text",{"type":18,"tag":26,"props":27,"children":28},"p",{},[29],{"type":18,"tag":30,"props":31,"children":32},"strong",{},[33],{"type":24,"value":34},"作者：鲍迪 
来源：昇思学习打卡营第五期·NLP特辑",{"type":18,"tag":26,"props":36,"children":37},{},[38],{"type":24,"value":39},"《昇思学习打卡营第五期·NLP特辑》的直播和打卡已全部完成。由打卡营优秀学员们输出的学习笔记同样值得我们研读和学习。本期技术文章由打卡营学员鲍迪输出并投稿。如果您也在本期打卡营中获益良多，欢迎私聊我们投稿。",{"type":18,"tag":26,"props":41,"children":42},{},[43],{"type":18,"tag":30,"props":44,"children":45},{},[46],{"type":24,"value":47},"大模型解码策略",{"type":18,"tag":26,"props":49,"children":50},{},[51],{"type":18,"tag":52,"props":53,"children":55},"img",{"alt":7,"src":54},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2025/02/14/1e8e752f3d9840aaaf77a9bb206fe6e9.png",[],{"type":18,"tag":26,"props":57,"children":58},{},[59],{"type":24,"value":60},"在大模型（如GPT、BERT等）中，分词器（Tokenizer）将输入文本转换为tokens后，模型内部会通过一系列复杂的计算步骤处理这些tokens，最终生成输出。要生成文本，通常是以自回归方式一次生成一个token来完成的。",{"type":18,"tag":26,"props":62,"children":63},{},[64],{"type":18,"tag":30,"props":65,"children":66},{},[67],{"type":24,"value":68},"哪个embedding用于预测下一个token？",{"type":18,"tag":26,"props":70,"children":71},{},[72],{"type":24,"value":73},"当一段长句子作为输入被编码后，每个token都会转换成对应的embedding向量。在预测下一个token时，并不是单独使用某个token的embedding，而是考虑整个输入序列的上下文信息。基于Transformer架构的模型会通过自注意力机制（self-attention mechanism）来计算输入序列中所有token之间的关系，从而为每个位置生成一个综合了全局信息的表示。这个过程允许模型在预测下一个token时，考虑到之前的所有tokens，而不仅仅是直接前一个token的embedding。",{"type":18,"tag":26,"props":75,"children":76},{},[77],{"type":18,"tag":30,"props":78,"children":79},{},[80],{"type":24,"value":81},"预测到下一个token的概率后，LLM模型是怎么生成完整回答的？",{"type":18,"tag":26,"props":83,"children":84},{},[85],{"type":24,"value":86},"一旦模型预测出下一个token的概率分布，有几种策略可以用来选择具体的token以形成最终的回答：",{"type":18,"tag":26,"props":88,"children":89},{},[90],{"type":24,"value":91},"1. 贪婪搜索：选择概率最高的token作为下一个token。",{"type":18,"tag":26,"props":93,"children":94},{},[95],{"type":24,"value":96},"2. 束搜索（Beam Search）：不是只跟踪最有可能的一个序列，而是同时跟踪多个候选序列，然后从中选择最优解。",{"type":18,"tag":26,"props":98,"children":99},{},[100],{"type":24,"value":101},"3. 采样方法：如温度调整后的softmax采样、核采样（nucleus sampling）等，这些方法允许从概率分布中随机抽取下一个token，引入一定的随机性以增加多样性。",{"type":18,"tag":26,"props":103,"children":104},{},[105],{"type":24,"value":106},"通过重复上述过程，每次都将新生成的token添加到当前序列中，并将其再次馈入模型以预测下一个token，直到达到预定长度或遇到结束标记为止。",{"type":18,"tag":26,"props":108,"children":109},{},[110],{"type":18,"tag":30,"props":111,"children":112},{},[113],{"type":24,"value":114},"为什么相同的输入会有不同的回答，这种随机性是如何实现的？",{"type":18,"tag":26,"props":116,"children":117},{},[118],{"type":24,"value":119},"相同的输入能产生不同回答的原因主要在于模型在生成过程中采用的策略和算法：",{"type":18,"tag":26,"props":121,"children":122},{},[123],{"type":24,"value":124},"• 随机性：即使对于相同的输入，如果使用像核采样这样的策略，模型也会根据给定的概率分布随机选择下一个token，而不是总是选择概率最高的token。这增加了输出的多样性和创造性。",{"type":18,"tag":26,"props":126,"children":127},{},[128],{"type":24,"value":129},"• 温度参数：在softmax函数中使用的“温度”参数会影响概率分布的平滑度。较低的温度值会使模型更倾向于选择高概率的词，而较高的温度值则使分布更加均匀，从而增加随机性。",{"type":18,"tag":26,"props":131,"children":132},{},[133],{"type":24,"value":134},"• 初始状态的不同：如果模型包括一些形式的随机初始化（例如，在对话系统中的用户特定的上下文），即使是相同的输入也可能因为初始状态的不同而导致不同的输出。",{"type":18,"tag":26,"props":136,"children":137},{},[138],{"type":18,"tag":30,"props":139,"children":140},{},[141],{"type":24,"value":142},"实践代码",{"type":18,"tag":26,"props":144,"children":145},{},[146],{"type":24,"value":147},"本期实践代码仓：",{"type":18,"tag":26,"props":149,"children":150},{},[151],{"type":18,"tag":152,"props":153,"children":157},"a",{"href":154,"rel":155},"https://github.com/mindspore-lab/mindnlp/tree/master/llm/inference/llama3",[156],"nofollow",[158],{"type":24,"value":154},{"type":18,"tag":26,"props":160,"children":161},{},[162],{"type":24,"value":163},"本期实践借助分布式并行进行llama3推理的代码，来讲解解码策略的实现和调整。解码策略在现行的大模型代码中进行调整是非常方便的，无论是哪篇代码都只需要在model.generate函数中调整代码即可。",{"type":18,"tag":26,"props":165,"children":166},{},[167,169],{"type":24,"value":168},"在model.generate函数中通过入参的调整就可以实现多种解码策略的调整和切换，详细可以参考这篇博文（",{"type":18,"tag":152,"props":170,"children":173},{"href":171,"rel":172},"https://blog.csdn.net/qq%5C_16555103/article/details/136805147
",[156],[174],{"type":24,"value":175},"https://blog.csdn.net/qq\\_16555103/article/details/136805147）",{"type":18,"tag":26,"props":177,"children":178},{},[179],{"type":24,"value":180},"那么我们暂且卖个关子，把今天要介绍的五种解码策略放到后面去讲，先来过一过分布式并行的代码。",{"type":18,"tag":26,"props":182,"children":183},{},[184],{"type":18,"tag":30,"props":185,"children":186},{},[187],{"type":24,"value":188},"MindSpore分布式并行",{"type":18,"tag":26,"props":190,"children":191},{},[192],{"type":24,"value":193},"目前GPU、Ascend和CPU分别支持多种启动方式。主要有msrun、动态组网、mpirun和rank table四种方式：",{"type":18,"tag":26,"props":195,"children":196},{},[197],{"type":18,"tag":52,"props":198,"children":200},{"alt":7,"src":199},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2025/02/14/258da3bd80984eb6aeb039b541ecda2e.png",[],{"type":18,"tag":26,"props":202,"children":203},{},[204],{"type":24,"value":205},"分布式并行启动方式-MindSpore master文档",{"type":18,"tag":26,"props":207,"children":208},{},[209],{"type":18,"tag":152,"props":210,"children":213},{"href":211,"rel":212},"https://www.mindspore.cn/docs/zh-CN/r2.4.1/model%5C_train/parallel/startup%5C_method.html",[156],[214],{"type":24,"value":215},"https://www.mindspore.cn/docs/zh-CN/r2.4.1/model\\_train/parallel/startup\\_method.html",{"type":18,"tag":26,"props":217,"children":218},{},[219],{"type":18,"tag":30,"props":220,"children":221},{},[222],{"type":24,"value":223},"msrun",{"type":18,"tag":26,"props":225,"children":226},{},[227,229],{"type":24,"value":228},"msrun是动态组网（",{"type":18,"tag":152,"props":230,"children":233},{"href":231,"rel":232},"https://www.mindspore.cn/docs/zh-CN/r2.4.1/model%5C_train/parallel/dynamic%5C_cluster.html",[156],[234],{"type":24,"value":235},"https://www.mindspore.cn/docs/zh-CN/r2.4.1/model\\_train/parallel/dynamic\\_cluster.html）启动方式的封装，用户可使用msrun以单个命令行指令的方式在各节点拉起多进程分布式任务，并且无需手动设置动态组网环境变量（https://www.mindspore.cn/docs/zh-CN/r2.4.1/model\\_train/parallel/dynamic\\_cluster.html）。msrun同时支持Ascend，GPU和CPU后端。与动态组网启动方式一样，msrun无需依赖第三方库以及配置文件。",{"type":18,"tag":26,"props":237,"children":238},{},[239],{"type":24,"value":240},"msrun在用户安装MindSpore后即可使用，可使用指令msrun --help查看支持参数。msrun支持图模式以及PyNative模式。",{"type":18,"tag":242,"props":243,"children":245},"h3",{"id":244},"msrun启动",[246],{"type":24,"value":247},"msrun启动：",{"type":18,"tag":26,"props":249,"children":250},{},[251],{"type":18,"tag":152,"props":252,"children":255},{"href":253,"rel":254},"https://www.mindspore.cn/docs/zh-CN/r2.4.1/model%5C_train/parallel/msrun%5C_launcher.html",[156],[256],{"type":24,"value":257},"https://www.mindspore.cn/docs/zh-CN/r2.4.1/model\\_train/parallel/msrun\\_launcher.html",{"type":18,"tag":26,"props":259,"children":260},{},[261],{"type":24,"value":262},"使用方式",{"type":18,"tag":264,"props":265,"children":267},"pre",{"code":266},"# msrun is a MindSpore defined launcher for multi-process parallel execution, which can get best performance, you can use it by the command below:\nmsrun --worker_num=2 --local_worker_num=2 --master_port=8118 --join=True run_llama3_distributed.py\n\n# if you use Ascend NPU with Kunpeng CPU, you should bind-core to get better performance\nmsrun --worker_num=2 --local_worker_num=2 --master_port=8118 --join=True --bind_core=True run_llama3_distributed.py\n",[268],{"type":18,"tag":269,"props":270,"children":271},"code",{"__ignoreMap":7},[272],{"type":24,"value":266},{"type":18,"tag":26,"props":274,"children":275},{},[276],{"type":18,"tag":30,"props":277,"children":278},{},[279],{"type":24,"value":280},"mpirun",{"type":18,"tag":26,"props":282,"children":283},{},[284],{"type":24,"value":285},"OpenMPI（Open Message Passing Interface）是一个开源的、高性能的消息传递编程库，用于并行计算和分布式内存计算，它通过在不同进程之间传递消息来实现并行计算，适用于许多科学计算和机器学习任务。使用OpenMPI进行并行训练是一种通用的在计算集群或多核机器上利用并行计算资源来加速训练过程的方法。OpenMPI在分布式训练的场景中，起到在Host侧同步数据以及进程间组网的功能。",{"type":18,"tag":26,"props":287,"children":288},{},[289],{"type":24,"value":290},"与rank table启动不同的是，在Ascend硬件平台上通过OpenMPI的mpirun命令运行脚本，用户不需要配置RANK_TABLE_FILE环境变量。",{"type":18,"tag":26,"props":292,"children":293},{},[294],{"type":24,"value":295},"相关命令：",{"type":18,"tag":26,"props":297,"children":298},{},[299],{"type":24,"value":300},"mpirun启动命令如下，其中DEVICE_NUM是所在机器的GPU数量：",{"type":18,"tag":264,"props":302,"children":304},{"code":303},"mpirun -n DEVICE_NUM python net.py\n",[305],{"type":18,"tag":269,"props":306,"children":307},{"__ignoreMap":7},[308],{"type":24,"value":303},{"type":18,"tag":26,"props":310,"children":311},{},[312],{"type":18,"tag":52,"props":313,"children":315},{"alt":7,"src":314},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2025/02/14/47649d1e5c8c44e2a559417118f7092c.png",[],{"type":18,"tag":26,"props":317,"children":318},{},[319,321],{"type":24,"value":320},"mpirun文档：",{"type":18,"tag":152,"props":322,"children":325},{"href":323,"rel":324},"https://www.open-mpi.org/doc/current/man1/mpirun.1.php",[156],[326],{"type":24,"value":323},{"type":18,"tag":26,"props":328,"children":329},{},[330],{"type":24,"value":262},{"type":18,"tag":264,"props":332,"children":334},{"code":333},"# mpirun controls several aspects of program execution in Open MPI, you can use it by the command below:\nmpirun -n 2 python run_llama3_distributed.py\n\n# if you use Ascend NPU with Kunpeng CPU, you should bind-core to get better performance:\nmpirun --bind-to numa -n 2 python run_llama3_distributed.py\n",[335],{"type":18,"tag":269,"props":336,"children":337},{"__ignoreMap":7},[338],{"type":24,"value":333},{"type":18,"tag":26,"props":340,"children":341},{},[342],{"type":18,"tag":30,"props":343,"children":344},{},[345],{"type":24,"value":346},"llama3推理",{"type":18,"tag":26,"props":348,"children":349},{},[350],{"type":24,"value":351},"单卡环境下运行run_llama3.py，使用MindSpore构建自动分词器和自动因果推理模型，搭建llama3模型的对话脚本。",{"type":18,"tag":264,"props":353,"children":355},{"code":354},"# run_llama3.py\nimport mindspore\nfrom mindnlp.transformers import AutoTokenizer, AutoModelForCausalLM\n\nmodel_id = \"LLM-Research/Meta-Llama-3-8B-Instruct\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, mirror='modelscope')\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_id,\n    ms_dtype=mindspore.float16,\n    mirror='modelscope'\n)\n\nmessages = [\n    {\"role\": \"system\", \"content\": \"You are a pirate chatbot who always responds in pirate speak!\"},\n    {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n    messages,\n    add_generation_prompt=True,\n    return_tensors=\"ms\"\n)\n\nterminators = [\n    tokenizer.eos_token_id,\n    tokenizer.convert_tokens_to_ids(\"\u003C|eot_id|>\")\n]\n\noutputs = model.generate(\n    input_ids,\n    max_new_tokens=20,\n    eos_token_id=terminators,\n    do_sample=False,\n    # do_sample=True,\n    # temperature=0.6,\n    # top_p=0.9,\n)\nresponse = outputs[0][input_ids.shape[-1]:]\nprint(tokenizer.decode(response, 
skip_special_tokens=True))\n",[356],{"type":18,"tag":269,"props":357,"children":358},{"__ignoreMap":7},[359],{"type":24,"value":354},{"type":18,"tag":26,"props":361,"children":362},{},[363],{"type":18,"tag":30,"props":364,"children":365},{},[366],{"type":24,"value":367},"llama3并行推理",{"type":18,"tag":26,"props":369,"children":370},{},[371],{"type":24,"value":372},"分布式并行脚本，只是在上面脚本的基础上导入了mindspore.communication库，并使用 init初始化通信。在Terminal中输入即可启动脚本，进行分布式推理。",{"type":18,"tag":264,"props":374,"children":376},{"code":375},"# Terminal\nmsrun --worker_num=2 --local_worker_num=2 --master_port=8118 --join=True run_llama3_distributed.py\n",[377],{"type":18,"tag":269,"props":378,"children":379},{"__ignoreMap":7},[380],{"type":24,"value":375},{"type":18,"tag":264,"props":382,"children":384},{"code":383},"# run_llama3_distributed.py\nimport mindspore\nfrom mindspore.communication import init\nfrom mindnlp.transformers import AutoTokenizer, AutoModelForCausalLM\n\nmodel_id = \"LLM-Research/Meta-Llama-3-8B-Instruct\"\n\ninit()\ntokenizer = AutoTokenizer.from_pretrained(model_id, mirror='modelscope')\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_id,\n    ms_dtype=mindspore.float16,\n    mirror='modelscope',\n    device_map=\"auto\"\n)\n\nmessages = [\n    {\"role\": \"system\", \"content\": \"You are a pirate chatbot who always responds in pirate speak!\"},\n    {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n\ninput_ids = tokenizer.apply_chat_template(\n    messages,\n    add_generation_prompt=True,\n    return_tensors=\"ms\"\n)\n\nterminators = [\n    tokenizer.eos_token_id,\n    tokenizer.convert_tokens_to_ids(\"\u003C|eot_id|>\")\n]\n\noutputs = model.generate(\n    input_ids,\n    max_new_tokens=100,\n    eos_token_id=terminators,\n    do_sample=True,\n    temperature=0.6,\n    top_p=0.9,\n)\nresponse = outputs[0][input_ids.shape[-1]:]\nprint(tokenizer.decode(response, 
skip_special_tokens=True))\n",[385],{"type":18,"tag":269,"props":386,"children":387},{"__ignoreMap":7},[388],{"type":24,"value":383},{"type":18,"tag":26,"props":390,"children":391},{},[392],{"type":18,"tag":30,"props":393,"children":394},{},[395],{"type":24,"value":396},"解码策略",{"type":18,"tag":398,"props":399,"children":400},"h2",{"id":7},[],{"type":18,"tag":242,"props":402,"children":404},{"id":403},"greedy-select-the-best-probable-token-at-a-time贪心一次选择最可能的标记",[405],{"type":18,"tag":30,"props":406,"children":407},{},[408],{"type":18,"tag":30,"props":409,"children":410},{},[411],{"type":24,"value":412},"Greedy: Select the best probable token at a time｜贪心：一次选择最可能的标记",{"type":18,"tag":26,"props":414,"children":415},{},[416],{"type":24,"value":417},"每次都选择概率最高的token作为下一个token。这种方法简单直接，但可能导致生成的文本缺乏多样性。",{"type":18,"tag":26,"props":419,"children":420},{},[421,427,429,435,436,442,443,449,450,456,457],{"type":18,"tag":269,"props":422,"children":424},{"className":423},[],[425],{"type":24,"value":426},"generation_output = model.generate(",{"type":24,"value":428},"    ",{"type":18,"tag":269,"props":430,"children":432},{"className":431},[],[433],{"type":24,"value":434},"input_ids=input_ids,",{"type":24,"value":428},{"type":18,"tag":269,"props":437,"children":439},{"className":438},[],[440],{"type":24,"value":441},"num_beams = 1,",{"type":24,"value":428},{"type":18,"tag":269,"props":444,"children":446},{"className":445},[],[447],{"type":24,"value":448},"do_sample = False,",{"type":24,"value":428},{"type":18,"tag":269,"props":451,"children":453},{"className":452},[],[454],{"type":24,"value":455},"return_dict_in_generate=True,",{"type":24,"value":428},{"type":18,"tag":269,"props":458,"children":460},{"className":459},[],[461],{"type":24,"value":462},"max_new_tokens=3,)",{"type":18,"tag":26,"props":464,"children":465},{},[466],{"type":24,"value":467},"Greedy Search（贪心搜索）是生成任务中最简单的解码策略，每次选择当前概率最高的词作为输出。虽然计算效率高，但可能陷入局部最优，导致生成结果不够多样或质量不高。以下是 Greedy Search 策略下 `model.generate()` 
的常用参数设置建议。",{"type":18,"tag":264,"props":469,"children":471},{"code":470},"### 1. **核心参数**\n#### (1) `max_length`\n- **作用**：生成序列的最大长度。\n- **建议**：根据任务需求设置，避免过长或过短。\n  - 例如，文本摘要任务可以设置为 50-100，机器翻译任务可以设置为 100-150。\n\n#### (2) `min_length`\n- **作用**：生成序列的最小长度。\n- **建议**：避免生成过短的结果，例如在摘要任务中可以设置为 10-20。\n\n#### (3) `num_return_sequences`\n- **作用**：返回的最终序列数量。\n- **建议**：Greedy Search 每次只能生成一个最优序列，因此通常设置为 1。\n\n### 2. **多样性控制参数**\n#### (1) `no_repeat_ngram_size`\n- **作用**：避免生成重复的 n-gram。\n- **建议**：通常设置为 2 或 3，避免生成重复内容。\n\n#### (2) `temperature`\n- **作用**：控制生成结果的随机性。\n  - `temperature \u003C 1`：更确定性的输出。\n  - `temperature > 1`：更多样化的输出。\n- **建议**：Greedy Search 通常设置为 1.0（默认值），因为贪心策略本身是确定性的。\n\n#### (3) `top_k` 和 `top_p`\n- **作用**：限制候选词的范围。\n- **建议**：Greedy Search 通常不使用这些参数，因为每次只选择概率最高的词。\n\n### 3. **其他参数**\n#### (1) `do_sample`\n- **作用**：是否使用采样策略。\n- **建议**：Greedy Search 是确定性的，因此通常设置为 `False`。\n\n#### (2) `early_stopping`\n- **作用**：是否在生成满足条件的序列时提前停止。\n- **建议**：可以设置为 `True`，以节省计算资源。\n",[472],{"type":18,"tag":269,"props":473,"children":474},{"__ignoreMap":7},[475],{"type":24,"value":470},{"type":18,"tag":26,"props":477,"children":478},{},[479],{"type":24,"value":480},"示例代码",{"type":18,"tag":26,"props":482,"children":483},{},[484],{"type":24,"value":485},"以下是一个典型的 Greedy Search 参数设置示例：",{"type":18,"tag":264,"props":487,"children":489},{"code":488},"output = model.generate(\n    input_ids,                      # 输入序列\n    max_length=50,                  # 最大长度\n    min_length=10,                  # 最小长度\n    num_return_sequences=1,         # 返回的序列数量（Greedy Search 只能返回 1 个）\n    no_repeat_ngram_size=2,         # 避免重复 n-gram\n    early_stopping=True,            # 提前停止\n    do_sample=False,                # 不使用采样\n    temperature=1.0,                # 温度参数（默认值）\n)\n",[490],{"type":18,"tag":269,"props":491,"children":492},{"__ignoreMap":7},[493],{"type":24,"value":488},{"type":18,"tag":26,"props":495,"children":496},{},[497],{"type":18,"tag":30,"props":498,"children":499},{},[500],{"type":24,"value":501},"Beam Search: Select the best probable response｜波束搜索：选择最可能的响应",{"type":18,"tag":26,"props":503,"children":504},{},[505],{"type":24,"value":506},"beam 在物理领域有光束的含义，简单理解为“多个”就行。它总是保留当前概率最大的num_beams个序列（注意不是词，考虑了词组、短语）（如果num_beams=1就变成贪心解码了）。束搜索策略(beam search)本质上也是一个贪心解码策略 (greedy decoding) ，所以无法保证一定可以得到最好的结果，生成的文本通常较为简洁和连贯，但缺乏多样性。",{"type":18,"tag":264,"props":508,"children":510},{"code":509},"generation_output = model.generate(\n    input_ids=input_ids,\n    num_beams = 3,\n    num_return_sequences=3,\n    return_dict_in_generate=True,\n    max_new_tokens=3,\n)\n",[511],{"type":18,"tag":269,"props":512,"children":513},{"__ignoreMap":7},[514],{"type":24,"value":509},{"type":18,"tag":26,"props":516,"children":517},{},[518],{"type":24,"value":519},"**Beam Search** 是一种常用的解码策略，广泛应用于机器翻译、文本生成等任务。它通过保留多个候选序列（由 `num_beams` 控制）来优化生成结果的质量。以下是 Beam Search 策略下 `model.generate()` 的常用参数设置建议。",{"type":18,"tag":264,"props":521,"children":523},{"code":522},"### 1. **核心参数**\n#### (1) `num_beams`\n- **作用**：控制每次保留的候选序列数量。\n- **建议**：\n  - 小任务：**5-10**\n  - 中等任务：**10-20**\n  - 大任务：**20-50**\n- **注意**：\n  - 值越大，生成质量通常越高，但计算开销也越大。\n  - 值越小，生成速度越快，但可能陷入局部最优。\n\n#### (2) `max_length`\n- **作用**：生成序列的最大长度。\n- **建议**：根据任务需求设置，避免过长或过短。\n  - 例如，文本摘要任务可以设置为 **50-100**，机器翻译任务可以设置为 **100-150**。\n\n#### (3) `min_length`\n- **作用**：生成序列的最小长度。\n- **建议**：避免生成过短的结果，例如在摘要任务中可以设置为 **10-20**。\n\n#### (4) `num_return_sequences`\n- **作用**：返回的最终序列数量。\n- **建议**：根据需求设置，通常设置为 **1-5**。\n- **注意**：`num_return_sequences` 必须小于或等于 `num_beams`。\n\n---\n\n### 2. 
**多样性控制参数**\n#### (1) `length_penalty`\n- **作用**：控制生成序列长度的偏好。\n  - `length_penalty > 1`：鼓励生成长序列。\n  - `length_penalty \u003C 1`：鼓励生成短序列。\n  - `length_penalty = 1`：无偏好。\n- **建议**：通常设置为 **0.6-2.0**，默认值为 **1.0**。\n\n#### (2) `no_repeat_ngram_size`\n- **作用**：避免生成重复的 n-gram。\n- **建议**：通常设置为 **2 或 3**，避免生成重复内容。\n\n---\n\n### 3. **其他参数**\n#### (1) `early_stopping`\n- **作用**：是否在生成满足条件的序列时提前停止。\n- **建议**：\n  - 如果希望生成固定数量的序列，设置为 **`False`**。\n  - 如果希望尽早停止以节省计算资源，设置为 **`True`**。\n\n#### (2) `do_sample`\n- **作用**：是否使用采样策略。\n- **建议**：Beam Search 通常设置为 **`False`**。\n",[524],{"type":18,"tag":269,"props":525,"children":526},{"__ignoreMap":7},[527],{"type":24,"value":522},{"type":18,"tag":26,"props":529,"children":530},{},[531],{"type":24,"value":480},{"type":18,"tag":26,"props":533,"children":534},{},[535],{"type":24,"value":536},"以下是一个典型的 Beam Search 参数设置示例：",{"type":18,"tag":264,"props":538,"children":540},{"code":539},"output = model.generate(\n    input_ids,                      # 输入序列\n    max_length=50,                  # 最大长度\n    min_length=10,                  # 最小长度\n    num_beams=10,                   # Beam 数量\n    length_penalty=1.2,             # 长度偏好\n    no_repeat_ngram_size=2,         # 避免重复 n-gram\n    num_return_sequences=3,         # 返回的序列数量\n    early_stopping=True,            # 提前停止\n    do_sample=False,                # 不使用采样\n)\n",[541],{"type":18,"tag":269,"props":542,"children":543},{"__ignoreMap":7},[544],{"type":24,"value":539},{"type":18,"tag":264,"props":546,"children":548},{"code":547},"参数设置总结\n| 参数                  | 建议值/范围                  | 说明                                                                 |\n|-----------------------|------------------------------|----------------------------------------------------------------------|\n| `num_beams`           | 5-50                         | 控制候选序列数量，值越大生成质量越高，但计算开销越大。               |\n| `max_length`          | 任务相关（如 50-150）        | 生成序列的最大长度，根据任务需求设置。                               |\n| 
`min_length`          | 任务相关（如 10-20）         | 生成序列的最小长度，避免生成过短的结果。                             |\n| `num_return_sequences`| 1-5                          | 返回的序列数量，根据需求设置。                                       |\n| `length_penalty`      | 0.6-2.0                      | 控制生成序列长度的偏好，值越大越鼓励生成长序列。                     |\n| `no_repeat_ngram_size`| 2 或 3                       | 避免生成重复的 n-gram。                                              |\n| `early_stopping`      | `True` 或 `False`            | 是否提前停止生成，以节省计算资源。                                   |\n| `do_sample`           | `False`                      | Beam Search 通常不使用采样策略。                                     |\n",[549],{"type":18,"tag":269,"props":550,"children":551},{"__ignoreMap":7},[552],{"type":24,"value":547},{"type":18,"tag":26,"props":554,"children":555},{},[556],{"type":18,"tag":30,"props":557,"children":558},{},[559],{"type":18,"tag":30,"props":560,"children":561},{},[562],{"type":24,"value":563},"Temperature: Shrink or enlarge probabilities｜温度：缩小或扩大概率",{"type":18,"tag":26,"props":565,"children":566},{},[567],{"type":24,"value":568},"Temperature 是控制生成文本多样性和确定性的重要参数，通过调整 Temperature，可以根据任务需求优化生成文本的质量。",{"type":18,"tag":26,"props":570,"children":571},{},[572],{"type":24,"value":573},"• 低温度：生成结果更加确定性，适合需要准确性的任务。",{"type":18,"tag":26,"props":575,"children":576},{},[577],{"type":24,"value":578},"• 高温度：生成结果更加多样性，适合需要创造性的任务。",{"type":18,"tag":264,"props":580,"children":582},{"code":581},"import torch\nlogits = torch.tensor([[0.5, 1.2, -1.0, 0.1]])\n# 无temperature\nprobs = torch.softmax(logits, dim=-1)\n# temperature low 0.5\nprobs_low = torch.softmax(logits / 0.5, dim=-1)\n# temperature high 2\nprobs_high = torch.softmax(logits / 2, 
dim=-1)\n\nprint(f\"probs:{probs}\")\nprint(f\"probs_low:{probs_low}\")\nprint(f\"probs_high:{probs_high}\")\n",[583],{"type":18,"tag":269,"props":584,"children":585},{"__ignoreMap":7},[586],{"type":24,"value":581},{"type":18,"tag":26,"props":588,"children":589},{},[590],{"type":24,"value":591},"logits 是一个包含四个元素的向量 [0.5, 1.2, -1.0, 0.1]，这些值表示每个token的原始得分。",{"type":18,"tag":26,"props":593,"children":594},{},[595],{"type":24,"value":596},"计算过程：",{"type":18,"tag":26,"props":598,"children":599},{},[600],{"type":24,"value":601},"• 无temperature：使用 softmax 函数将 logits 转换为概率分布，axis=-1 表示沿着最后一个轴进行操作。",{"type":18,"tag":26,"props":603,"children":604},{},[605],{"type":24,"value":606},"• temperature降低0.5：将 logits 除以 0.5，然后应用 softmax，低温度会放大差异，使得高分值的token概率更高，低分值的token概率更低。",{"type":18,"tag":26,"props":608,"children":609},{},[610],{"type":24,"value":611},"• temperature提升2：将 logits 除以 2，然后应用 softmax，高温度会减小差异，使得所有token的概率更加均匀。",{"type":18,"tag":264,"props":613,"children":615},{"code":614},"probs:tensor([[0.2559, 0.5154, 0.0571, 0.1716]])\nprobs_low:tensor([[0.1800, 0.7301, 0.0090, 0.0809]])\nprobs_high:tensor([[0.2695, 0.3825, 0.1273, 0.2207]])\n",[616],{"type":18,"tag":269,"props":617,"children":618},{"__ignoreMap":7},[619],{"type":24,"value":614},{"type":18,"tag":26,"props":621,"children":622},{},[623,625],{"type":24,"value":624},"• 无temperature：",{"type":18,"tag":30,"props":626,"children":627},{},[628],{"type":24,"value":629},"概率分布为 [0.2559, 0.5154, 0.0571, 0.1716]。",{"type":18,"tag":26,"props":631,"children":632},{},[633],{"type":18,"tag":30,"props":634,"children":635},{},[636],{"type":24,"value":637},"• temperature降低0.5：概率分布为 [0.1800, 0.7301, 0.0090, 0.0809]，可以看到，高分值的token（如 1.2）的概率显著增加，而低分值的token（如 -1.0）的概率显著减少，因此生成的结果可能性会变少，集中在更高概率的结果中。",{"type":18,"tag":26,"props":639,"children":640},{},[641,643],{"type":24,"value":642},"• temperature提升2：概率分布为 [0.2695, 0.3825, 0.1273, 0.2207]，可以看到，所有token的概率变得更加均匀，",{"type":18,"tag":30,"props":644,"children":645},{},[646],{"type":24,"value":647},"因此生成的结果可能性会更多。",{"type":18,"tag":26,"props":649,"children":650},{},[651],{"type":18,"tag":30,"props":652,"children":653},{},[654],{"type":24,"value":655},"Temperature Sampling（温度采样）是一种基于概率分布的生成策略，通过调整温度参数（`temperature`）控制生成结果的随机性和多样性。相比于 Greedy Search 和 Beam Search，Temperature Sampling 更适合需要多样性和创造性的任务（如故事生成、对话生成等）。以下是 Temperature Sampling 策略下 `model.generate()` 的常用参数设置建议。",{"type":18,"tag":264,"props":657,"children":659},{"code":658},"### 1. **核心参数**\n#### (1) `temperature`\n- **作用**：控制生成结果的随机性。\n  - `temperature \u003C 1`：更确定性的输出，倾向于选择高概率的词。\n  - `temperature > 1`：更多样化的输出，倾向于选择低概率的词。\n  - `temperature = 1`：无偏好的原始概率分布。\n- **建议**：\n  - 需要高质量、确定性输出时，设置为 0.7-1.0。\n  - 需要多样性和创造性时，设置为 1.0-1.5。\n  - 避免设置过高（如 >1.5），否则可能导致生成结果不连贯。\n\n#### (2) `max_length`\n- **作用**：生成序列的最大长度。\n- **建议**：根据任务需求设置，避免过长或过短。\n  - 例如，文本摘要任务可以设置为 50-100，对话生成任务可以设置为 100-150。\n\n#### (3) `min_length`\n- **作用**：生成序列的最小长度。\n- **建议**：避免生成过短的结果，例如在摘要任务中可以设置为 10-20。\n\n#### (4) `num_return_sequences`\n- **作用**：返回的最终序列数量。\n- **建议**：根据需求设置，通常设置为 1-5。\n\n---\n\n### 2. **多样性控制参数**\n#### (1) `top_k`\n- **作用**：限制候选词的范围，仅从概率最高的 `top_k` 个词中采样。\n- **建议**：通常设置为 50-100，避免选择概率极低的词。\n\n#### (2) `top_p`（Nucleus Sampling）\n- **作用**：仅从累积概率超过 `top_p` 的词中采样。\n- **建议**：通常设置为 0.9-0.95，与 `top_k` 结合使用效果更好。\n\n#### (3) `no_repeat_ngram_size`\n- **作用**：避免生成重复的 n-gram。\n- **建议**：通常设置为 2 或 3，避免生成重复内容。\n\n---\n\n### 3. 
**其他参数**\n#### (1) `do_sample`\n- **作用**：是否使用采样策略。\n- **建议**：Temperature Sampling 必须设置为 `True`。\n\n#### (2) `early_stopping`\n- **作用**：是否在生成满足条件的序列时提前停止。\n- **建议**：可以设置为 `True`，以节省计算资源。\n",[660],{"type":18,"tag":269,"props":661,"children":662},{"__ignoreMap":7},[663],{"type":24,"value":658},{"type":18,"tag":26,"props":665,"children":666},{},[667],{"type":24,"value":480},{"type":18,"tag":26,"props":669,"children":670},{},[671],{"type":24,"value":672},"以下是一个典型的Temperature Sampling参数设置示例：",{"type":18,"tag":264,"props":674,"children":676},{"code":675},"output = model.generate(\n    input_ids,                      # 输入序列\n    max_length=50,                  # 最大长度\n    min_length=10,                  # 最小长度\n    do_sample=True,                 # 启用采样\n    temperature=0.9,                # 温度参数\n    top_k=50,                       # 限制候选词范围\n    top_p=0.95,                     # Nucleus Sampling\n    no_repeat_ngram_size=2,         # 避免重复 n-gram\n    num_return_sequences=3,         # 返回的序列数量\n    early_stopping=True,            # 提前停止\n)\n",[677],{"type":18,"tag":269,"props":678,"children":679},{"__ignoreMap":7},[680],{"type":24,"value":675},{"type":18,"tag":26,"props":682,"children":683},{},[684],{"type":24,"value":685},"Top-K Sampling: Select top probable K tokens｜Top-K 采样：选择最可能的 K 个 token",{"type":18,"tag":26,"props":687,"children":688},{},[689],{"type":24,"value":690},"Top-K Sampling 做法很简单，从概率最大的K个token中采样，避免稀奇古怪的输出。可以生成更多样化的文本，但可能导致一些不连贯的内容。",{"type":18,"tag":264,"props":692,"children":694},{"code":693},"import torch\nfilter_value = -float(\"Inf\")\ntop_k = 2\nprobs = torch.tensor([[0.2559, 0.5154, 0.0571, 0.1716]])\nindices_to_remove = probs \u003C torch.topk(probs, top_k)[0][..., -1, None]\nnew_probs = probs.masked_fill(indices_to_remove, filter_value)\nprint(\"new_probs:\", 
new_probs)\n",[695],{"type":18,"tag":269,"props":696,"children":697},{"__ignoreMap":7},[698],{"type":24,"value":693},{"type":18,"tag":26,"props":700,"children":701},{},[702],{"type":18,"tag":30,"props":703,"children":704},{},[705],{"type":18,"tag":30,"props":706,"children":707},{},[708],{"type":24,"value":709},"打印结果",{"type":18,"tag":264,"props":711,"children":713},{"code":712},"new_probs: tensor([[0.2559, 0.5154,   -inf,   -inf]])\n\n这个打印结果展示了经过 top-k 或 top-p 采样处理后的概率分布。这个张量的含义：\ntensor([[0.2559, 0.5154, -inf, -inf]])\n1. 格式说明：\n   - 这是一个形状为 [1, 4] 的二维张量\n   - 包含4个值，代表4个不同标记的概率分数\n2. 具体值的含义：\n   - `0.2559`: 第一个标记的概率约为25.59%\n   - `0.5154`: 第二个标记的概率约为51.54%\n   - `-inf`: 第三和第四个标记被过滤掉了（概率被设置为负无穷）\n   - 只保留了概率最高的两个选项，其他选项被屏蔽掉\n3. 为什么会有 `-inf`：\n   - 这通常是应用了 top-k 或 top-p 采样策略的结果\n   - `-inf` 表示这些标记在采样时会被完全排除\n   - 只有非 `-inf` 的标记（这里是前两个）会被考虑进行采样\n4. 实际效果：\n   - 在后续采样中，只会从概率为0.2559和0.5154的两个标记中选择\n   - 这样可以避免选到低概率或不合适的标记，提高生成质量\n这种处理方式是常见的文本生成策略，通过限制可选的标记数量来提高生成文本的质量。\n",[714],{"type":18,"tag":269,"props":715,"children":716},{"__ignoreMap":7},[717],{"type":24,"value":712},{"type":18,"tag":26,"props":719,"children":720},{},[721],{"type":24,"value":722},"Top-K Sampling是一种常用的生成策略，通过限制模型在生成每个词时仅从概率最高的 `k` 个候选词中选择，从而在生成质量和多样性之间取得平衡。",{"type":18,"tag":26,"props":724,"children":725},{},[726],{"type":24,"value":727},"以下是 Top-K Sampling 策略下 `model.generate()` 的常用参数设置建议。",{"type":18,"tag":264,"props":729,"children":731},{"code":730},"### 1. **核心参数**\n#### (1) `top_k`\n- **作用**：限制候选词的范围，仅从概率最高的 `k` 个词中采样。\n- **建议**：\n  - 小任务：**10-50**\n  - 中等任务：**50-100**\n  - 大任务：**100-200**\n- **注意**：\n  - `k` 值越小，生成结果越确定，但可能缺乏多样性。\n  - `k` 值越大，生成结果越多，但可能包含低质量的候选词。\n\n#### (2) `max_length`\n- **作用**：生成序列的最大长度。\n- **建议**：根据任务需求设置，避免过长或过短。\n  - 例如，文本摘要任务可以设置为 **50-100**，对话生成任务可以设置为 **100-150**。\n\n#### (3) `min_length`\n- **作用**：生成序列的最小长度。\n- **建议**：避免生成过短的结果，例如在摘要任务中可以设置为 **10-20**。\n\n#### (4) `num_return_sequences`\n- **作用**：返回的最终序列数量。\n- **建议**：根据需求设置，通常设置为 **1-5**。\n\n---\n\n### 2. 
**多样性控制参数**\n#### (1) `temperature`\n- **作用**：控制生成结果的随机性。\n  - `temperature \u003C 1`：更确定性的输出。\n  - `temperature > 1`：更多样化的输出。\n  - `temperature = 1`：无偏好的原始概率分布。\n- **建议**：通常设置为 **0.7-1.0**，与 `top_k` 结合使用效果更好。\n\n#### (2) `no_repeat_ngram_size`\n- **作用**：避免生成重复的 n-gram。\n- **建议**：通常设置为 **2 或 3**，避免生成重复内容。\n\n---\n\n### 3. **其他参数**\n#### (1) `do_sample`\n- **作用**：是否使用采样策略。\n- **建议**：Top-K Sampling 必须设置为 **`True`**。\n\n#### (2) `early_stopping`\n- **作用**：是否在生成满足条件的序列时提前停止。\n- **建议**：可以设置为 **`True`**，以节省计算资源。\n",[732],{"type":18,"tag":269,"props":733,"children":734},{"__ignoreMap":7},[735],{"type":24,"value":730},{"type":18,"tag":26,"props":737,"children":738},{},[739],{"type":24,"value":480},{"type":18,"tag":26,"props":741,"children":742},{},[743],{"type":24,"value":744},"以下是一个典型的 Top-K Sampling 参数设置示例：",{"type":18,"tag":264,"props":746,"children":748},{"code":747},"output = model.generate(\n    input_ids,                      # 输入序列\n    max_length=50,                  # 最大长度\n    min_length=10,                  # 最小长度\n    do_sample=True,                 # 启用采样\n    top_k=50,                       # 限制候选词范围\n    temperature=0.9,                # 温度参数\n    no_repeat_ngram_size=2,         # 避免重复 n-gram\n    num_return_sequences=3,         # 返回的序列数量\n    early_stopping=True,            # 提前停止\n)\n",[749],{"type":18,"tag":269,"props":750,"children":751},{"__ignoreMap":7},[752],{"type":24,"value":747},{"type":18,"tag":264,"props":754,"children":756},{"code":755},"参数设置总结\n| 参数                  | 建议值/范围                  | 说明                                                                 |\n|-----------------------|------------------------------|----------------------------------------------------------------------|\n| `top_k`               | 50-100                       | 限制候选词范围，避免选择概率极低的词。                               |\n| `max_length`          | 任务相关（如 50-150）        | 生成序列的最大长度，根据任务需求设置。                               |\n| `min_length`          | 任务相关（如 10-20）        
 | 生成序列的最小长度，避免生成过短的结果。                             |\n| `num_return_sequences`| 1-5                          | 返回的序列数量，根据需求设置。                                       |\n| `temperature`         | 0.7-1.0                      | 控制随机性，值越小越确定，值越大越多样。                             |\n| `no_repeat_ngram_size`| 2 或 3                       | 避免生成重复的 n-gram。                                              |\n| `do_sample`           | `True`                       | 启用采样策略。                                                       |\n| `early_stopping`      | `True` 或 `False`            | 是否提前停止生成，以节省计算资源。                                   |\n",[757],{"type":18,"tag":269,"props":758,"children":759},{"__ignoreMap":7},[760],{"type":24,"value":755},{"type":18,"tag":242,"props":762,"children":764},{"id":763},"nucleus-sampling-dynamically-choose-the-number-of-k-sort-of核采样top-p动态选择k的数量",[765],{"type":24,"value":766},"Nucleus Sampling: Dynamically choose the number of K (sort of)｜核采样Top-P：动态选择K的数量",{"type":18,"tag":26,"props":768,"children":769},{},[770],{"type":24,"value":771},"Nucleus Sampling也就是Top-P采样，其实是在Top-K采样的基础上引入累积分布函数。从一个概率池子（可以联系二八定律：20%的词贡献了80%的概率）中采样，设置累积概率阈值，比如p=0.92；至于该阈值内包含多少个词，会随每个时间步的概率分布而变化，这样就形成了自适应选词的效果。",{"type":18,"tag":26,"props":773,"children":774},{},[775],{"type":24,"value":776},"当候选词的概率分布高度集中时，Top-p和Top-k采样行为相似。当候选词的概率分布较为均匀时，Top-p采样会自动增加候选词的数量，以确保累积概率达到设定的阈值p。这使得生成的文本更具多样性，而Top-k采样可能会因为固定的选择数量而错过一些有潜力的候选词。",{"type":18,"tag":264,"props":778,"children":780},{"code":779},"import torch\n\n# 样例：原始概率分布\nprobs = torch.tensor([[0.2559, 0.5154, 0.0571, 0.1716]])\n#- 四个数字分别代表四个不同标记的概率\n#- 总和为1（0.2559 + 0.5154 + 0.0571 + 0.1716 = 1）\n\n# 第一步：进行排序\nprobs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)\n# 结果：\n# probs_sort: tensor([[0.5154, 0.2559, 0.1716, 0.0571]])\n# probs_idx: tensor([[1, 0, 3, 2]])\n#- `probs_sort`: 概率按从大到小排序\n#- `probs_idx`: 对应的原始位置索引\n#  - 1: 0.5154 原来在位置1\n#  - 0: 0.2559 原来在位置0\n#  - 3: 0.1716 
原来在位置3\n#  - 2: 0.0571 原来在位置2\n\n# 第二步：计算概率的累积和\nprobs_sum = torch.cumsum(probs_sort, dim=-1)\n# 结果：\n# probs_sum: tensor([[0.5154, 0.7713, 0.9429, 1.0000]])\n#- 0.5154 = 0.5154\n#- 0.7713 = 0.5154 + 0.2559\n#- 0.9429 = 0.5154 + 0.2559 + 0.1716\n#- 1.0000 = 0.5154 + 0.2559 + 0.1716 + 0.0571\n\n# 第三步：设定阈值p=0.9，将「不含自身的累积概率」已超过p的位置置为0\np = 0.9\nmask = probs_sum - probs_sort > p\nprobs_sort[mask] = 0.0\n# 结果：\n# probs_sort: tensor([[0.5154, 0.2559, 0.1716, 0.0000]])\n#- 保留「不含自身的累积概率」不超过0.9的值\n#- 0.0571 被置为0，因为它之前的累积概率0.9429已超过阈值0.9\n\n# 第四步：复原原序列\nnew_probs = probs_sort.scatter(1, probs_idx, probs_sort)\n# 结果：\n# new_probs: tensor([[0.2559, 0.5154, 0.0000, 0.1716]])\n#- 使用`scatter`操作将排序后的概率值放回原始位置\n#- 0.2559 回到位置0\n#- 0.5154 回到位置1\n#- 0.0000 回到位置2\n#- 0.1716 回到位置3\n\n# 注：在真实实现中一般会把舍弃的概率置为-inf，即\nzero_indices = (new_probs == 0)\nnew_probs[zero_indices] = float('-inf')\n# 结果：\n# new_probs: tensor([[0.2559, 0.5154, -inf, 0.1716]])\n#- 将概率为0的位置替换为负无穷\n#- 这样在后续（softmax归一化后）的采样中会完全排除这些位置\n#- 只会从非-inf的位置中进行采样\n\n# 完整代码\ndef sample_top_p(probs, p):\n    probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True)\n    probs_sum = torch.cumsum(probs_sort, dim=-1)\n    mask = probs_sum - probs_sort > p\n    probs_sort[mask] = 0.0\n    new_probs = probs_sort.scatter(1, probs_idx, probs_sort)\n    zero_indices = (new_probs == 0)\n    new_probs[zero_indices] = float('-inf')\n    return new_probs\n",[781],{"type":18,"tag":269,"props":782,"children":783},{"__ignoreMap":7},[784],{"type":24,"value":779},{"type":18,"tag":26,"props":786,"children":787},{},[788],{"type":24,"value":789},"这个过程实现了 nucleus sampling 
(top-p)，通过累积概率和的方式来过滤掉低概率的选项，在保持文本生成多样性的同时确保质量。Top-p（nucleus）采样通过累积概率实现K值的自适应变化，是一种非常精巧的方法，具有以下优势：",{"type":18,"tag":26,"props":791,"children":792},{},[793],{"type":24,"value":794},"1、自适应灵活性：通过选择累积概率超过阈值p的最小单词集合，Top-p采样能够根据概率分布动态调整候选词的数量（K值）。这种自适应性使得模型在概率分布较为集中时考虑较少的高概率词，而在分布较为均匀时考虑更多的词。",{"type":18,"tag":26,"props":796,"children":797},{},[798],{"type":24,"value":799},"2、自然流畅性和连贯性：Top-p采样的自适应特性确保了生成文本的连贯性和自然流畅性。它避免了固定Top-k采样的僵化性，后者可能会不必要地包含低概率词。",{"type":18,"tag":26,"props":801,"children":802},{},[803],{"type":24,"value":804},"3、高效的资源利用：虽然对概率进行排序可能计算量较大，但现代GPU能够高效处理这一任务。此外，近似排序或堆维护等优化方法可以进一步提升性能，而不会显著影响准确性。",{"type":18,"tag":26,"props":806,"children":807},{},[808],{"type":24,"value":809},"4、多样性与创造性的控制：通过调整p值，可以平衡生成文本的多样性和连贯性。较高的p值包含更多词，增加多样性；而较低的p值则限制选择范围，专注于更高概率的词。",{"type":18,"tag":26,"props":811,"children":812},{},[813],{"type":24,"value":814},"在具体实现中，Top-p采样的过程包括生成概率分布、对概率进行排序、计算累积和、确定超过阈值的点，并从选定的子集中进行采样。这种方法由 Holtzman 等人在论文《The Curious Case of Neural Text Degeneration》中提出，因其在处理不同上下文场景时的自适应性和灵活性而受到青睐。",{"type":18,"tag":26,"props":816,"children":817},{},[818],{"type":24,"value":819},"Top-P Sampling（也称为 **Nucleus Sampling**）是一种常用的生成策略，通过限制模型在生成每个词时仅从累积概率刚超过 `p` 的最小候选词集合中选择，从而在生成质量和多样性之间取得平衡。",{"type":18,"tag":26,"props":821,"children":822},{},[823],{"type":24,"value":824},"以下是 Top-P Sampling 策略下 `model.generate()` 的常用参数设置建议。",{"type":18,"tag":264,"props":826,"children":828},{"code":827},"### 1. **核心参数**\n#### (1) `top_p`\n- **作用**：限制候选词的范围，仅从累积概率刚超过 `p` 的最小词集合中采样。\n- **建议**：\n  - 通常设置为 **0.9-0.95**。\n  - 较小的值（如 0.8）会限制候选词范围，生成结果更确定。\n  - 较大的值（如 0.98）会增加候选词范围，生成结果更多样。\n\n#### (2) `max_length`\n- **作用**：生成序列的最大长度。\n- **建议**：根据任务需求设置，避免过长或过短。\n  - 例如，文本摘要任务可以设置为 **50-100**，对话生成任务可以设置为 **100-150**。\n\n#### (3) `min_length`\n- **作用**：生成序列的最小长度。\n- **建议**：避免生成过短的结果，例如在摘要任务中可以设置为 **10-20**。\n\n#### (4) `num_return_sequences`\n- **作用**：返回的最终序列数量。\n- **建议**：根据需求设置，通常设置为 **1-5**。\n\n---\n\n### 2. 
**多样性控制参数**\n#### (1) `temperature`\n- **作用**：控制生成结果的随机性。\n  - `temperature \u003C 1`：更确定性的输出。\n  - `temperature > 1`：更多样化的输出。\n  - `temperature = 1`：无偏好的原始概率分布。\n- **建议**：通常设置为 **0.7-1.0**，与 `top_p` 结合使用效果更好。\n\n#### (2) `no_repeat_ngram_size`\n- **作用**：避免生成重复的 n-gram。\n- **建议**：通常设置为 **2 或 3**，避免生成重复内容。\n\n---\n\n### 3. **其他参数**\n#### (1) `do_sample`\n- **作用**：是否使用采样策略。\n- **建议**：Top-P Sampling 必须设置为 **`True`**。\n\n#### (2) `early_stopping`\n- **作用**：是否在生成满足条件的序列时提前停止。\n- **建议**：可以设置为 **`True`**，以节省计算资源。\n",[829],{"type":18,"tag":269,"props":830,"children":831},{"__ignoreMap":7},[832],{"type":24,"value":827},{"type":18,"tag":26,"props":834,"children":835},{},[836],{"type":24,"value":480},{"type":18,"tag":26,"props":838,"children":839},{},[840],{"type":24,"value":744},{"type":18,"tag":264,"props":842,"children":844},{"code":843},"\noutput = model.generate(\n    input_ids,                      # 输入序列\n    max_length=50,                  # 最大长度\n    min_length=10,                  # 最小长度\n    do_sample=True,                 # 启用采样\n    top_p=0.95,                     # 限制候选词范围\n    temperature=0.9,                # 温度参数\n    no_repeat_ngram_size=2,         # 避免重复 n-gram\n    num_return_sequences=3,         # 返回的序列数量\n    early_stopping=True,            # 提前停止\n)\n参数设置总结\n| 参数                  | 建议值/范围                  | 说明                                                                 |\n|-----------------------|------------------------------|----------------------------------------------------------------------|\n| `top_p`               | 0.9-0.95                     | 限制候选词范围，仅从累积概率超过 `p` 的词中采样。                    |\n| `max_length`          | 任务相关（如 50-150）        | 生成序列的最大长度，根据任务需求设置。                               |\n| `min_length`          | 任务相关（如 10-20）         | 生成序列的最小长度，避免生成过短的结果。                             |\n| `num_return_sequences`| 1-5                          | 返回的序列数量，根据需求设置。                                       |\n| `temperature`         | 0.7-1.0 
                     | 控制随机性，值越小越确定，值越大越多样。                             |\n| `no_repeat_ngram_size`| 2 或 3                       | 避免生成重复的 n-gram。                                              |\n| `do_sample`           | `True`                       | 启用采样策略。                                                       |\n| `early_stopping`      | `True` 或 `False`            | 是否提前停止生成，以节省计算资源。                                   |\n",[845],{"type":18,"tag":269,"props":846,"children":847},{"__ignoreMap":7},[848],{"type":24,"value":843},{"type":18,"tag":398,"props":850,"children":852},{"id":851},"参考文章",[853],{"type":18,"tag":30,"props":854,"children":855},{},[856],{"type":24,"value":851},{"type":18,"tag":26,"props":858,"children":859},{},[860,862],{"type":24,"value":861},"[1] ",{"type":18,"tag":152,"props":863,"children":866},{"href":864,"rel":865},"https://www.zhihu.com/tardis/zm/art/647813179?source%5C_id=1005",[156],[867],{"type":24,"value":868},"https://www.zhihu.com/tardis/zm/art/647813179?source\\_id=1005",{"title":7,"searchDepth":870,"depth":870,"links":871},4,[872,874,879],{"id":244,"depth":873,"text":247},3,{"id":7,"depth":875,"text":7,"children":876},2,[877,878],{"id":403,"depth":873,"text":412},{"id":763,"depth":873,"text":766},{"id":851,"depth":875,"text":851},"markdown","content:technology-blogs:zh:3606.md","content","technology-blogs/zh/3606.md","technology-blogs/zh/3606","md",1776506132063]