[{"data":1,"prerenderedAt":227},["ShallowReactive",2],{"content-query-dJlEuV30nQ":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":221,"_id":222,"_source":223,"_file":224,"_stem":225,"_extension":226},"/technology-blogs/zh/3143","zh",false,"","The Golden Age Keeps Delivering: The ChatGPT-4o Upgrade Makes Its Debut~ (News Update Series)","Hello, Xiao Mai (Xiao Mi), please report the latest AI news~","2024-05-31","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/11/30/d45ea1fd209e41d0afd80ca4c1ad8ccb.png","technology-blogs","Fundamentals",{"type":15,"children":16,"toc":218},"root",[17,25,31,36,41,46,51,56,61,66,71,76,81,86,94,99,104,109,114,119,124,129,136,141,150,157,162,169,174,181,186,191,200,209],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"黄金时代持续疯狂输出chatgpt-4o升级版本闪亮登场资讯更新系列",[23],{"type":24,"value":8},"text",{"type":18,"tag":26,"props":27,"children":28},"p",{},[29],{"type":24,"value":30},"A: Hello, Xiao Mai (Xiao Mi), please report the latest AI news~",{"type":18,"tag":26,"props":32,"children":33},{},[34],{"type":24,"value":35},"B: Following ChatGPT 4.0, OpenAI has released the upgraded ChatGPT-4o, and it is free to use...",{"type":18,"tag":26,"props":37,"children":38},{},[39],{"type":24,"value":40},"A: Wait, a newly released version? Xiao Mai (Xiao Mi), please introduce ChatGPT-4o in detail.",{"type":18,"tag":26,"props":42,"children":43},{},[44],{"type":24,"value":45},"B: The \"o\" stands for omni, as in an all-around model. ChatGPT-4o aims to provide a more natural and efficient human-computer interaction experience. Compared with previous versions, ChatGPT-4o supports multimodal interaction: it can handle not only text but also audio and image inputs and generate corresponding outputs; it also supports emotion understanding, judging a user's mood by analyzing their expression and tone of voice...",{"type":18,"tag":26,"props":47,"children":48},{},[49],{"type":24,"value":50},"To help everyone better understand large models, the MindSpore forum host will devote this entire issue to ChatGPT, a front-runner among large models, and to its latest version, ChatGPT-4o~ First, here is a brief overview of the ChatGPT versions in order of release.",{"type":18,"tag":26,"props":52,"children":53},{},[54],{"type":24,"value":55},"ChatGPT is an AI language model developed by OpenAI. It has gone through several versions, each improving and extending the previous one. The versions known so far are:",{"type":18,"tag":26,"props":57,"children":58},{},[59],{"type":24,"value":60},"1. 
GPT-1: The original version, built on the Transformer architecture and pre-trained on a large amount of text data. It uses an autoregressive model to predict the probability of the next word and generates text with beam search decoding.",{"type":18,"tag":26,"props":62,"children":63},{},[64],{"type":24,"value":65},"2. GPT-2: An improvement over GPT-1 with many more parameters (from roughly 117 million up to 1.5 billion) for better performance. GPT-2 can generate longer text, handle dialogue better, and generalizes more broadly.",{"type":18,"tag":26,"props":67,"children":68},{},[69],{"type":24,"value":70},"3. GPT-3: With 175 billion parameters, it can perform tasks such as language translation, question answering, and automatic text summarization very accurately. GPT-3 is made available to users through an API, so they can access it and build on top of it.",{"type":18,"tag":26,"props":72,"children":73},{},[74],{"type":24,"value":75},"4. GPT-3.5: An improved version of GPT-3, with optimizations to its architecture and training data that make it perform better on chat tasks; this version is currently free to use on the official ChatGPT website.",{"type":18,"tag":26,"props":77,"children":78},{},[79],{"type":24,"value":80},"5. GPT-4: A deeper and larger model than GPT-3.5, with stronger language understanding and generation. GPT-4 also adopts a new pre-training approach that lets it learn contextual information better and make better use of large-scale datasets for training.",{"type":18,"tag":26,"props":82,"children":83},{},[84],{"type":24,"value":85},"6. GPT-4o: One small step for OpenAI, one giant leap for the \"AI assistant\". Compared with existing models, GPT-4o's biggest advance is that it can reason across audio, vision, and text in real time; in other words, it brings truly multimodal interaction to ChatGPT.",{"type":18,"tag":26,"props":87,"children":88},{},[89],{"type":18,"tag":90,"props":91,"children":93},"img",{"alt":7,"src":92},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/8689e30cf65842bd81b6df0e90e77b00.png",[],{"type":18,"tag":26,"props":95,"children":96},{},[97],{"type":24,"value":98},"It is easy to see that GPT-4o is the GPT version everyone is raving about right now! So next, the MindSpore forum host will walk you through GPT-4o's latest features!",{"type":18,"tag":26,"props":100,"children":101},{},[102],{"type":24,"value":103},"1) OpenAI has released a new model named GPT-4o, where \"o\" stands for \"omni\", meaning all-encompassing. GPT-4o is a multimodal model that can handle text, speech, images, and video, and can reason over audio and visual information in real time.",{"type":18,"tag":26,"props":105,"children":106},{},[107],{"type":24,"value":108},"2) Better performance: on text and code, GPT-4o is on par with GPT-4 
Turbo, but its API responds faster and costs 50% less. Specifically, it is 2x faster, half the price, and has 5x higher rate limits.",{"type":18,"tag":26,"props":110,"children":111},{},[112],{"type":24,"value":113},"3) Fast response: the new GPT-4o-powered ChatGPT (which some have dubbed \"Moss\") has almost no latency in voice conversations. It responds in real time and even understands emotions and non-verbal sounds such as gasping and breathing. It can also mimic different voices, including robotic and singing voices.",{"type":18,"tag":26,"props":115,"children":116},{},[117],{"type":24,"value":118},"4) Real-time interaction: it handles voice input and output directly, with no more need for speech-to-text conversion. 5) Vision: it also has visual capabilities, observing and understanding what is happening in real time through a camera. It can perceive a person's emotions and converse with them live, read a math problem we write down over video and offer a way to solve it, and even assist with programming, which is a real boon for developers.",{"type":18,"tag":26,"props":120,"children":121},{},[122],{"type":24,"value":123},"In more detail:",{"type":18,"tag":26,"props":125,"children":126},{},[127],{"type":24,"value":128},"At OpenAI's Spring Update event on May 14, Mira Murati, together with team members Mark Chen and Barret Zoph, demonstrated ChatGPT powered by GPT-4o, highlighting its performance on a variety of tasks, especially its voice features. In short, with GPT-4o on board, ChatGPT came across as fast, versatile, and emotionally expressive. Users can interrupt ChatGPT at will during a conversation, and it reacts instantly, with no awkward waiting. Moreover, when Mark showed signs of nervousness and rapid breathing, ChatGPT noticed and suggested he calm down, guiding him through deep breaths by picking up on his breathing rhythm.",{"type":18,"tag":26,"props":130,"children":131},{},[132],{"type":18,"tag":90,"props":133,"children":135},{"alt":7,"src":134},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/a33ce8c757f747f7815c1544367865aa.png",[],{"type":18,"tag":26,"props":137,"children":138},{},[139],{"type":24,"value":140},"ChatGPT's real-time translation is just as impressive: at the event, the OpenAI team demonstrated seamless live translation between English and Italian, with no delay throughout the process. Even more interesting, ChatGPT not only used emotionally rich interjections during the conversation, but also bantered with the OpenAI team and expressed gratitude. When it \"saw\" the team write down \"I love ChatGPT\", it responded in a playful, endearing tone and praised them as \"considerate\".",{"type":18,"tag":26,"props":142,"children":143},{},[144,148],{"type":18,"tag":90,"props":145,"children":147},{"alt":7,"src":146},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/766d13c686bf42b88c4f8f611a93d867.png",[],{"type":24,"value":149}," ChatGPT can also interact with users over video. In one demo, Barret asked ChatGPT to try to gauge his emotional state. When he broke into a smile, ChatGPT immediately responded: \"You look very happy, with a big smile on your face and a hint of 
excitement.\"",{"type":18,"tag":26,"props":151,"children":152},{},[153],{"type":18,"tag":90,"props":154,"children":156},{"alt":7,"src":155},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/f95eb9574e8c45af91b55de943afa9b8.png",[],{"type":18,"tag":26,"props":158,"children":159},{},[160],{"type":24,"value":161},"According to OpenAI's website, GPT-4o not only matches GPT-4 Turbo in text and code performance, but its API calls are also faster, and the price has been cut by 50%.",{"type":18,"tag":26,"props":163,"children":164},{},[165],{"type":18,"tag":90,"props":166,"children":168},{"alt":7,"src":167},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/17dd10e6b1404dc3abcfddfb3b8f1c4c.png",[],{"type":18,"tag":26,"props":170,"children":171},{},[172],{"type":24,"value":173},"Text benchmark performance",{"type":18,"tag":26,"props":175,"children":176},{},[177],{"type":18,"tag":90,"props":178,"children":180},{"alt":7,"src":179},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/0c558dc3f6f24d898097ce9327538833.png",[],{"type":18,"tag":26,"props":182,"children":183},{},[184],{"type":24,"value":185},"Multilingual exam performance compared with GPT-4",{"type":18,"tag":26,"props":187,"children":188},{},[189],{"type":24,"value":190},"More importantly, GPT-4o's visual understanding achieves a decisive win on the relevant benchmarks.",{"type":18,"tag":26,"props":192,"children":193},{},[194,198],{"type":18,"tag":90,"props":195,"children":197},{"alt":7,"src":196},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/c9a96db51ff74471978ecd61a8222186.png",[],{"type":24,"value":199}," On the audio side, GPT-4o's speech recognition (ASR) also outperforms OpenAI's dedicated speech recognition model Whisper (lower error rate is better).",{"type":18,"tag":26,"props":201,"children":202},{},[203,207],{"type":18,"tag":90,"props":204,"children":206},{"alt":7,"src":205},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/39617ae82ebc45798b46da634ca0b9be.png",[],{"type":24,"value":208}," 
All in all, it accepts text, speech, images, and video alike, and it can process audio and visual information in real time with extremely low latency, achieving reaction speeds close to a real person's, or even faster.",{"type":18,"tag":26,"props":210,"children":211},{},[212,216],{"type":18,"tag":90,"props":213,"children":215},{"alt":7,"src":214},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/05/31/0aaee0737e754dc189c895b82a8709a2.png",[],{"type":24,"value":217}," Well, that wraps up this issue's detailed introduction to ChatGPT-4o. The forum host can sum it up in only one word: extraordinary! See you next issue!",{"title":7,"searchDepth":219,"depth":219,"links":220},4,[],"markdown","content:technology-blogs:zh:3143.md","content","technology-blogs/zh/3143.md","technology-blogs/zh/3143","md",1776506126512]