# MindSpore AI for Science Series (30): A Deep Dive into the FuXi Model

2023-07-31

**Preface**

Ever since Pangu-Weather [1] first surpassed traditional numerical methods in medium-range weather forecast accuracy, the community has been exploring the potential of large AI models in this field; the graph-neural-network-based GraphCast [2] delivers highly accurate forecasts up to 10 days ahead on the ERA5 assimilated dataset. The FuXi model [3] introduced in this post, developed by the Artificial Intelligence Innovation and Incubation Institute of Fudan University, adopts a cascaded architecture that provides 15-day global forecasts at a 6-hour temporal resolution and a 0.25° spatial resolution, and was likewise developed on 39 years of the ECMWF ERA5 reanalysis dataset. Evaluated by latitude-weighted root mean square error (RMSE) and anomaly correlation coefficient (ACC), its 15-day forecasts are comparable to the ECMWF ensemble mean (EM), making FuXi the first machine learning model to achieve this.

**Background**

ECMWF's high-resolution forecast (HRES) is considered one of the most accurate global weather forecast models. It has a horizontal resolution of 0.1° and 137 vertical levels, and provides forecasts up to 10 days ahead. Weather forecasting, however, is inherently uncertain, for the following reasons:

1. Finite resolution: forecast accuracy is limited by the resolution of the NWP model; coarser resolutions may fail to fully capture smaller-scale weather phenomena.
2. Approximation of physical processes: NWP models rely on parameterization schemes to represent complex physical processes occurring at scales below the model resolution, and these parameterizations introduce uncertainty.
3. Errors in the initial state: small initial errors amplify over time, leading to significant forecast divergence.
4. The chaotic nature of the atmosphere: weather is a chaotic system, meaning tiny perturbations can produce significant effects over time. This sensitivity to initial conditions adds to forecast uncertainty.
5. Increasing lead time: the longer the forecast lead time, the greater the uncertainty, due to the cumulative effect of the various errors above.

To address forecast uncertainty, weather centers such as ECMWF run ensemble prediction systems (EPS). ECMWF's EPS comprises multiple forecasts with slight variations in initial conditions and physical parameterizations. By running these ensemble members, the range of possible outcomes can be assessed and the forecast uncertainty estimated.

Despite its benefits, ensemble forecasting is computationally expensive because it requires running multiple simulations with perturbed conditions. In recent years there have been growing efforts to replace traditional NWP models with machine learning models for weather forecasting. ML-based forecasting systems offer several advantages over NWP models, including much faster inference and, by training on reanalysis data, potentially higher accuracy than uncalibrated NWP output. To facilitate comparison between different ML models, the WeatherBench benchmark was introduced for evaluating medium-range (3-5 day) forecasts; it regrids the ERA5 reanalysis data [4] from 0.25° to three coarser resolutions (5.625°, 2.8125°, and 1.40625°).

FuXi is an autoregressive model: it takes the weather parameters of the two preceding time steps as input and predicts those of the next time step. With a 6-hour time step, it generates forecasts at different lead times by iterating on its own output. Because purely data-driven ML models lack physical constraints, long-range forecasts are prone to accumulated errors and unrealistic predictions. To mitigate this, an autoregressive multi-step loss, similar to the cost function of the 4D-Var method, is introduced, which effectively reduces long-range forecast error. Increasing the number of autoregressive steps, however, degrades short-range accuracy and demands more memory and compute.

When forecasting iteratively, error accumulation with increasing lead time is unavoidable, and no single model performs best at all lead times. To obtain the best performance in both short- and long-range forecasts, a cascaded architecture is proposed: a pre-trained FuXi model is fine-tuned separately for each 5-day forecast window, yielding FuXi-Short (0-5 days), FuXi-Medium (5-10 days), and FuXi-Long (10-15 days).

**Model Architecture**

The basic FuXi architecture consists of three main components, as shown in the figure: **cube embedding**, a **U-Transformer**, and a fully connected layer. The input combines upper-air and surface variables into a data cube of dimension **2×70×721×1440**, with two time steps forming one step.

The high-dimensional input is reduced by a joint space-time **cube embedding** to **C×180×360**. The main purpose of the **cube embedding** is to reduce the spatial and temporal dimensions of the input and remove redundant information. The **U-Transformer** then processes the embedded data, and a simple fully connected layer produces the prediction, whose output is first reshaped to 70×721×1440.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/08/02/a5bcbce2e43846be9a84c6cec7c2289c.png)

Figure 1. Overall architecture of the FuXi model

**Cube Embedding**

To reduce the spatial and temporal dimensions of the input and speed up training, a **cube embedding** is applied. A similar approach, called **patch embedding**, is used in the Pangu-Weather model: **patch embedding** splits an image into N×N patches, each of which is converted into a feature vector.

Concretely, the space-time cube embedding uses a three-dimensional (3D) convolution layer with kernel and stride of 2×4×4 (equivalent to T/2×H/4×W/4) and C output channels. The cube embedding is followed by layer normalization (LayerNorm) to improve training stability. The resulting data cube has dimension **C×180×360**.

**U-Transformer**

The U-Transformer also includes the downsampling and upsampling blocks of a U-Net. The downsampling block, labeled Down Block in the figure, reduces the data to C×90×180, minimizing the computational and memory cost of self-attention. The Down Block consists of a 3×3 **2D convolution layer** with stride 2 and a residual block containing two 3×3 convolution layers, each followed by a group normalization (GN) layer and a **Sigmoid-weighted linear unit** (SiLU). The **SiLU activation** computes σ(x)×x, multiplying the sigmoid function by its input.

The upsampling block, labeled Up Block in the figure, uses the same residual block as the Down Block and additionally includes a **2D transposed convolution** with kernel 2 and stride 2. The Up Block scales the data back to **C×180×360**. In addition, a skip connection concatenates the Down Block output with the Transformer Block output before it is fed into the Up Block.

The middle part is built from 48 repeated **Swin Transformer V2** blocks. By using residual post-normalization in place of pre-normalization and scaled cosine attention in place of the original dot-product self-attention, **Swin Transformer V2** addresses several issues, such as training instability, that arise when training and applying large-scale Swin Transformer models [5].

**Model Training**

Like GraphCast, FuXi follows a pre-training-plus-fine-tuning recipe.

1. Single-step pre-training

A latitude-weighted L1 loss, supervised by the next time step, is optimized:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/08/02/253cc4ccebf142ee8a2e81156447ad74.png)

where *C*, *H*, and *W* denote the number of channels and the numbers of grid points in the latitude and longitude directions, respectively; *c*, *i*, and *j* index the variable, latitude, and longitude coordinates; and *aᵢ* is the weight at latitude *i*, which decreases with increasing latitude. The absolute errors over every variable and location (latitude and longitude coordinates) are then summed.

2. Fine-tuning the cascaded models

After pre-training, the base FuXi model is first fine-tuned for the best 6-hourly forecast performance from 0 to 5 days (time steps 0-20). This fine-tuning uses an autoregressive strategy over 20 time steps, similar to GraphCast; the fine-tuned model is called FuXi-Short.

FuXi-Short's weights are used to initialize the FuXi-Medium model, which is then fine-tuned for the best forecast performance from 5 to 10 days (time steps 21-40). The output of FuXi-Short at time step 20 (day 5) serves as FuXi-Medium's input. Running FuXi-Short inference online in real time during fine-tuning would consume significant memory and slow fine-tuning down, so FuXi-Short's outputs are computed in advance on six years of data (2012-2017) and cached to disk.

Finally, FuXi-Long applies the same process, and the three models are cascaded to form the complete 15-day forecast. The cascaded design helps reduce error accumulation and improves long-range forecast performance, as shown in the figure.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/08/02/cdcd2aaa46d84f7083de13846de8cc17.png)

Figure 2. Cascaded model architecture

**Evaluation and Results**

The *RMSE* and *ACC* are computed as shown below, where *D* denotes the test set and *τ* the number of forecast lead time steps. To compare FuXi's forecast performance against a baseline model, (RMSE_A − RMSE_B)/RMSE_B and (ACC_A − ACC_B)/(1 − ACC_B) are visualized.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/08/02/2dd488f3f72144e49f600c22a832aa8a.png)

The test set uses data from 2018, with two initialization times per day (00:00 UTC and 12:00 UTC), to produce 15-day forecasts at 6-hour intervals. To evaluate the ECMWF HRES and EM models, the study adopts the verification method applied by ECMWF itself, in which each model's own analysis, HRES-fc0 and ENS-fc0, serves as the "ground truth" for HRES and EM, respectively.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/08/02/9e128d157d75488d934319f700a6f987.png)

Figure 3. Comparison of globally averaged latitude-weighted ACC and RMSE for HRES, GraphCast, and FuXi

As the figure shows, FuXi and GraphCast clearly outperform ECMWF HRES. Within 7 days, FuXi and GraphCast perform comparably; beyond 7 days, FuXi is superior, with the lowest *RMSE* and highest *ACC* across all variables and lead times, and its advantage grows as the lead time increases. Notably, FuXi also outperforms ECMWF HRES and GraphCast on variables not shown in the figure.

On the other hand, ECMWF EM is used as the baseline for the normalized ACC and RMSE differences of the 15-day forecasts. For forecasts of 0-9 days, FuXi outperforms ECMWF EM, with positive normalized *ACC* differences and negative normalized *RMSE* differences. Beyond 9 days, however, FuXi is slightly inferior to ECMWF EM, as shown in the figure below.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/08/02/594bcd251002449aa2d15ffe134964ef.png)

Figure 4. Comparison of globally averaged latitude-weighted ACC and RMSE, and normalized ACC and RMSE differences, for ECMWF EM and FuXi

Going further, visualizing the spatial distributions of **FuXi's mean *RMSE***, **the *RMSE* difference between ECMWF HRES and FuXi**, and **the *RMSE* difference between ECMWF EM and FuXi** shows that the three forecasts have similar spatial error patterns. The highest *RMSE* values occur at high latitudes, with relatively smaller values at middle and low latitudes, and *RMSE* is higher over land than over the ocean. The *RMSE* difference between ECMWF HRES and FuXi shows that FuXi outperforms ECMWF HRES at most grid points, as indicated by the dominance of red; by contrast, ECMWF EM performs comparably to FuXi over most regions, as indicated by the dominance of white.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/08/02/ec8da8b32ffa48c38d58ac17cb0c82fa.png)

Figure 5. Spatial maps of FuXi's mean RMSE, the RMSE difference between ECMWF HRES and FuXi, and the RMSE difference between EM and FuXi

Finally, although the cascaded design makes FuXi comparable to ECMWF EM over 15-day forecasts, one limitation of ML-based weather forecasting methods is that they are not yet fully end-to-end: they still rely on analysis data produced by traditional NWP models for their initial conditions. Developing data-driven data assimilation methods that generate initial conditions for ML-based forecasting systems directly from observations is therefore a future direction, toward a truly end-to-end, systematically unbiased, and computationally efficient ML-based weather forecasting system.

[1] Bi, K., Xie, L., Zhang, H., Chen, X., Gu, X., Tian, Q.: Pangu-Weather: A 3D High-Resolution Model for Fast and Accurate Global Weather Forecast (2022)

[2] Lam, R., Sanchez-Gonzalez, A., Willson, M., Wirnsberger, P., Fortunato, M., Pritzel, A., Ravuri, S., Ewalds, T., Alet, F., Eaton-Rosen, Z., Hu, W., Merose, A., Hoyer, S., Holland, G., Stott, J., Vinyals, O., Mohamed, S., Battaglia, P.: GraphCast: Learning skillful medium-range global weather forecasting (2022)

[3] Chen, L., Zhong, X., Zhang, F., et al.: FuXi: A cascade machine learning forecasting system for 15-day global weather forecast (2023)

[4] Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horányi, A., Muñoz-Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D., et al.: The ERA5 global reanalysis. Quarterly Journal of the Royal Meteorological Society 146(730), 1999–2049 (2020)

[5] Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., Guo, B.: Swin Transformer V2: Scaling Up Capacity and Resolution (2022)