[{"data":1,"prerenderedAt":2145},["ShallowReactive",2],{"content-query-5aAAjpkHGA":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":2139,"_id":2140,"_source":2141,"_file":2142,"_stem":2143,"_extension":2144},"/version-updates/en/2617","en",false,"","MindSpore 2.0: Framework Upgraded for Research Innovations and Industry Applications","MindSpore 2.0 is officially released with a framework upgrade to facilitate research innovations and industry applications thanks to the efforts of community developers.","2023-06-20","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/15/3dd2a41cbff14bcfaf32a6916fe73d98.png","version-updates",{"type":14,"children":15,"toc":2137},"root",[16,24,30,35,44,49,57,65,70,77,92,147,154,167,175,180,193,198,203,208,213,218,223,228,232,237,242,247,252,257,262,267,272,277,282,287,292,297,302,307,312,317,322,327,332,336,340,345,350,355,360,365,370,375,379,384,389,394,399,404,409,420,428,433,438,443,447,452,456,461,465,470,475,480,485,490,495,500,505,509,513,526,534,539,544,549,554,559,564,569,574,579,584,589,594,599,604,609,614,625,633,638,643,648,653,661,666,673,684,692,697,702,711,718,726,736,743,754,762,770,775,783,788,799,807,812,823,831,836,847,855,860,871,879,884,895,903,908,913,926,934,942,947,952,957,962,967,972,977,982,987,992,996,1001,1006,1011,1016,1021,1026,1031,1036,1041,1053,1058,1063,1070,1081,1089,1102,1107,1115,1123,1131,1139,1144,1149,1154,1159,1164,1169,1173,1178,1183,1187,1192,1197,1201,1206,1211,1216,1220,1225,1230,1234,1239,1244,1248,1253,1258,1263,1267,1272,1277,1281,1286,1291,1295,1300,1305,1309,1314,1319,1323,1328,1333,1338,1343,1348,1352,1357,1362,1366,1371,1376,1381,1386,1391,1396,1401,1406,1410,1415,1420,1425,1429,1440,1448,1486,1493,1498,1506,1513,1522,1529,1538,1545,1554,1561,1566,1577,1585,1593,1604,1612,1623,1631,1642,1650,1658,1663,1675,1683,1691,1710,1718,1730,1742,1747,1752,1757,1762,1767,1772,1777,1782,1787,
1795,1800,1808,1813,1818,1823,1830,1835,1842,1853,1861,1869,1874,1882,1887,1894,1905,1913,1921,1926,1933,1938,1946,1951,1959,1964,1972,1977,1985,1990,2001,2009,2014,2021,2026,2031,2043,2051,2056,2061,2066,2073,2078,2085,2090,2097,2102,2107,2114,2121,2126],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"mindspore-20-framework-upgraded-for-research-innovations-and-industry-applications",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"MindSpore 1.0 launched the industry's first all-scenario AI framework. MindSpore 1.5 introduced native support for foundation models. Now, MindSpore 2.0, thanks to the efforts of community developers, is officially released with a framework upgrade to facilitate research innovations and industry applications.",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":23,"value":34},"MindSpore 2.0 supports multi-dimensional hybrid automatic parallelism and provides a foundation model suite to support one-stop training, building the best training and inference platform for foundation models. In terms of usability improvement, this version provides many out-of-the-box model suites and combines dynamic and static graphs flexibly and efficiently. In addition, this version delivers an AI + scientific computing suite with cutting-edge features to facilitate technology innovations. Now, let's take a look at the key features of MindSpore 2.0.",{"type":17,"tag":25,"props":36,"children":37},{},[38],{"type":17,"tag":39,"props":40,"children":41},"strong",{},[42],{"type":23,"value":43},"Training and Inference of Foundation Models",{"type":17,"tag":25,"props":45,"children":46},{},[47],{"type":23,"value":48},"For foundation models, MindSpore 2.0 offers one-stop training and inference capabilities. The scale of supported models is expanded, and training and inference performance are all enhanced. 
A suite is also provided to reduce training costs.",{"type":17,"tag":25,"props":50,"children":51},{},[52],{"type":17,"tag":53,"props":54,"children":56},"img",{"alt":7,"src":55},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/636f0d6548944ee292e324f3a61ac988.png",[],{"type":17,"tag":25,"props":58,"children":59},{},[60],{"type":17,"tag":39,"props":61,"children":62},{},[63],{"type":23,"value":64},"1. MindSpore supports foundation models natively with industry-leading multi-dimensional hybrid parallelism",{"type":17,"tag":25,"props":66,"children":67},{},[68],{"type":23,"value":69},"MindSpore supports various parallel modes such as data parallelism, model parallelism, mixture of experts (MoE), pipeline parallelism, optimizer parallelism, and heterogeneous training, and can natively perform foundation model training. It is one of the best frameworks for foundation model training in the industry. Based on MindSpore, manufacturers have trained 22+ foundation models with parameter counts ranging from billions to trillions. 
In MindSpore 2.0, support for LLaMA, BLOOM, GLM, GPT, and other billion-parameter models is added.",{"type":17,"tag":25,"props":71,"children":72},{},[73],{"type":17,"tag":53,"props":74,"children":76},{"alt":7,"src":75},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/ffa5330132fa4fbfaa07098e139f8696.png",[],{"type":17,"tag":25,"props":78,"children":79},{},[80,85,87],{"type":17,"tag":39,"props":81,"children":82},{},[83],{"type":23,"value":84},"2.",{"type":23,"value":86}," ",{"type":17,"tag":39,"props":88,"children":89},{},[90],{"type":23,"value":91},"Unified inference in all scenarios greatly improves deployment efficiency and inference performance.",{"type":17,"tag":93,"props":94,"children":95},"ol",{},[96,107,117,127,137],{"type":17,"tag":97,"props":98,"children":99},"li",{},[100,105],{"type":17,"tag":39,"props":101,"children":102},{},[103],{"type":23,"value":104},"Unified backend",{"type":23,"value":106},": Ascend AI Processors, GPUs, and CPUs are all supported by the same release package, reducing deployment costs.",{"type":17,"tag":97,"props":108,"children":109},{},[110,115],{"type":17,"tag":39,"props":111,"children":112},{},[113],{"type":23,"value":114},"Native support for MindIR",{"type":23,"value":116},": A model trained in MindSpore can be directly used for inference by using the MindSpore IR (MindIR), shortening the rollout period.",{"type":17,"tag":97,"props":118,"children":119},{},[120,125],{"type":17,"tag":39,"props":121,"children":122},{},[123],{"type":23,"value":124},"Unified model format",{"type":23,"value":126},": Models from third-party ecosystems (TensorFlow, ONNX, and Caffe) can be converted to MindIR for unified inference, reducing online maintenance workload.",{"type":17,"tag":97,"props":128,"children":129},{},[130,135],{"type":17,"tag":39,"props":131,"children":132},{},[133],{"type":23,"value":134},"Diversified optimization strategies",{"type":23,"value":136},": Inference optimization strategies such as operator fusion, 
constant folding, format conversion, and redundant operator elimination are supported, and operators optimized with advanced instruction sets such as AVX512 are supported to improve model inference performance.",{"type":17,"tag":97,"props":138,"children":139},{},[140,145],{"type":17,"tag":39,"props":141,"children":142},{},[143],{"type":23,"value":144},"Ultimate optimization for foundation models",{"type":23,"value":146},": In-depth graph kernel fusion is performed for mainstream Transformer networks to reduce model memory usage and greatly improve inference performance.",{"type":17,"tag":25,"props":148,"children":149},{},[150],{"type":17,"tag":53,"props":151,"children":153},{"alt":7,"src":152},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/e2c6031735184a53a14866f6b17c98bd.png",[],{"type":17,"tag":25,"props":155,"children":156},{},[157,159],{"type":23,"value":158},"MindSpore Lite document: ",{"type":17,"tag":160,"props":161,"children":165},"a",{"href":162,"rel":163},"https://mindspore.cn/lite/docs/en/r2.0/index.html",[164],"nofollow",[166],{"type":23,"value":162},{"type":17,"tag":25,"props":168,"children":169},{},[170],{"type":17,"tag":39,"props":171,"children":172},{},[173],{"type":23,"value":174},"3. MindSpore Transformers foundation model training suite provides built-in SOTA models for a modularized development process.",{"type":17,"tag":25,"props":176,"children":177},{},[178],{"type":23,"value":179},"MindSpore Transformers is a full-process development suite for the entire lifecycle of a foundation model, covering training, fine-tuning, evaluation, and inference processes. 
It provides fast development capabilities and 10+ pre-trained SOTA foundation models for popular fields such as CV, NLP, and AIGC, and can deploy models to AICC in one click.",{"type":17,"tag":93,"props":181,"children":182},{},[183,188],{"type":17,"tag":97,"props":184,"children":185},{},[186],{"type":23,"value":187},"With the Trainer and pipeline interfaces, you can train a model using two lines of code and start inference using three lines of code. Models can be automatically downloaded.",{"type":17,"tag":97,"props":189,"children":190},{},[191],{"type":23,"value":192},"Popular models such as GPT, GLM, BLOOM, and LLaMA are provided with support for various downstream fine-tuning tasks. The models deliver accuracy similar to SOTA.",{"type":17,"tag":25,"props":194,"children":195},{},[196],{"type":23,"value":197},"Model",{"type":17,"tag":25,"props":199,"children":200},{},[201],{"type":23,"value":202},"Supported Tasks",{"type":17,"tag":25,"props":204,"children":205},{},[206],{"type":23,"value":207},"Supported Models",{"type":17,"tag":25,"props":209,"children":210},{},[211],{"type":23,"value":212},"LLAMA",{"type":17,"tag":25,"props":214,"children":215},{},[216],{"type":23,"value":217},"text generation",{"type":17,"tag":25,"props":219,"children":220},{},[221],{"type":23,"value":222},"LLaMA7B/LLaMA13B/LLaMA65B",{"type":17,"tag":25,"props":224,"children":225},{},[226],{"type":23,"value":227},"BLOOM",{"type":17,"tag":25,"props":229,"children":230},{},[231],{"type":23,"value":217},{"type":17,"tag":25,"props":233,"children":234},{},[235],{"type":23,"value":236},"bloom-7b/bloom-65B/Bloom-175B",{"type":17,"tag":25,"props":238,"children":239},{},[240],{"type":23,"value":241},"GLM",{"type":17,"tag":25,"props":243,"children":244},{},[245],{"type":23,"value":246},"text_generation, question_answering, 
translation",{"type":17,"tag":25,"props":248,"children":249},{},[250],{"type":23,"value":251},"glm_6b",{"type":17,"tag":25,"props":253,"children":254},{},[255],{"type":23,"value":256},"GPT",{"type":17,"tag":25,"props":258,"children":259},{},[260],{"type":23,"value":261},"text_generation",{"type":17,"tag":25,"props":263,"children":264},{},[265],{"type":23,"value":266},"gpt2_small/gpt2_13b/gpt2_52b",{"type":17,"tag":25,"props":268,"children":269},{},[270],{"type":23,"value":271},"BERT",{"type":17,"tag":25,"props":273,"children":274},{},[275],{"type":23,"value":276},"masked_language_modeling",{"type":17,"tag":25,"props":278,"children":279},{},[280],{"type":23,"value":281},"text_classification",{"type":17,"tag":25,"props":283,"children":284},{},[285],{"type":23,"value":286},"token_classification",{"type":17,"tag":25,"props":288,"children":289},{},[290],{"type":23,"value":291},"question_answering",{"type":17,"tag":25,"props":293,"children":294},{},[295],{"type":23,"value":296},"bert_base_uncased",{"type":17,"tag":25,"props":298,"children":299},{},[300],{"type":23,"value":301},"txtcls_bert_base_uncased/txtcls_bert_base_uncased_mnli",{"type":17,"tag":25,"props":303,"children":304},{},[305],{"type":23,"value":306},"tokcls_bert_base_chinese/tokcls_bert_base_chinese_cluener",{"type":17,"tag":25,"props":308,"children":309},{},[310],{"type":23,"value":311},"qa_bert_base_uncased/qa_bert_base_chinese_uncased",{"type":17,"tag":25,"props":313,"children":314},{},[315],{"type":23,"value":316},"T5",{"type":17,"tag":25,"props":318,"children":319},{},[320],{"type":23,"value":321},"translation",{"type":17,"tag":25,"props":323,"children":324},{},[325],{"type":23,"value":326},"t5_small",{"type":17,"tag":25,"props":328,"children":329},{},[330],{"type":23,"value":331},"GPT2",{"type":17,"tag":25,"props":333,"children":334},{},[335],{"type":23,"value":261},{"type":17,"tag":25,"props":337,"children":338},{},[339],{"type":23,"value":266},{"type":17,"tag":25,"props":341,"children":342},{},[343],{
"type":23,"value":344},"MAE",{"type":17,"tag":25,"props":346,"children":347},{},[348],{"type":23,"value":349},"masked_image_modeling",{"type":17,"tag":25,"props":351,"children":352},{},[353],{"type":23,"value":354},"mae_vit_base_p16",{"type":17,"tag":25,"props":356,"children":357},{},[358],{"type":23,"value":359},"VIT",{"type":17,"tag":25,"props":361,"children":362},{},[363],{"type":23,"value":364},"image_classification",{"type":17,"tag":25,"props":366,"children":367},{},[368],{"type":23,"value":369},"vit_base_p16",{"type":17,"tag":25,"props":371,"children":372},{},[373],{"type":23,"value":374},"Swin",{"type":17,"tag":25,"props":376,"children":377},{},[378],{"type":23,"value":364},{"type":17,"tag":25,"props":380,"children":381},{},[382],{"type":23,"value":383},"swin_base_p4w7",{"type":17,"tag":25,"props":385,"children":386},{},[387],{"type":23,"value":388},"CLIP",{"type":17,"tag":25,"props":390,"children":391},{},[392],{"type":23,"value":393},"contrastive_language_image_pretrain,",{"type":17,"tag":25,"props":395,"children":396},{},[397],{"type":23,"value":398},"zero_shot_image_classification",{"type":17,"tag":25,"props":400,"children":401},{},[402],{"type":23,"value":403},"clip_vit_b_32/clip_vit_b_16/clip_vit_l_14",{"type":17,"tag":25,"props":405,"children":406},{},[407],{"type":23,"value":408},"clip_vit_l_14@336",{"type":17,"tag":25,"props":410,"children":411},{},[412,414],{"type":23,"value":413},"MindSpore Transformers: ",{"type":17,"tag":160,"props":415,"children":418},{"href":416,"rel":417},"https://gitee.com/mindspore/mindformers",[164],[419],{"type":23,"value":416},{"type":17,"tag":25,"props":421,"children":422},{},[423],{"type":17,"tag":39,"props":424,"children":425},{},[426],{"type":23,"value":427},"4. 
MindRLHF enables training of billion-parameter models with merely 20 lines of code, helping you quickly build your own conversational AI.",{"type":17,"tag":25,"props":429,"children":430},{},[431],{"type":23,"value":432},"MindRLHF leverages MindSpore's parallel training, inference, and deployment capabilities to help you quickly train billion-parameter models through reinforcement learning from human feedback (RLHF) and deploy the model as your own conversational AI.",{"type":17,"tag":25,"props":434,"children":435},{},[436],{"type":23,"value":437},"MindRLHF covers three phases of RLHF: pre-trained model training, reward model training, and reinforcement learning training. By integrating various model libraries of MindSpore Transformers, MindRLHF provides fine-tuning processes for base models, such as PanGu-Alpha (2.6B and 13B), GPT-2, LLaMA, and BLOOM. In addition, parallel interfaces inherited from MindSpore allow you to deploy foundation models of different scales to the training cluster in one click.",{"type":17,"tag":25,"props":439,"children":440},{},[441],{"type":23,"value":442},"Models supported by MindRLHF are as 
follows:",{"type":17,"tag":25,"props":444,"children":445},{},[446],{"type":23,"value":197},{"type":17,"tag":25,"props":448,"children":449},{},[450],{"type":23,"value":451},"PanGu-Alpha",{"type":17,"tag":25,"props":453,"children":454},{},[455],{"type":23,"value":451},{"type":17,"tag":25,"props":457,"children":458},{},[459],{"type":23,"value":460},"GPT-2",{"type":17,"tag":25,"props":462,"children":463},{},[464],{"type":23,"value":227},{"type":17,"tag":25,"props":466,"children":467},{},[468],{"type":23,"value":469},"Scale",{"type":17,"tag":25,"props":471,"children":472},{},[473],{"type":23,"value":474},"2.6B",{"type":17,"tag":25,"props":476,"children":477},{},[478],{"type":23,"value":479},"13B",{"type":17,"tag":25,"props":481,"children":482},{},[483],{"type":23,"value":484},"124M",{"type":17,"tag":25,"props":486,"children":487},{},[488],{"type":23,"value":489},"7B",{"type":17,"tag":25,"props":491,"children":492},{},[493],{"type":23,"value":494},"Minimum hardware requirements (number of Ascend 910 cards)",{"type":17,"tag":25,"props":496,"children":497},{},[498],{"type":23,"value":499},"1",{"type":17,"tag":25,"props":501,"children":502},{},[503],{"type":23,"value":504},"16",{"type":17,"tag":25,"props":506,"children":507},{},[508],{"type":23,"value":499},{"type":17,"tag":25,"props":510,"children":511},{},[512],{"type":23,"value":504},{"type":17,"tag":25,"props":514,"children":515},{},[516,518,524],{"type":23,"value":517},"More models, such as LLaMA and GLM will be supported in the future. For more information, see ",{"type":17,"tag":160,"props":519,"children":522},{"href":520,"rel":521},"https://github.com/mindspore-lab/mindrlhf",[164],[523],{"type":23,"value":520},{"type":23,"value":525},".",{"type":17,"tag":25,"props":527,"children":528},{},[529],{"type":17,"tag":39,"props":530,"children":531},{},[532],{"type":23,"value":533},"5. 
MindPet fine-tunes foundation models with fewer parameters, saving computing and storage resources to support various tasks.",{"type":17,"tag":25,"props":535,"children":536},{},[537],{"type":23,"value":538},"MindPet (pet stands for parameter-efficient tuning) is a suite that is developed based on the MindSpore AI convergence framework for fine-tuning foundation models with fewer parameters. The suite provides easy-to-use APIs and use cases for you to quickly get started. Currently, six popular fine-tuning algorithms in the industry and two types of common graph operation interfaces are provided. It is worth mentioning that MindPet also allows you to freeze specified structures on the network based on fine-tuning algorithms or module names. In addition, an interface is provided to save only trainable parameters of the fine-tuning algorithms, allowing you to generate a very small CKPT file.",{"type":17,"tag":25,"props":540,"children":541},{},[542],{"type":23,"value":543},"MindPet helps you efficiently tune foundation models with fewer parameters, significantly reducing computing and storage memory usage and training time, and achieving good model performance in a resource-limited environment. 
The following fine-tuning algorithms are provided:",{"type":17,"tag":25,"props":545,"children":546},{},[547],{"type":23,"value":548},"Algorithm",{"type":17,"tag":25,"props":550,"children":551},{},[552],{"type":23,"value":553},"Paper",{"type":17,"tag":25,"props":555,"children":556},{},[557],{"type":23,"value":558},"LoRA",{"type":17,"tag":25,"props":560,"children":561},{},[562],{"type":23,"value":563},"LoRA: Low-Rank Adaptation of Large Language Models",{"type":17,"tag":25,"props":565,"children":566},{},[567],{"type":23,"value":568},"PrefixTuning",{"type":17,"tag":25,"props":570,"children":571},{},[572],{"type":23,"value":573},"Prefix-Tuning: Optimizing Continuous Prompts for Generation",{"type":17,"tag":25,"props":575,"children":576},{},[577],{"type":23,"value":578},"Adapter",{"type":17,"tag":25,"props":580,"children":581},{},[582],{"type":23,"value":583},"Parameter-Efficient Transfer Learning for NLP",{"type":17,"tag":25,"props":585,"children":586},{},[587],{"type":23,"value":588},"LowRankAdapter",{"type":17,"tag":25,"props":590,"children":591},{},[592],{"type":23,"value":593},"Compacter: Efficient low-rank hypercomplex adapter layers",{"type":17,"tag":25,"props":595,"children":596},{},[597],{"type":23,"value":598},"BitFit",{"type":17,"tag":25,"props":600,"children":601},{},[602],{"type":23,"value":603},"BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models",{"type":17,"tag":25,"props":605,"children":606},{},[607],{"type":23,"value":608},"R_Drop",{"type":17,"tag":25,"props":610,"children":611},{},[612],{"type":23,"value":613},"R-Drop: Regularized Dropout for Neural Networks",{"type":17,"tag":25,"props":615,"children":616},{},[617,619],{"type":23,"value":618},"MindPet: 
",{"type":17,"tag":160,"props":620,"children":623},{"href":621,"rel":622},"https://gitee.com/mindspore-lab/mindpet",[164],[624],{"type":23,"value":621},{"type":17,"tag":25,"props":626,"children":627},{},[628],{"type":17,"tag":39,"props":629,"children":630},{},[631],{"type":23,"value":632},"6. MindRec supports TB-level recommendation model training with newly added online training and dynamic feature capabilities, enabling real-time model updates and rollouts.",{"type":17,"tag":25,"props":634,"children":635},{},[636],{"type":23,"value":637},"The industry is facing two challenges as recommendation services continue to grow in the size:",{"type":17,"tag":25,"props":639,"children":640},{},[641],{"type":23,"value":642},"1. The model size exceeds hundreds of gigabytes and even reaches terabytes. The storage, training, and inference of large-scale feature vectors need to be handled.",{"type":17,"tag":25,"props":644,"children":645},{},[646],{"type":23,"value":647},"2. Models are trained on users' real-time behavior data in an incremental manner and updated dynamically. As features may emerge or disappear during training, storing feature vectors in a dense manner is inconvenient to record dynamic changes of features.",{"type":17,"tag":25,"props":649,"children":650},{},[651],{"type":23,"value":652},"To address these problems, MindRec 0.2 provides large-scale recommendation model training capabilities to train TB-level recommendation models using a single card. In addition, online training and dynamic feature capabilities are added to implement minute-level, end-to-end incremental model training and updates. 
These capabilities effectively support real-time model updates and rollouts.",{"type":17,"tag":25,"props":654,"children":655},{},[656],{"type":17,"tag":39,"props":657,"children":658},{},[659],{"type":23,"value":660},"6.2 TB-level model training on a single card",{"type":17,"tag":25,"props":662,"children":663},{},[664],{"type":23,"value":665},"MindRec trains ultra-large-scale recommendation network models using distributed feature caching, which is based on automatic parallelism. As shown in the following figure, multi-level feature caching (device - local host - remote host - SSD) is used to implement layer-by-layer storage separation and expansion of feature vectors, enabling TB-level models to be trained on a single acceleration card.",{"type":17,"tag":25,"props":667,"children":668},{},[669],{"type":17,"tag":53,"props":670,"children":672},{"alt":7,"src":671},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/3c6d0d2a8e6047859318cdfa5b24b333.png",[],{"type":17,"tag":25,"props":674,"children":675},{},[676,678],{"type":23,"value":677},"Usage example: ",{"type":17,"tag":160,"props":679,"children":682},{"href":680,"rel":681},"https://github.com/mindspore-lab/mindrec/tree/r0.2/models/wide_deep/",[164],[683],{"type":23,"value":680},{"type":17,"tag":25,"props":685,"children":686},{},[687],{"type":17,"tag":39,"props":688,"children":689},{},[690],{"type":23,"value":691},"6.3 Online training",{"type":17,"tag":25,"props":693,"children":694},{},[695],{"type":23,"value":696},"MindRec provides online training and offline training. 
Its end-to-end online training solution supports user programming in Python as well as incremental model import and export.",{"type":17,"tag":25,"props":698,"children":699},{},[700],{"type":23,"value":701},"Usage example:",{"type":17,"tag":25,"props":703,"children":704},{},[705],{"type":17,"tag":160,"props":706,"children":709},{"href":707,"rel":708},"https://github.com/mindspore-lab/mindrec/blob/r0.2/docs/online_learning/online_learning.md",[164],[710],{"type":23,"value":707},{"type":17,"tag":25,"props":712,"children":713},{},[714],{"type":17,"tag":53,"props":715,"children":717},{"alt":7,"src":716},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/b8074bb867a94a329e4a86f572a43602.png",[],{"type":17,"tag":25,"props":719,"children":720},{},[721],{"type":17,"tag":39,"props":722,"children":723},{},[724],{"type":23,"value":725},"6.4 Dynamic features",{"type":17,"tag":25,"props":727,"children":728},{},[729,731],{"type":23,"value":730},"Because features often change during continuous training, the system must be able to add and delete features. The MapParameter data type of MindSpore is used to express the hash type with support for feature addition and deletion and incremental training. 
",{"type":17,"tag":39,"props":732,"children":733},{},[734],{"type":23,"value":735},"Computing is performed on the device side to fully leverage the acceleration capabilities of the hardware, achieving higher performance than in a heterogeneous solution where the dynamic feature computing layer is on the host side.",{"type":17,"tag":25,"props":737,"children":738},{},[739],{"type":17,"tag":53,"props":740,"children":742},{"alt":7,"src":741},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/25fdef08e3cb49c884e702c366e6f35e.png",[],{"type":17,"tag":25,"props":744,"children":745},{},[746,748],{"type":23,"value":747},"MindRec: ",{"type":17,"tag":160,"props":749,"children":752},{"href":750,"rel":751},"https://github.com/mindspore-lab/mindrec",[164],[753],{"type":23,"value":750},{"type":17,"tag":25,"props":755,"children":756},{},[757],{"type":17,"tag":39,"props":758,"children":759},{},[760],{"type":23,"value":761},"Continuous Usability Improvement",{"type":17,"tag":25,"props":763,"children":764},{},[765],{"type":17,"tag":39,"props":766,"children":767},{},[768],{"type":23,"value":769},"7. The development suite integrates easy-to-use interfaces and cutting-edge algorithm models to facilitate AI development and innovation.",{"type":17,"tag":25,"props":771,"children":772},{},[773],{"type":23,"value":774},"MindSpore works with universities such as University of Science and Technology of China, Xi'an Jiaotong University, and Xidian University to build AI suites in fields such as computer vision (CV), natural language processing (NLP), audio, optical character recognition (OCR), and YOLO, integrating a large number of mainstream and cutting-edge algorithm models. 
These suites unify interface modules and reduce learning and development costs, allowing users to quickly develop and apply different deep learning models to solve specific problems.",{"type":17,"tag":25,"props":776,"children":777},{},[778],{"type":17,"tag":39,"props":779,"children":780},{},[781],{"type":23,"value":782},"7.1 MindCV",{"type":17,"tag":25,"props":784,"children":785},{},[786],{"type":23,"value":787},"MindCV is an open source CV suite based on MindSpore. It provides easy-to-use module interfaces (components such as data augmentation, model building, and optimizer) to further simplify the model building and training processes. With the Ascend+MindSpore solution, features such as hybrid precision, data sinking, and training strategies accelerate model training by up to 3 times while improving accuracy by up to 2%, facilitating CV application and innovation.",{"type":17,"tag":25,"props":789,"children":790},{},[791,793],{"type":23,"value":792},"MindCV: ",{"type":17,"tag":160,"props":794,"children":797},{"href":795,"rel":796},"https://github.com/mindspore-lab/mindcv",[164],[798],{"type":23,"value":795},{"type":17,"tag":25,"props":800,"children":801},{},[802],{"type":17,"tag":39,"props":803,"children":804},{},[805],{"type":23,"value":806},"7.2 MindNLP",{"type":17,"tag":25,"props":808,"children":809},{},[810],{"type":23,"value":811},"MindNLP is an all-domain NLP suite. It provides a large number of practical modules, such as classic datasets, classic model structures, and Transformer-based models, for each sub-domain of NLP. Built-in datasets include public datasets of common NLP sub-domains, such as machine translation, question answering, sequence labeling, text classification, and text generation. Huggingface weights can be imported. Pre-trained models for Chinese, such as CPM and GLM, are supported. MindNLP provides simple common NLP model structures, such as Seq2seqModel, Seq2VecModel, and PretrainedModel. 
Models can be quickly constructed using several lines of code.",{"type":17,"tag":25,"props":813,"children":814},{},[815,817],{"type":23,"value":816},"MindNLP: ",{"type":17,"tag":160,"props":818,"children":821},{"href":819,"rel":820},"https://github.com/mindspore-lab/mindnlp",[164],[822],{"type":23,"value":819},{"type":17,"tag":25,"props":824,"children":825},{},[826],{"type":17,"tag":39,"props":827,"children":828},{},[829],{"type":23,"value":830},"7.3 MindAudio",{"type":17,"tag":25,"props":832,"children":833},{},[834],{"type":23,"value":835},"MindAudio is an audio processing suite and algorithm library based on MindSpore. It provides common audio data processing interfaces and helps quickly set up and train audio deep learning algorithms to improve audio algorithm development efficiency and lower the threshold for audio algorithm development. MindAudio provides 50+ common data processing APIs (such as STFT, Spectrogram, and FBank), 5+ pre-trained models (DeepSpeech2 and Conformer for ASR tasks, FastSpeech2 and WaveGrad for TTS tasks, and voiceprint model ECAPA-TDNN). In addition, common dataset preprocessing APIs (AISHELL, LibriSpeech, VoxCeleb, and LJSpeech) are provided.",{"type":17,"tag":25,"props":837,"children":838},{},[839,841],{"type":23,"value":840},"MindAudio: ",{"type":17,"tag":160,"props":842,"children":845},{"href":843,"rel":844},"https://github.com/mindspore-lab/mindaudio",[164],[846],{"type":23,"value":843},{"type":17,"tag":25,"props":848,"children":849},{},[850],{"type":17,"tag":39,"props":851,"children":852},{},[853],{"type":23,"value":854},"7.4 MindOCR",{"type":17,"tag":25,"props":856,"children":857},{},[858],{"type":23,"value":859},"MindOCR is an OCR algorithm suite based on MindSpore. It provides simple APIs and mainstream SOTA models such as DBNet, DBNet++, CRNN, and SVTR, and supports inference using third-party models (ONNX). 
Pipelines are used to accelerate end-to-end inference, delivering 20% higher performance than open source projects in the industry. Simple and easy-to-use APIs and categorized model components cover the entire OCR model development process. You can flexibly set up and configure your own OCR models to enrich OCR applications.",{"type":17,"tag":25,"props":861,"children":862},{},[863,865],{"type":23,"value":864},"MindOCR: ",{"type":17,"tag":160,"props":866,"children":869},{"href":867,"rel":868},"https://github.com/mindspore-lab/mindocr",[164],[870],{"type":23,"value":867},{"type":17,"tag":25,"props":872,"children":873},{},[874],{"type":17,"tag":39,"props":875,"children":876},{},[877],{"type":23,"value":878},"7.5 MindYOLO",{"type":17,"tag":25,"props":880,"children":881},{},[882],{"type":23,"value":883},"MindYOLO is a suite of YOLO series algorithms. The suite unifies the implementation of various YOLO algorithm modules and provides common module APIs for data processing, model building, and optimizers to simplify model building and training processes. Currently, MindYOLO provides 6 basic models including YOLOv3, v4, v5, v7, v8, and YOLOX, which can be used for quick reproduction and migration.",{"type":17,"tag":25,"props":885,"children":886},{},[887,889],{"type":23,"value":888},"MindYOLO: ",{"type":17,"tag":160,"props":890,"children":893},{"href":891,"rel":892},"https://github.com/mindspore-lab/mindyolo",[164],[894],{"type":23,"value":891},{"type":17,"tag":25,"props":896,"children":897},{},[898],{"type":17,"tag":39,"props":899,"children":900},{},[901],{"type":23,"value":902},"8. The functionality and performance of dynamic graph mode are enhanced. Basic data types can be returned in static graph mode.",{"type":17,"tag":25,"props":904,"children":905},{},[906],{"type":23,"value":907},"Starting with MindSpore 2.0, dynamic graph (PyNative) mode has become the default mode of MindSpore. The network performance in dynamic graph mode is improved through multi-level pipelines. 
In the dynamic shape scenario, the automatic differentiation implementation is optimized, and the compilation-free operator capability of CANN is enabled, greatly reducing the reverse graph construction and operator compilation overhead. Currently, programming of various dynamic shape networks, such as voice, recommendation, and CV networks, is supported. The performance will be continuously optimized in the future.",{"type":17,"tag":25,"props":909,"children":910},{},[911],{"type":23,"value":912},"On the other hand, data types other than tensors and tuples were not well supported in static graph mode, and basic data types such as list, dict, scalar, and none could not be returned. In MindSpore 2.0, the syntax support is extended by the JIT Fallback feature. The top-level graph can return basic types (list, dict, scalar, and none) to better support Python syntax.",{"type":17,"tag":25,"props":914,"children":915},{},[916,918,925],{"type":23,"value":917},"For details, see ",{"type":17,"tag":160,"props":919,"children":922},{"href":920,"rel":921},"https://www.mindspore.cn/docs/en/r2.0/design/dynamic_graph_and_static_graph.html%23jit-fallback",[164],[923],{"type":23,"value":924},"https://www.mindspore.cn/docs/en/r2.0/design/dynamic_graph_and_static_graph.html#jit-fallback",{"type":23,"value":525},{"type":17,"tag":25,"props":927,"children":928},{},[929],{"type":17,"tag":39,"props":930,"children":931},{},[932],{"type":23,"value":933},"9. Full Support for Functional + Object-Oriented Programming Continuously Improves Network Expression Capabilities",{"type":17,"tag":25,"props":935,"children":936},{},[937],{"type":17,"tag":39,"props":938,"children":939},{},[940],{"type":23,"value":941},"9.1 Just-in-time compilation can be enabled with one line of code, improving usability.",{"type":17,"tag":25,"props":943,"children":944},{},[945],{"type":23,"value":946},"Current deep learning frameworks in the industry struggle to balance coding efficiency and execution performance. 
To address this issue, MindSpore proposes a new paradigm of functional + object-oriented programming.",{"type":17,"tag":25,"props":948,"children":949},{},[950],{"type":23,"value":951},"This new paradigm provides more flexible low-level interfaces, which makes the code simpler, easier to understand, and more usable. In high-order differential and scientific computing scenarios, this fusion programming paradigm can implement mathematical expressions more easily than the object-oriented programming paradigm, such as in PyTorch. Compared with pure functional programming, such as in JAX and functorch, the fusion programming paradigm provides simpler functional expressions. MindSpore uses the same automatic differentiation mechanism in AI and scientific computing scenarios and leverages expressions of fusion programming to bridge the gap between frameworks of different programming paradigms in the industry.",{"type":17,"tag":25,"props":953,"children":954},{},[955],{"type":23,"value":956},"AI+scientific computing frameworks in the industry",{"type":17,"tag":25,"props":958,"children":959},{},[960],{"type":23,"value":961},"PyTorch+functorch",{"type":17,"tag":25,"props":963,"children":964},{},[965],{"type":23,"value":966},"JAX+(Haiku/Flax)",{"type":17,"tag":25,"props":968,"children":969},{},[970],{"type":23,"value":971},"MindSpore",{"type":17,"tag":25,"props":973,"children":974},{},[975],{"type":23,"value":976},"Solution",{"type":17,"tag":25,"props":978,"children":979},{},[980],{"type":23,"value":981},"AI-centric",{"type":17,"tag":25,"props":983,"children":984},{},[985],{"type":23,"value":986},"Basic framework + 
suite",{"type":17,"tag":25,"props":988,"children":989},{},[990],{"type":23,"value":991},"Science-centric",{"type":17,"tag":25,"props":993,"children":994},{},[995],{"type":23,"value":986},{"type":17,"tag":25,"props":997,"children":998},{},[999],{"type":23,"value":1000},"DualCore(AI-Numerical)",{"type":17,"tag":25,"props":1002,"children":1003},{},[1004],{"type":23,"value":1005},"Framework",{"type":17,"tag":25,"props":1007,"children":1008},{},[1009],{"type":23,"value":1010},"Advantages",{"type":17,"tag":25,"props":1012,"children":1013},{},[1014],{"type":23,"value":1015},"No impact on the original AI programming paradigm",{"type":17,"tag":25,"props":1017,"children":1018},{},[1019],{"type":23,"value":1020},"No impact on the original functional programming paradigm",{"type":17,"tag":25,"props":1022,"children":1023},{},[1024],{"type":23,"value":1025},"Native fused AI and functional programming paradigms that use the same differentiation mechanism",{"type":17,"tag":25,"props":1027,"children":1028},{},[1029],{"type":23,"value":1030},"Disadvantages",{"type":17,"tag":25,"props":1032,"children":1033},{},[1034],{"type":23,"value":1035},"The derivation logic of the functional programming paradigm and that of the tensor-based programming paradigm are isolated from each other.",{"type":17,"tag":25,"props":1037,"children":1038},{},[1039],{"type":23,"value":1040},"The logic of the original module needs to be converted through an API.",{"type":17,"tag":25,"props":1042,"children":1043},{},[1044,1046,1051],{"type":23,"value":1045},"The encapsulation management of the ",{"type":17,"tag":39,"props":1047,"children":1048},{},[1049],{"type":23,"value":1050},"nn",{"type":23,"value":1052}," object is defective because of the pure functional programming paradigm of JAX.",{"type":17,"tag":25,"props":1054,"children":1055},{},[1056],{"type":23,"value":1057},"/",{"type":17,"tag":25,"props":1059,"children":1060},{},[1061],{"type":23,"value":1062},"The new paradigm of fusion programming retains the capability of MindSpore 
computing graph compilation acceleration. As shown in the following figure, JIT compilation is performed on the outermost function. The module is accelerated with only one line of code, achieving a balance of usability and performance.",{"type":17,"tag":25,"props":1064,"children":1065},{},[1066],{"type":17,"tag":53,"props":1067,"children":1069},{"alt":7,"src":1068},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/c6948abc70054ff38235ee875536af23.png",[],{"type":17,"tag":25,"props":1071,"children":1072},{},[1073,1074,1080],{"type":23,"value":917},{"type":17,"tag":160,"props":1075,"children":1078},{"href":1076,"rel":1077},"https://www.mindspore.cn/tutorials/en/r2.0/beginner/quick_start.html",[164],[1079],{"type":23,"value":1076},{"type":23,"value":525},{"type":17,"tag":25,"props":1082,"children":1083},{},[1084],{"type":17,"tag":39,"props":1085,"children":1086},{},[1087],{"type":23,"value":1088},"9.2 Functional operator invocation is fully supported to further improve network expression capability.",{"type":17,"tag":93,"props":1090,"children":1091},{},[1092,1097],{"type":17,"tag":97,"props":1093,"children":1094},{},[1095],{"type":23,"value":1096},"More than 100 functional APIs, more than 100 tensor APIs, more than 30 computing module APIs of nn, and more than 100 operator primitives are added. Core functions such as the pooling series, sampling interpolation series, linear algebra series, and FFT series APIs are added to further improve the network expression capability.",{"type":17,"tag":97,"props":1098,"children":1099},{},[1100],{"type":23,"value":1101},"Functions of different hardware are unified. 
APIs that support the Ascend AI Processor, GPU, or CPU account for more than 90% of the total APIs.",{"type":17,"tag":25,"props":1103,"children":1104},{},[1105],{"type":23,"value":1106},"New operators are as follows:",{"type":17,"tag":25,"props":1108,"children":1109},{},[1110],{"type":17,"tag":39,"props":1111,"children":1112},{},[1113],{"type":23,"value":1114},"Core Function Category",{"type":17,"tag":25,"props":1116,"children":1117},{},[1118],{"type":17,"tag":39,"props":1119,"children":1120},{},[1121],{"type":23,"value":1122},"API",{"type":17,"tag":25,"props":1124,"children":1125},{},[1126],{"type":17,"tag":39,"props":1127,"children":1128},{},[1129],{"type":23,"value":1130},"Operator",{"type":17,"tag":25,"props":1132,"children":1133},{},[1134],{"type":17,"tag":39,"props":1135,"children":1136},{},[1137],{"type":23,"value":1138},"Platform",{"type":17,"tag":25,"props":1140,"children":1141},{},[1142],{"type":23,"value":1143},"Pooling series",{"type":17,"tag":25,"props":1145,"children":1146},{},[1147],{"type":23,"value":1148},"ops.adaptive_avg_pool",{"type":17,"tag":25,"props":1150,"children":1151},{},[1152],{"type":23,"value":1153},"AdaptiveAvgPool",{"type":17,"tag":25,"props":1155,"children":1156},{},[1157],{"type":23,"value":1158},"Ascend/GPU/CPU",{"type":17,"tag":25,"props":1160,"children":1161},{},[1162],{"type":23,"value":1163},"ops.max_unpool",{"type":17,"tag":25,"props":1165,"children":1166},{},[1167],{"type":23,"value":1168},"MaxUnpool",{"type":17,"tag":25,"props":1170,"children":1171},{},[1172],{"type":23,"value":1158},{"type":17,"tag":25,"props":1174,"children":1175},{},[1176],{"type":23,"value":1177},"ops.max_pool3d",{"type":17,"tag":25,"props":1179,"children":1180},{},[1181],{"type":23,"value":1182},"MaxPool3DWithArgmax",{"type":17,"tag":25,"props":1184,"children":1185},{},[1186],{"type":23,"value":1158},{"type":17,"tag":25,"props":1188,"children":1189},{},[1190],{"type":23,"value":1191},"ops.fractional_max_pool3d",{"type":17,"tag":25,"props":1193,"children
":1194},{},[1195],{"type":23,"value":1196},"FractionalMaxPool3DWithFixedKsize",{"type":17,"tag":25,"props":1198,"children":1199},{},[1200],{"type":23,"value":1158},{"type":17,"tag":25,"props":1202,"children":1203},{},[1204],{"type":23,"value":1205},"Sampling and interpolation series",{"type":17,"tag":25,"props":1207,"children":1208},{},[1209],{"type":23,"value":1210},"ops.grid_sample",{"type":17,"tag":25,"props":1212,"children":1213},{},[1214],{"type":23,"value":1215},"GridSampler",{"type":17,"tag":25,"props":1217,"children":1218},{},[1219],{"type":23,"value":1158},{"type":17,"tag":25,"props":1221,"children":1222},{},[1223],{"type":23,"value":1224},"ops.affine_grid",{"type":17,"tag":25,"props":1226,"children":1227},{},[1228],{"type":23,"value":1229},"AffineGrid",{"type":17,"tag":25,"props":1231,"children":1232},{},[1233],{"type":23,"value":1158},{"type":17,"tag":25,"props":1235,"children":1236},{},[1237],{"type":23,"value":1238},"ops.interpolate",{"type":17,"tag":25,"props":1240,"children":1241},{},[1242],{"type":23,"value":1243},"Nearest/Bilinear/Bicubic",{"type":17,"tag":25,"props":1245,"children":1246},{},[1247],{"type":23,"value":1158},{"type":17,"tag":25,"props":1249,"children":1250},{},[1251],{"type":23,"value":1252},"Linear algebra 
series",{"type":17,"tag":25,"props":1254,"children":1255},{},[1256],{"type":23,"value":1257},"ops.matrix_diag",{"type":17,"tag":25,"props":1259,"children":1260},{},[1261],{"type":23,"value":1262},"MatrixDiagV3",{"type":17,"tag":25,"props":1264,"children":1265},{},[1266],{"type":23,"value":1158},{"type":17,"tag":25,"props":1268,"children":1269},{},[1270],{"type":23,"value":1271},"ops.matrix_diag_part",{"type":17,"tag":25,"props":1273,"children":1274},{},[1275],{"type":23,"value":1276},"MatrixDiagPartV3",{"type":17,"tag":25,"props":1278,"children":1279},{},[1280],{"type":23,"value":1158},{"type":17,"tag":25,"props":1282,"children":1283},{},[1284],{"type":23,"value":1285},"ops.matrix_set_diag",{"type":17,"tag":25,"props":1287,"children":1288},{},[1289],{"type":23,"value":1290},"MatrixSetDiagV3",{"type":17,"tag":25,"props":1292,"children":1293},{},[1294],{"type":23,"value":1158},{"type":17,"tag":25,"props":1296,"children":1297},{},[1298],{"type":23,"value":1299},"ops.matrix_band_part",{"type":17,"tag":25,"props":1301,"children":1302},{},[1303],{"type":23,"value":1304},"MatrixBandPart",{"type":17,"tag":25,"props":1306,"children":1307},{},[1308],{"type":23,"value":1158},{"type":17,"tag":25,"props":1310,"children":1311},{},[1312],{"type":23,"value":1313},"ops.matrix_inverse",{"type":17,"tag":25,"props":1315,"children":1316},{},[1317],{"type":23,"value":1318},"MatrixInverse",{"type":17,"tag":25,"props":1320,"children":1321},{},[1322],{"type":23,"value":1158},{"type":17,"tag":25,"props":1324,"children":1325},{},[1326],{"type":23,"value":1327},"ops.matrix_power",{"type":17,"tag":25,"props":1329,"children":1330},{},[1331],{"type":23,"value":1332},"MatrixPower",{"type":17,"tag":25,"props":1334,"children":1335},{},[1336],{"type":23,"value":1337},"Ascend/ 
CPU",{"type":17,"tag":25,"props":1339,"children":1340},{},[1341],{"type":23,"value":1342},"ops.matrix_solve",{"type":17,"tag":25,"props":1344,"children":1345},{},[1346],{"type":23,"value":1347},"MatrixSolve",{"type":17,"tag":25,"props":1349,"children":1350},{},[1351],{"type":23,"value":1337},{"type":17,"tag":25,"props":1353,"children":1354},{},[1355],{"type":23,"value":1356},"ops.geqrf",{"type":17,"tag":25,"props":1358,"children":1359},{},[1360],{"type":23,"value":1361},"Geqrf",{"type":17,"tag":25,"props":1363,"children":1364},{},[1365],{"type":23,"value":1158},{"type":17,"tag":25,"props":1367,"children":1368},{},[1369],{"type":23,"value":1370},"ops.svd",{"type":17,"tag":25,"props":1372,"children":1373},{},[1374],{"type":23,"value":1375},"Svd",{"type":17,"tag":25,"props":1377,"children":1378},{},[1379],{"type":23,"value":1380},"GPU/CPU",{"type":17,"tag":25,"props":1382,"children":1383},{},[1384],{"type":23,"value":1385},"ops.ormqr",{"type":17,"tag":25,"props":1387,"children":1388},{},[1389],{"type":23,"value":1390},"Ormqr",{"type":17,"tag":25,"props":1392,"children":1393},{},[1394],{"type":23,"value":1395},"GPU",{"type":17,"tag":25,"props":1397,"children":1398},{},[1399],{"type":23,"value":1400},"ops.qr",{"type":17,"tag":25,"props":1402,"children":1403},{},[1404],{"type":23,"value":1405},"Qr",{"type":17,"tag":25,"props":1407,"children":1408},{},[1409],{"type":23,"value":1395},{"type":17,"tag":25,"props":1411,"children":1412},{},[1413],{"type":23,"value":1414},"FFT 
series",{"type":17,"tag":25,"props":1416,"children":1417},{},[1418],{"type":23,"value":1419},"FFT/FFT2D/FFT3D/IFFT/IFFT2D/IFFT3D/IRFFT/IRFFT2D/IRFFT3D/RFFT/RFFT2D/RFFT3D",{"type":17,"tag":25,"props":1421,"children":1422},{},[1423],{"type":23,"value":1424},"FFTWithSize",{"type":17,"tag":25,"props":1426,"children":1427},{},[1428],{"type":23,"value":1158},{"type":17,"tag":25,"props":1430,"children":1431},{},[1432,1433,1439],{"type":23,"value":917},{"type":17,"tag":160,"props":1434,"children":1437},{"href":1435,"rel":1436},"https://gitee.com/mindspore/docs/blob/r2.0/resource/api_updates/func_api_updates_en.md",[164],[1438],{"type":23,"value":1435},{"type":23,"value":525},{"type":17,"tag":25,"props":1441,"children":1442},{},[1443],{"type":17,"tag":39,"props":1444,"children":1445},{},[1446],{"type":23,"value":1447},"10. MSAdapter seamlessly adapts models of third-party ecosystems to MindSpore, improving migration efficiency.",{"type":17,"tag":25,"props":1449,"children":1450},{},[1451,1453,1458,1460,1465,1466,1471,1472,1477,1479,1484],{"type":23,"value":1452},"MSAdapter is a MindSpore API adaptation tool developed by the Pengcheng OpenI Community. It allows native PyTorch code to run efficiently in an Ascend-based MindSpore environment. 
The MSAdapter v0.1 demo has been released with support for more than 1000 network expression and data processing APIs, including ",{"type":17,"tag":39,"props":1454,"children":1455},{},[1456],{"type":23,"value":1457},"torch",{"type":23,"value":1459},", ",{"type":17,"tag":39,"props":1461,"children":1462},{},[1463],{"type":23,"value":1464},"torch.nn",{"type":23,"value":1459},{"type":17,"tag":39,"props":1467,"children":1468},{},[1469],{"type":23,"value":1470},"torch.nn.functional",{"type":23,"value":1459},{"type":17,"tag":39,"props":1473,"children":1474},{},[1475],{"type":23,"value":1476},"tensor",{"type":23,"value":1478},", and ",{"type":17,"tag":39,"props":1480,"children":1481},{},[1482],{"type":23,"value":1483},"torch.utils.data",{"type":23,"value":1485},", as well as torchvision APIs. In addition, more than 70 mainstream PyTorch models have been verified for migration. You can migrate models in CV and NLP domains with only a few adaptations. The interface learning and script migration costs are reduced by more than 90%.",{"type":17,"tag":25,"props":1487,"children":1488},{},[1489],{"type":17,"tag":53,"props":1490,"children":1492},{"alt":7,"src":1491},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/0f60f31a52e3491398db91b00a7eb66d.png",[],{"type":17,"tag":25,"props":1494,"children":1495},{},[1496],{"type":23,"value":1497},"For example, to port the PyTorch code of AlexNet to MindSpore using MSAdapter:",{"type":17,"tag":93,"props":1499,"children":1500},{},[1501],{"type":17,"tag":97,"props":1502,"children":1503},{},[1504],{"type":23,"value":1505},"Change the packages to be 
imported.",{"type":17,"tag":25,"props":1507,"children":1508},{},[1509],{"type":17,"tag":53,"props":1510,"children":1512},{"alt":7,"src":1511},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/de40e5b98dcf4cd39640d29fa5386acf.png",[],{"type":17,"tag":93,"props":1514,"children":1516},{"start":1515},2,[1517],{"type":17,"tag":97,"props":1518,"children":1519},{},[1520],{"type":23,"value":1521},"Data processing APIs are compatible.",{"type":17,"tag":25,"props":1523,"children":1524},{},[1525],{"type":17,"tag":53,"props":1526,"children":1528},{"alt":7,"src":1527},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/0fcdad3bab8f4fcd8433a482a289fd1d.png",[],{"type":17,"tag":93,"props":1530,"children":1532},{"start":1531},3,[1533],{"type":17,"tag":97,"props":1534,"children":1535},{},[1536],{"type":23,"value":1537},"Model definition APIs are compatible.",{"type":17,"tag":25,"props":1539,"children":1540},{},[1541],{"type":17,"tag":53,"props":1542,"children":1544},{"alt":7,"src":1543},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/2ef83661c2524548a35459c4266aedf0.png",[],{"type":17,"tag":93,"props":1546,"children":1548},{"start":1547},4,[1549],{"type":17,"tag":97,"props":1550,"children":1551},{},[1552],{"type":23,"value":1553},"Modify the model training code to use MindSpore APIs.",{"type":17,"tag":25,"props":1555,"children":1556},{},[1557],{"type":17,"tag":53,"props":1558,"children":1560},{"alt":7,"src":1559},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/ac87035b36d34dcfac85e1e18f66c7d2.png",[],{"type":17,"tag":25,"props":1562,"children":1563},{},[1564],{"type":23,"value":1565},"The overall performance will be further improved, and more PyTorch APIs will be supported in the future.",{"type":17,"tag":25,"props":1567,"children":1568},{},[1569,1571],{"type":23,"value":1570},"MSAdapter: 
",{"type":17,"tag":160,"props":1572,"children":1575},{"href":1573,"rel":1574},"https://openi.pcl.ac.cn/OpenI/MSAdapter",[164],[1576],{"type":23,"value":1573},{"type":17,"tag":25,"props":1578,"children":1579},{},[1580],{"type":17,"tag":39,"props":1581,"children":1582},{},[1583],{"type":23,"value":1584},"11. Tutorials and documents are upgraded.",{"type":17,"tag":93,"props":1586,"children":1587},{},[1588],{"type":17,"tag":97,"props":1589,"children":1590},{},[1591],{"type":23,"value":1592},"MindSpore provides multi-domain application tutorials to help beginners quickly get started with MindSpore. Code and notebook of each tutorial can be downloaded in one click, facilitating quick deployment and experience.",{"type":17,"tag":25,"props":1594,"children":1595},{},[1596,1598],{"type":23,"value":1597},"MindSpore Tutorials: ",{"type":17,"tag":160,"props":1599,"children":1602},{"href":1600,"rel":1601},"https://www.mindspore.cn/tutorials/en/master/index.html",[164],[1603],{"type":23,"value":1600},{"type":17,"tag":93,"props":1605,"children":1606},{"start":1515},[1607],{"type":17,"tag":97,"props":1608,"children":1609},{},[1610],{"type":23,"value":1611},"For users familiar with other AI frameworks, a migration guide is provided, covering detailed instructions, typical differences about other frameworks, and precautions.",{"type":17,"tag":25,"props":1613,"children":1614},{},[1615,1617],{"type":23,"value":1616},"Migration guide: ",{"type":17,"tag":160,"props":1618,"children":1621},{"href":1619,"rel":1620},"https://www.mindspore.cn/docs/en/master/migration_guide/overview.html",[164],[1622],{"type":23,"value":1619},{"type":17,"tag":93,"props":1624,"children":1625},{"start":1531},[1626],{"type":17,"tag":97,"props":1627,"children":1628},{},[1629],{"type":23,"value":1630},"A knowledge map is composed to guide you through MindSpore documents, covering quick start, in-depth development, and application 
extension.",{"type":17,"tag":25,"props":1632,"children":1633},{},[1634,1636],{"type":23,"value":1635},"Knowledge map: ",{"type":17,"tag":160,"props":1637,"children":1640},{"href":1638,"rel":1639},"https://www.mindspore.cn/resources/knowledgeMap/en",[164],[1641],{"type":23,"value":1638},{"type":17,"tag":25,"props":1643,"children":1644},{},[1645],{"type":17,"tag":39,"props":1646,"children":1647},{},[1648],{"type":23,"value":1649},"12. The flexibility of dataset pipelines and usability of MindRecord are improved.",{"type":17,"tag":25,"props":1651,"children":1652},{},[1653],{"type":17,"tag":39,"props":1654,"children":1655},{},[1656],{"type":23,"value":1657},"12.1 The data processing pipeline now supports all Python data types.",{"type":17,"tag":25,"props":1659,"children":1660},{},[1661],{"type":23,"value":1662},"As data processing scenarios become increasingly complex, more powerful data structures are required for data organization and management. In MindSpore 2.0, support for Python native dictionaries is added to the dataset processing pipeline. You can store data in a dictionary and send the dictionary to the next pipeline node for processing. 
You can also use the dict type to manage various Python objects and data and change data in dictionaries at any time during data processing, simplifying data usage and processing.",{"type":17,"tag":25,"props":1664,"children":1665},{},[1666,1668,1674],{"type":23,"value":1667},"For details, see: ",{"type":17,"tag":160,"props":1669,"children":1672},{"href":1670,"rel":1671},"https://www.mindspore.cn/tutorials/en/r2.0/advanced/dataset/python_objects.html",[164],[1673],{"type":23,"value":1670},{"type":23,"value":525},{"type":17,"tag":25,"props":1676,"children":1677},{},[1678],{"type":17,"tag":39,"props":1679,"children":1680},{},[1681],{"type":23,"value":1682},"12.2 The functionality and performance of MindRecord are enhanced.",{"type":17,"tag":25,"props":1684,"children":1685},{},[1686],{"type":17,"tag":39,"props":1687,"children":1688},{},[1689],{"type":23,"value":1690},"12.2.1 FileReader can obtain the schema information and number of samples of data in the MindRecord format.",{"type":17,"tag":25,"props":1692,"children":1693},{},[1694,1696,1701,1703,1708],{"type":23,"value":1695},"The ",{"type":17,"tag":39,"props":1697,"children":1698},{},[1699],{"type":23,"value":1700},"schema()",{"type":23,"value":1702}," interface is added to FileReader to obtain the schema of MindRecord data, helping view and analyze the MindRecord format (data field names, data types, and data dimensions). 
The ",{"type":17,"tag":39,"props":1704,"children":1705},{},[1706],{"type":23,"value":1707},"len()",{"type":23,"value":1709}," interface is added to obtain the number of samples contained in MindRecord data, further improving usability.",{"type":17,"tag":25,"props":1711,"children":1712},{},[1713],{"type":17,"tag":39,"props":1714,"children":1715},{},[1716],{"type":23,"value":1717},"12.2.2 MindRecord write speed increases by 10x.",{"type":17,"tag":25,"props":1719,"children":1720},{},[1721,1723,1728],{"type":23,"value":1722},"MindSpore 2.0 optimizes the performance of the MindRecord write API ",{"type":17,"tag":39,"props":1724,"children":1725},{},[1726],{"type":23,"value":1727},"FileWriter",{"type":23,"value":1729}," through concurrent write at the Python layer, optimization of data transfer from the Python layer to the C++ layer, and multi-thread concurrent conversion at the C++ layer. As a result, the write performance of MindRecord is improved by 10 times.",{"type":17,"tag":25,"props":1731,"children":1732},{},[1733,1735,1740],{"type":23,"value":1734},"Take the ImageNet dataset (1,281,167 samples, 140 GB) as an example. (Data stored on an NVMe SSD. 
",{"type":17,"tag":39,"props":1736,"children":1737},{},[1738],{"type":23,"value":1739},"shard_num",{"type":23,"value":1741}," is 16)",{"type":17,"tag":25,"props":1743,"children":1744},{},[1745],{"type":23,"value":1746},"parallel_writer",{"type":17,"tag":25,"props":1748,"children":1749},{},[1750],{"type":23,"value":1751},"Time Required Before Optimization",{"type":17,"tag":25,"props":1753,"children":1754},{},[1755],{"type":23,"value":1756},"Time Required After Optimization",{"type":17,"tag":25,"props":1758,"children":1759},{},[1760],{"type":23,"value":1761},"False",{"type":17,"tag":25,"props":1763,"children":1764},{},[1765],{"type":23,"value":1766},"118 minutes",{"type":17,"tag":25,"props":1768,"children":1769},{},[1770],{"type":23,"value":1771},"13 minutes and 27 seconds",{"type":17,"tag":25,"props":1773,"children":1774},{},[1775],{"type":23,"value":1776},"True",{"type":17,"tag":25,"props":1778,"children":1779},{},[1780],{"type":23,"value":1781},"78 minutes",{"type":17,"tag":25,"props":1783,"children":1784},{},[1785],{"type":23,"value":1786},"7 minutes and 30 seconds",{"type":17,"tag":25,"props":1788,"children":1789},{},[1790],{"type":17,"tag":39,"props":1791,"children":1792},{},[1793],{"type":23,"value":1794},"12.2.3 Loading of ultra-large datasets (5 million samples or more) is optimized to reduce the required memory by 40%.",{"type":17,"tag":25,"props":1796,"children":1797},{},[1798],{"type":23,"value":1799},"Training data is massive in the foundation model training scenario. During data pre-loading, the overhead of obtaining indexes is huge and is limited by memory. In MindSpore 2.0, the lazy loading mode of MindRecord is optimized to simplify data structure loading. Only necessary metadata (sample IDs, sample start offsets, and sample end offsets) is loaded. 
This mode is suitable for large model training and multi-modal data loading scenarios.",{"type":17,"tag":25,"props":1801,"children":1802},{},[1803],{"type":17,"tag":39,"props":1804,"children":1805},{},[1806],{"type":23,"value":1807},"13. The error reporting mechanism is optimized to provide systematic guidance, simplifying problem solving.",{"type":17,"tag":25,"props":1809,"children":1810},{},[1811],{"type":23,"value":1812},"In MindSpore 2.0, the structure and content of error information in multiple error scenarios are optimized. The optimized error information is more readable, helping users quickly classify and understand the error.",{"type":17,"tag":25,"props":1814,"children":1815},{},[1816],{"type":23,"value":1817},"Take the error reported in the Ascend environment as an example.",{"type":17,"tag":25,"props":1819,"children":1820},{},[1821],{"type":23,"value":1822},"The rank table solution is used to start parallel training. The training fails due to a faulty configuration file. The error information generated by the Ascend hardware and MindSpore is mixed, and the error and solution are unclear.",{"type":17,"tag":25,"props":1824,"children":1825},{},[1826],{"type":17,"tag":53,"props":1827,"children":1829},{"alt":7,"src":1828},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/e6f3141b86804aa4986e2cb7d25b3164.png",[],{"type":17,"tag":25,"props":1831,"children":1832},{},[1833],{"type":23,"value":1834},"The optimized information displays Ascend and MindSpore errors separately, gives detailed causes of errors, and describes common error 
codes.",{"type":17,"tag":25,"props":1836,"children":1837},{},[1838],{"type":17,"tag":53,"props":1839,"children":1841},{"alt":7,"src":1840},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/e9d2cfa9a3eb4a249b35c4cbd7f81ff9.png",[],{"type":17,"tag":25,"props":1843,"children":1844},{},[1845,1846,1852],{"type":23,"value":917},{"type":17,"tag":160,"props":1847,"children":1850},{"href":1848,"rel":1849},"https://www.mindspore.cn/tutorials/experts/en/r2.0/debug/function_debug.html",[164],[1851],{"type":23,"value":1848},{"type":23,"value":525},{"type":17,"tag":25,"props":1854,"children":1855},{},[1856],{"type":17,"tag":39,"props":1857,"children":1858},{},[1859],{"type":23,"value":1860},"14. MindSpore Dev Toolkit supports API mapping scanning and provides a VSCode extension for intelligent code completion.",{"type":17,"tag":25,"props":1862,"children":1863},{},[1864],{"type":17,"tag":39,"props":1865,"children":1866},{},[1867],{"type":23,"value":1868},"14.1 Mappable APIs throughout a file or project can be scanned with one click.",{"type":17,"tag":25,"props":1870,"children":1871},{},[1872],{"type":23,"value":1873},"MindSpore Dev Toolkit is a development plug-in that supports quick search of API mappings inside an IDE. To further simplify model migration, the API mapping scanning function is added to MindSpore Dev Toolkit, allowing you to obtain all PyTorch APIs in your code that can directly map to MindSpore APIs and view API documents. Scanning can be performed on a file or throughout a project to provide summarized API mappings.",{"type":17,"tag":25,"props":1875,"children":1876},{},[1877],{"type":17,"tag":39,"props":1878,"children":1879},{},[1880],{"type":23,"value":1881},"14.2 A VSCode extension is provided to enable intelligent code completion.",{"type":17,"tag":25,"props":1883,"children":1884},{},[1885],{"type":23,"value":1886},"Besides PyCharm, MindSpore Dev Toolkit also launches a VSCode extension to help simplify multi-platform development. 
The extension helps increase coding efficiency by 30% with a completion accuracy of 80%.",{"type":17,"tag":25,"props":1888,"children":1889},{},[1890],{"type":17,"tag":53,"props":1891,"children":1893},{"alt":7,"src":1892},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/23262e35e7674965920e94efd89fc8f4.png",[],{"type":17,"tag":25,"props":1895,"children":1896},{},[1897,1898,1904],{"type":23,"value":917},{"type":17,"tag":160,"props":1899,"children":1902},{"href":1900,"rel":1901},"https://gitee.com/mindspore/ide-plugin",[164],[1903],{"type":23,"value":1900},{"type":23,"value":525},{"type":17,"tag":25,"props":1906,"children":1907},{},[1908],{"type":17,"tag":39,"props":1909,"children":1910},{},[1911],{"type":23,"value":1912},"Scientific Computing",{"type":17,"tag":25,"props":1914,"children":1915},{},[1916],{"type":17,"tag":39,"props":1917,"children":1918},{},[1919],{"type":23,"value":1920},"15. Use MindSpore Flow to simulate fluid with efficiency and simplicity.",{"type":17,"tag":25,"props":1922,"children":1923},{},[1924],{"type":23,"value":1925},"Fluid mechanics is closely related to the R&D of aerospace, marine equipment, and energy and electricity. However, traditional computational fluid mechanics faces problems such as complex meshing, high computational dependency, and inability to balance accuracy and performance. These challenges bring new opportunities for AI+scientific computing. MindFlow is an AI fluid simulation toolkit based on MindSpore. It provides AI fluid simulation driven by physics, data, and physics-data fusion and end-to-end differentiable CFD solvers. MindFlow provides 14 cases in the fluid dynamics field and 8 network models to fully explore the fitting capability of neural networks. 
MindFlow aims to build efficient and accurate AI flow field simulation tools to accelerate model development and meet related scientific research and engineering requirements.",{"type":17,"tag":25,"props":1927,"children":1928},{},[1929],{"type":17,"tag":53,"props":1930,"children":1932},{"alt":7,"src":1931},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/d6cbaf83a2de4c339f0ec4dcdb5a86ff.png",[],{"type":17,"tag":25,"props":1934,"children":1935},{},[1936],{"type":23,"value":1937},"The AI simulation model library (ModelZoo) implements physics-driven, data-driven, and physics-data fusion-driven AI fluid simulations. Differentiable CFD solvers, open-source datasets, and governing equations are provided to implement classical flow and fluid simulation in various application fields.",{"type":17,"tag":25,"props":1939,"children":1940},{},[1941],{"type":17,"tag":39,"props":1942,"children":1943},{},[1944],{"type":23,"value":1945},"15.1 Physics-driven AI fluid simulation",{"type":17,"tag":25,"props":1947,"children":1948},{},[1949],{"type":23,"value":1950},"Physics-driven AI fluid simulation is performed based on physics-informed neural networks (PINNs). MindFlow provides easy-to-use APIs for physics-driven AI fluid simulation and supports symbolic partial differential equations defined using SymPy, efficient geometric sampling, and rich neural network architectures, facilitating the quick solution of fluid partial differential equations.",{"type":17,"tag":25,"props":1952,"children":1953},{},[1954],{"type":17,"tag":39,"props":1955,"children":1956},{},[1957],{"type":23,"value":1958},"15.2 Data-driven AI fluid simulation",{"type":17,"tag":25,"props":1960,"children":1961},{},[1962],{"type":23,"value":1963},"Data-driven AI fluid simulation uses AI methods to reveal the associations among flow field data, thereby implementing intelligent simulation of flow fields. 
MindFlow released the first industrial fluid simulation model DongFang YuFeng and the related dataset. You can call related APIs to implement model training and fast flow field simulation. In addition, MindFlow provides multiple neural network model libraries, such as Fourier neural operators, Koopman neural operators, and Vision Transformer, as well as related case data.",{"type":17,"tag":25,"props":1965,"children":1966},{},[1967],{"type":17,"tag":39,"props":1968,"children":1969},{},[1970],{"type":23,"value":1971},"15.3 Physics-data fusion-driven AI fluid simulation",{"type":17,"tag":25,"props":1973,"children":1974},{},[1975],{"type":23,"value":1976},"Physics information and data are fused to implement AI fluid simulation, resulting in less data dependency and stronger generalization capability. PDE-Net is a typical physics-data fusion-driven AI fluid simulation model. MindFlow provides related interfaces. You can learn convolution kernels to approximate differential operators and identify observed partial differential equations.",{"type":17,"tag":25,"props":1978,"children":1979},{},[1980],{"type":17,"tag":39,"props":1981,"children":1982},{},[1983],{"type":23,"value":1984},"15.4 End-to-end differentiable solver",{"type":17,"tag":25,"props":1986,"children":1987},{},[1988],{"type":23,"value":1989},"MindFlow implements the traditional compressible fluid solution process based on the MindSpore framework and launches the end-to-end differentiable solver MindSpore Flow CFD. The solver supports WENO5 reconstruction, Rusanov fluxes, Runge-Kutta methods, and multiple boundary conditions to meet the requirements of basic flows such as shock tubes and the 2D Riemann 
problem.",{"type":17,"tag":25,"props":1991,"children":1992},{},[1993,1994,2000],{"type":23,"value":917},{"type":17,"tag":160,"props":1995,"children":1998},{"href":1996,"rel":1997},"https://gitee.com/mindspore/mindscience/tree/master/MindFlow",[164],[1999],{"type":23,"value":1996},{"type":23,"value":525},{"type":17,"tag":25,"props":2002,"children":2003},{},[2004],{"type":17,"tag":39,"props":2005,"children":2006},{},[2007],{"type":23,"value":2008},"16. MindSPONGE 1.0 is released with upgraded architecture and support for 20+ SOTA computational biology models.",{"type":17,"tag":25,"props":2010,"children":2011},{},[2012],{"type":23,"value":2013},"MindSPONGE 1.0, an AI biology suite based on MindSpore 2.0, is released with upgraded architecture and more than 20 mainstream models to cover the entire drug R&D process, including industry and self-developed models, such as MEGA-Protein for protein structure prediction, ESM-IF for protein design, Pafnucy for molecular property prediction, MEGA-EvoGen for MSA generation, and MEGA-Assessment for structural quality assessment.",{"type":17,"tag":25,"props":2015,"children":2016},{},[2017],{"type":17,"tag":53,"props":2018,"children":2020},{"alt":7,"src":2019},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/d7d2896d9cca4556831495a02c916000.png",[],{"type":17,"tag":25,"props":2022,"children":2023},{},[2024],{"type":23,"value":2025},"Models provided by MindSPONGE 1.0 cover molecular characterization, structure prediction, property prediction, molecular design, and basic models. In the drug target and biological marker selection phases, MindSPONGE provides models such as MEGA-Fold and AlphaFold Multimer for molecular structure prediction and ProteinMPNN and ColabDesign for protein design. In the lead compound determination phase, MindSPONGE provides models for molecular characterization and basic models, such as MolCT, SchNet, PhysNet, GROVER, and MG-BERT. 
In the active compound screening phase, MindSPONGE provides models for molecular property prediction, such as GraphDTA and Pafnucy.",{"type":17,"tag":25,"props":2027,"children":2028},{},[2029],{"type":23,"value":2030},"Earlier versions of MindSPONGE integrated the protein structure prediction model MEGA-Fold. In version 1.0, MindSPONGE released the MSA generation enhancement tool MEGA-EvoGen and the protein structure comparison assessment tool MEGA-Assessment to address the limitation that AlphaFold 2 cannot make accurate predictions in MSA-poor scenarios such as \"orphan sequences\", highly mutated sequences, and artificial proteins, as well as the lack of protein assessment tools. These tools break through the inherent limitations of AlphaFold 2.",{"type":17,"tag":25,"props":2032,"children":2033},{},[2034,2036,2042],{"type":23,"value":2035},"To help users quickly get started, MindSPONGE now supports the pipeline running mode and provides a unified interface for the preceding models, allowing training and inference tasks to be executed with only one line of code. Most models are already supported. For details, see ",{"type":17,"tag":160,"props":2037,"children":2040},{"href":2038,"rel":2039},"https://gitee.com/mindspore/mindscience/tree/master/MindSPONGE/applications",[164],[2041],{"type":23,"value":2038},{"type":23,"value":525},{"type":17,"tag":25,"props":2044,"children":2045},{},[2046],{"type":17,"tag":39,"props":2047,"children":2048},{},[2049],{"type":23,"value":2050},"17. 
MindElec 0.2 is released with the AI electromagnetic simulation basic model \"Jinling Electromagnetic Brain\" and the differentiable electromagnetic solver AD_FDTD.",{"type":17,"tag":25,"props":2052,"children":2053},{},[2054],{"type":23,"value":2055},"The MindElec electromagnetic simulation suite is upgraded to version 0.2 with two new features: the AI electromagnetic simulation basic model \"Jinling Electromagnetic Brain\" for large-scale array antennas, jointly developed by MindSpore and Southeast University, and the end-to-end differentiable electromagnetic solver AD_FDTD (Automatic Differentiation Finite-Difference Time-Domain), developed by MindSpore and Noah's Ark Laboratory.",{"type":17,"tag":25,"props":2057,"children":2058},{},[2059],{"type":23,"value":2060},"Large-scale array antennas are widely used in 5G base stations and autonomous driving. However, these antennas have large array sizes and complex unit composition, requiring fast and accurate simulation. The computational complexity of the traditional sub-entire domain (SED) algorithm is O(9M+N), where M denotes the number of parameters per array unit in the SED and N denotes the number of array units, with M remaining almost constant and N growing linearly with the array size. AI replaces the most computationally complex part and reduces the complexity to O(9M+1). With the fusion of physics and data, Jinling Electromagnetic Brain achieves the accuracy of traditional methods with a 10x increase in efficiency. 
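The complexity argument above can be made concrete with a toy cost model. The numbers below are illustrative assumptions, not measured MindElec figures:

```python
# Toy cost model for the O(9M+N) vs. O(9M+1) argument (illustrative
# assumptions only, not measured MindElec numbers): M is the per-unit
# parameter count, N the number of array units.
def sed_cost(m, n):
    return 9 * m + n      # classic SED: the N term grows with array size

def ai_sed_cost(m, n):
    return 9 * m + 1      # AI replaces the N-dependent part

M = 100                    # assumed per-unit parameter count
speedup_30 = sed_cost(M, 30 * 30) / ai_sed_cost(M, 30 * 30)
speedup_40 = sed_cost(M, 40 * 40) / ai_sed_cost(M, 40 * 40)
# Because only the classic method's cost scales with N, the speedup
# keeps growing as the array gets larger.
```

This matches the observation that the efficiency gain becomes more significant as the target size increases.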
As the target size increases, the improvement will become more significant.",{"type":17,"tag":25,"props":2062,"children":2063},{},[2064],{"type":23,"value":2065},"AI electromagnetic simulation model flowchart of Jinling Electromagnetic Brain",{"type":17,"tag":25,"props":2067,"children":2068},{},[2069],{"type":17,"tag":53,"props":2070,"children":2072},{"alt":7,"src":2071},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/21705a6ab8384f398a5c00d3f64392ac.png",[],{"type":17,"tag":25,"props":2074,"children":2075},{},[2076],{"type":23,"value":2077},"The AI method significantly improves performance while maintaining accuracy.",{"type":17,"tag":25,"props":2079,"children":2080},{},[2081],{"type":17,"tag":53,"props":2082,"children":2084},{"alt":7,"src":2083},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/4abd84d925a54da1930d931c0a6db0fe.png",[],{"type":17,"tag":25,"props":2086,"children":2087},{},[2088],{"type":23,"value":2089},"Far field of the 30 x 30 array",{"type":17,"tag":25,"props":2091,"children":2092},{},[2093],{"type":17,"tag":53,"props":2094,"children":2096},{"alt":7,"src":2095},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/da7226c791534cea89c3f72512e9c804.png",[],{"type":17,"tag":25,"props":2098,"children":2099},{},[2100],{"type":23,"value":2101},"Far field of the 40 x 40 array",{"type":17,"tag":25,"props":2103,"children":2104},{},[2105],{"type":23,"value":2106},"AD_FDTD rewrites the FDTD forward solving process using MindSpore's neural network operators and uses MindSpore's automatic differentiation capability for end-to-end optimization of the medium parameters in the electromagnetic inverse problem. AD_FDTD has been validated in the patch antenna, patch filter, and 2D electromagnetic backscattering scenarios. 
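The end-to-end inverse-optimization loop behind AD_FDTD can be sketched with a deliberately tiny toy. This is illustrative Python, not MindElec code: the grid is shrunk to a single interior cell so the explicit update u <- u + alpha*(u_left - 2u + u_right) collapses to u *= 1 - 2*alpha, and the gradient is written out analytically in place of automatic differentiation.

```python
# Toy sketch of AD_FDTD's end-to-end idea (illustrative only, not
# MindElec's API): step a differentiable forward model in time, compare
# with observed data, and recover the medium parameter by gradient descent.
STEPS = 5

def forward(alpha):
    """Field at the probe after STEPS explicit updates (boundaries fixed at 0)."""
    u = 1.0                      # initial excitation at the interior cell
    for _ in range(STEPS):
        u *= 1.0 - 2.0 * alpha   # FDTD-like explicit update, one interior cell
    return u

def grad(alpha, observed):
    """Analytic dLoss/dalpha for loss = (forward(alpha) - observed)**2,
    standing in for what automatic differentiation would provide."""
    f = forward(alpha)
    df = -2.0 * STEPS * (1.0 - 2.0 * alpha) ** (STEPS - 1)
    return 2.0 * (f - observed) * df

observed = forward(0.2)          # synthetic measurement; true alpha = 0.2
alpha = 0.05                     # initial guess for the medium parameter
for _ in range(200):
    alpha -= 0.02 * grad(alpha, observed)
```

In the real solver, automatic differentiation replaces the hand-written `grad`, which is what makes end-to-end optimization of dielectric parameters on full 2D/3D grids practical.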
In the patch antenna and patch filter cases, the S-parameter simulation accuracy is comparable to that of the conventional numerical methods; in the 2D electromagnetic backscattering case, the structural similarity index measure (SSIM) of the dielectric parameters obtained through inversion reaches 96%.",{"type":17,"tag":25,"props":2108,"children":2109},{},[2110],{"type":17,"tag":53,"props":2111,"children":2113},{"alt":7,"src":2112},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/24a8b63cd68f4d50b3e95d6ee3b7b3a2.png",[],{"type":17,"tag":25,"props":2115,"children":2116},{},[2117],{"type":17,"tag":53,"props":2118,"children":2120},{"alt":7,"src":2119},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/10/2f7b6aca48254d7e8320304874c437aa.png",[],{"type":17,"tag":25,"props":2122,"children":2123},{},[2124],{"type":23,"value":2125},"Electromagnetic backscattering and the inversion effect",{"type":17,"tag":25,"props":2127,"children":2128},{},[2129,2130,2136],{"type":23,"value":917},{"type":17,"tag":160,"props":2131,"children":2134},{"href":2132,"rel":2133},"https://gitee.com/mindspore/mindscience/tree/master/MindElec",[164],[2135],{"type":23,"value":2132},{"type":23,"value":525},{"title":7,"searchDepth":1547,"depth":1547,"links":2138},[],"markdown","content:version-updates:en:2617.md","content","version-updates/en/2617.md","version-updates/en/2617","md",1776506143063]