[{"data":1,"prerenderedAt":550},["ShallowReactive",2],{"content-query-AyGS4IQYjb":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"type":11,"body":12,"_type":544,"_id":545,"_source":546,"_file":547,"_stem":548,"_extension":549},"/version-updates/en/2_9_en","en",false,"","MindSpore 2.9 Is Officially Released, Featuring the Innovative Graph-Free Fusion Technology, Upgrading Core Binding Capability, and Improving Training Stability and Efficiency","the innovative graph-free fusion technology supports native automatic operator fusion and optimization of dynamic graphs, achieving network model performance benefits","2026-05-11","version-updates",{"type":13,"children":14,"toc":519},"root",[15,23,29,36,43,48,53,63,68,79,92,98,103,108,128,139,145,150,157,164,183,194,200,211,228,234,239,245,258,264,308,314,319,324,330,335,340,347,352,357,367,373,379,384,389,402,407,414,420,425,453,458,468,474,479,492,497,504,514],{"type":16,"tag":17,"props":18,"children":20},"element","h1",{"id":19},"mindspore-29-is-officially-released-featuring-the-innovative-graph-free-fusion-technology-upgrading-core-binding-capability-and-improving-training-stability-and-efficiency",[21],{"type":22,"value":8},"text",{"type":16,"tag":24,"props":25,"children":26},"p",{},[27],{"type":22,"value":28},"After months of development and contributions from the MindSpore open-source community, the MindSpore 2.9 framework is now available.\nIn terms of the basic framework evolution, the innovative graph-free fusion technology supports native automatic operator fusion and optimization of dynamic graphs, achieving network model performance benefits. It enables Triton Ascend operators, lowering the threshold for developing custom operators. In addition, the operator dispatch interception mechanism is added to support custom control of operator execution. Furthermore, core binding at the process level, thread level, and NUMA level is supported, improving the stability of large-scale training and the execution efficiency on the host.\nIn terms of scientific computing suites, the MindSpore Science intelligent system is released, covering the entire end-to-end scientific research process and greatly shortening the scientific research period.\nNow, let's delve into the key features of MindSpore 2.9.",{"type":16,"tag":30,"props":31,"children":33},"h2",{"id":32},"continuous-evolution-of-the-basic-framework",[34],{"type":22,"value":35},"-- Continuous Evolution of the Basic Framework --",{"type":16,"tag":37,"props":38,"children":40},"h3",{"id":39},"_1-innovative-graph-free-fusion-technology-enabling-native-automatic-operator-fusion-optimization-for-dynamic-graphs",[41],{"type":22,"value":42},"1 Innovative Graph-Free Fusion Technology Enabling Native Automatic Operator Fusion Optimization for Dynamic Graphs",{"type":16,"tag":24,"props":44,"children":45},{},[46],{"type":22,"value":47},"In mainstream AI frameworks, automatic operator fusion has always been a crucial performance optimization method. However, the current mainstream automatic operator fusion solutions rely on static computational graphs for fusion. This approach not only limits the scope of automatic operator fusion, but also imposes unnecessary limitations on achieving high-performance model code.",{"type":16,"tag":24,"props":49,"children":50},{},[51],{"type":22,"value":52},"To address this issue, MindSpore has introduced a unique real-time graph-free fusion technology, which is officially implemented in version 2.9. 

The graph-free fusion technology is built on the DVM real-time operator compilation and execution framework, which has been incubated in the MindSpore community for more than two years. Its main principle is to automatically identify fusible operators during dynamic graph operator dispatch and add them to a cache pool. When a non-fusible operator or a value dependency is encountered, the operators in the cache pool are fused and dispatched for execution in real time via DVM. The graph-free fusion technology improves network model performance by about 5% to 15% on average.

![](/category/information/version-updates/banner/en/2_9_en/1.jpg)

To help users make full use of the DVM fusion capabilities, including this feature, and to foster the community ecosystem, the DVM code has been open-sourced.
Open-source address: https://gitcode.com/mindspore/dvm

### 2 Enabling Triton Ascend Operator to Lower the Barrier for Developing Custom Operators

As AI models continue to grow in scale and architecture innovation accelerates, the demand for higher execution efficiency and development flexibility of underlying operators keeps increasing. Traditional custom operator development typically relies on low-level interfaces such as C++ and ACL, which brings long development cycles, complex debugging, and high cross-platform migration costs.

As a Python-based domain-specific language (DSL), Triton significantly lowers the barrier for developing high-performance GPU/NPU operators through its declarative programming model and automatic optimization mechanism. The triton-ascend project further extends Triton to the Ascend NPU architecture and integrates it seamlessly with the MindSpore framework. This brings three core benefits, illustrated by the sketch after this list:

- Simplified access: Users can write and invoke Triton kernels directly in Python syntax, without needing to understand MindSpore's internal operator registration mechanism (such as custom primitives).
- Seamless integration: Triton operators can be integrated directly into MindSpore's computational graph, support automatic differentiation and backward propagation, and can participate in model training just like built-in operators.
- Ecosystem compatibility: It is highly compatible with the community's existing Triton operator code assets, keeping porting costs extremely low.
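
As a rough illustration of the first benefit, here is a standard Triton vector-add kernel written in plain Python. The kernel itself is ordinary Triton code; how it is wired into a MindSpore network follows the triton-ascend project's documentation, so the commented launch lines below are only a schematic assumption.

```python
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# Schematic launch (grid sizing is standard Triton; how the tensor arguments
# interoperate with MindSpore follows the triton-ascend documentation):
# grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
# add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
```
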

Reference link: https://gitcode.com/Ascend/triton-ascend

### 3 Adding Operator Dispatch Interception Mechanism to Support Custom Control of Operator Execution

During actual development, users often need to know which operators are executed in a code segment, perform custom processing before and after operator execution, or customize how operators execute via a Tensor subclass. Previously, these requirements usually meant modifying the framework's internal code. MindSpore 2.9 introduces an operator dispatch interception mechanism with two complementary methods: Tensor subclass dispatching and the Guard context manager. Both let users flexibly control operator execution behavior without touching the framework source code.

![](/category/information/version-updates/banner/en/2_9_en/2.jpg)

#### 3.1 Custom Tensor Subclass Scheduling

The **ms_dispatch** protocol is added. You can create a subclass of Tensor and implement this class method to intercept operator calls on the subclass's instances. When the framework detects that an operator's input contains a tensor subclass with **ms_dispatch**, it transfers the execution flow to the user-defined scheduling method, which decides how to handle the operator call.

A typical application scenario is DTensor in distributed training. A DTensor holds a local tensor. When an operator is applied to a DTensor, **ms_dispatch** intercepts the operator call, extracts the local tensor to perform the actual computation, and then repackages the result as a DTensor and returns it. In this way, upper-layer business code can use DTensor exactly like an ordinary Tensor, with the distributed details encapsulated in the scheduling logic.
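
A minimal sketch of the idea follows. The article names the protocol **ms_dispatch** but does not spell out its exact method name or signature, so the `__ms_dispatch__` classmethod below, its `(func, types, args, kwargs)` parameters, and the unwrap/rewrap steps are all assumptions made for illustration.

```python
import mindspore as ms

class LoggingTensor(ms.Tensor):
    """Hypothetical Tensor subclass that logs every operator applied to it."""

    @classmethod
    def __ms_dispatch__(cls, func, types, args=(), kwargs=None):
        # Assumed signature: 'func' is the intercepted operator. Unwrap our
        # subclass instances to plain Tensors, run the real op, then rewrap.
        kwargs = kwargs or {}
        print(f"[dispatch] {getattr(func, '__name__', func)}")
        plain = [ms.Tensor(a) if isinstance(a, LoggingTensor) else a for a in args]
        out = func(*plain, **kwargs)
        return LoggingTensor(out) if isinstance(out, ms.Tensor) else out
```

A DTensor implementation would follow the same pattern, except that the unwrap step extracts the local shard and the rewrap step reattaches the sharding metadata.
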

#### 3.2 MsDispatchGuard Context Manager

The other interception method is MsDispatchGuard. Unlike Tensor subclass scheduling, which targets a specific tensor type, a Guard applies to a code scope: all operators executed within its context are intercepted. You can inherit MsDispatchGuard and override its **ms_dispatch** method to insert custom logic before and after operator execution. Typical uses include recording operator call logs for debugging, inserting timing code around operators for performance analysis, or replacing operator execution behavior within a specific scope. A Guard can be used either as a context manager or as a decorator.

Guards can be nested. Multiple Guards are managed as a stack, with the innermost Guard executed first. When you call `func(*args)` inside **ms_dispatch**, the framework automatically transfers the execution flow to the next Guard on the stack; the original operator logic runs only after all Guards have been processed. Once you exit a Guard's scope, interception automatically reverts to the outer state. The mechanism works in both forward propagation and backpropagation and is compatible with MindSpore's automatic differentiation system, so intermediate operators in backpropagation are also processed by the Guard chain.

Tensor subclass scheduling and Guard scheduling can be used independently or together. When both are present, the Guard takes priority: if the Guard stack is not empty, the Guard chain path is used first; if the Guard stack is empty, the tensor-level **ms_dispatch** is checked. If neither is used, operator execution follows the original path with no extra overhead.
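
A minimal timing sketch under the same caveats: the class name MsDispatchGuard, the `ms_dispatch` override, and the `func(*args)` chaining convention come from the description above, while the import path and the exact override signature are assumptions.

```python
import time
import mindspore as ms
# Assumed import path for the Guard base class.
from mindspore import MsDispatchGuard

class TimingGuard(MsDispatchGuard):
    """Hypothetical Guard that times every operator executed in its scope."""

    def ms_dispatch(self, func, *args, **kwargs):
        start = time.perf_counter()
        out = func(*args, **kwargs)  # hand off to the next Guard / real op
        elapsed_ms = (time.perf_counter() - start) * 1e3
        print(f"{getattr(func, '__name__', func)}: {elapsed_ms:.3f} ms")
        return out

# Used as a context manager: every operator inside the block is intercepted.
with TimingGuard():
    a = ms.Tensor([[1.0, 2.0]])
    b = ms.Tensor([[3.0], [4.0]])
    c = ms.ops.matmul(a, b)
```
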

### 4 Process-Level, Thread-Level, and NUMA-Level Core Binding, Improving the Stability of Large-Scale Training and the Execution Efficiency on the Host

As LLMs continue to grow in scale, the host side gradually becomes the bottleneck of overall system efficiency. In multi-device distributed scenarios, leaving training processes, key threads, and memory accesses entirely to the operating system's free scheduling is likely to cause CPU contention, thread migration, and cross-NUMA memory access, affecting the stability of supply to the device side. To address this, MindSpore has continuously evolved its core binding capabilities. Version 2.9 provides a comprehensive capability system covering process-level core binding, thread-level core binding, NUMA memory binding, and unified JSON configuration. This system not only gives key threads a more stable, less interfered CPU operating environment, but also makes configuration and reuse clearer in complex deployment scenarios, further enhancing the stability of large-scale training tasks and overall system efficiency.

#### 4.1 Process-Level Core Binding

MindSpore provides process-level core binding based on `msrun`. This capability pins down the CPU range as early as possible during distributed task startup, so that different training processes get relatively independent and stable CPU running space, reducing core overlap, resource contention, and scheduling jitter between processes. Because binding happens at the very beginning of task startup, subsequent thread creation and scheduling become more stable, laying a foundation for large-scale distributed training.

#### 4.2 Thread-Level Core Binding

In addition to process-level binding, MindSpore provides thread-level CPU affinity: `mindspore.runtime.set_cpu_affinity` binds key working threads at a finer granularity. The key threads include `main`, `runtime`, `pynative`, and `minddata`, which are responsible for compilation and flow control, static graph dispatch, dynamic graph operator dispatch, and data processing, respectively.

The core value of this capability is that it no longer treats all threads equally. Instead, it preferentially secures CPU stability for key threads, reducing thread migration and interference from ordinary threads. This smooths the critical path on the host and reduces device idle time spent waiting for the host.
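
A sketch of how the thread-level API might be called, assuming the parameter names below (a CPU range list plus a module-to-CPU mapping); consult the `mindspore.runtime.set_cpu_affinity` documentation for the exact signature.

```python
import mindspore as ms

# Reserve a CPU range for this process, then pin the key thread modules to
# dedicated cores within it (parameter names are assumptions; check the
# mindspore.runtime.set_cpu_affinity docs for the exact signature).
ms.runtime.set_cpu_affinity(
    True,
    affinity_cpu_list=["0-7"],
    module_to_cpu_dict={
        "main": [0],               # compilation and flow control
        "runtime": [1, 2],         # static graph dispatch
        "pynative": [3],           # dynamic graph operator dispatch
        "minddata": [4, 5, 6, 7],  # data processing pipeline
    },
)
```
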

#### 4.3 Memory Binding

As servers enter the era of multi-socket CPUs and NUMA architectures, binding threads to CPU cores alone is not enough. Under NUMA, a thread that frequently accesses memory on a remote NUMA node incurs additional memory access latency and bandwidth loss.

MindSpore therefore extends core binding to the NUMA level: it considers not only which cores a thread runs on, but also whether the related memory operations are completed on the local NUMA node. The capability thus evolves from CPU affinity optimization to collaborative optimization of CPU and memory, systematically reducing remote memory access overhead on complex server topologies. It is especially suitable for scenarios such as large-scale training, data processing, and HyperOffload.

#### 4.4 Improved JSON Usability

As core binding extends from the process level to the thread level and then to NUMA memory binding, the number of configuration dimensions keeps growing. Scattering these capabilities across different parameters, interfaces, and configurations would significantly raise the cost of understanding and using them.

To address this, MindSpore provides a unified JSON-based configuration method, consolidating CPU/NUMA affinity into a clearer configuration entry that is better suited to engineering practice. Users can describe complex thread-to-resource mappings in JSON and incorporate them into deployment pipelines, environment reproduction, and version management, improving maintainability and reuse in complex scenarios.

![](/category/information/version-updates/banner/en/2_9_en/3.jpg)

In terms of actual results, the complete MindSpore core binding solution has been verified at a customer site. On a trillion-parameter MoE model, scaling from 512 to 1024 devices achieves 99% linearity, with average performance gains of about 10% in the 512-device scenario and about 18% in the 1024-device scenario. In another configuration scaling from 256 to 1024 devices, linearity exceeds 97%, overall performance improves by 10% to 15%, and step-time jitter is kept within 200 ms.

In real-world production model scenarios, performance at the 4000-device scale is improved by more than 20%. At another customer site, the variance of single-step time consumption is reduced by 38.87% in the 32-device scenario. This shows that the MindSpore core binding capability improves not only performance but also running stability.

Reference link: https://www.mindspore.cn/tutorials/en/master/parallel/msrun_launcher.html#process-level-cpu/numa-affinity-configuration

## -- Scientific Computing Suite Enhancement --

### 5 MindSpore Science Intelligent Agent System Released, Covering the Entire End-to-End Scientific Research Process

The scientific research process involves stages such as literature reading, hypothesis generation, code writing, experimental trial and error, and optimization; it is complex and time-consuming. MindSpore scientific computing builds the MindSpore Science intelligent agent system, which agentizes the entire research process, eliminating the cost of manual coordination between research stages and significantly shortening the research cycle.

The MindSpore Science intelligent agent system consists of two core components: MindSpore Science Skills and MindSpore Science Agent.

- MindSpore Science Skills is an open-source agent skill library designed for the entire general scientific research process, covering common skills in the analysis, simulation, and experimentation phases.
- MindSpore Science Agent is an open-source scientific agent designed for the entire scientific research process. It provides sub-agents for hypothesis generation, experiment design, self-modification, and experiment execution, allowing users to quickly build their own AI research assistants.

MindSpore Science Skills 0.1 and the MindSpore Science Agent preview are released in the current version.

![](/category/information/version-updates/banner/en/2_9_en/4.jpg)

#### 5.1 MindSpore Science Skills 0.1

MindSpore Science Skills 0.1 has more than 300 built-in scientific research skills, covering core scientific fields such as biomedicine, chemical materials, fluid dynamics, PDE equations, geoscience, and electromagnetism.

- It supports access to industrial-grade HPC software with complex configurations, such as VASP and OpenFOAM, as well as SOTA tools such as FitDock. It provides built-in workflows for input construction, parameter tuning, and result analysis, and offers more than 20 ready-to-use software skills in complex domains, improving the success rate of agent software calls.
- It covers more than 40 top MindSpore and Ascend AI4S models in fields such as biology, chemistry, fluid dynamics, meteorology, and energy, greatly expanding the knowledge boundary of agents. Users can also encapsulate their own models into agent skills in minutes, quickly turning AI4S models into plug-and-play productivity tools.
- It bridges dry and wet experiments, provides skills for operating wet-lab equipment (for example, in electrocatalysis scenarios), and offers reference implementations for wet experiment design and execution, shortening the experiment iteration cycle and closing the scientific research loop.
- It collaborates with top labs, such as those at the University of Science and Technology of China, to accumulate tacit knowledge in chemistry and biology (including first-principles calculation skills for doped materials), solidifying complex research tasks into know-how skills and improving agents' efficiency in solving them.
- It integrates literature and data search APIs commonly used in the industry, as well as domain-specific Python data analysis and processing tool libraries.

These skills cover the basic requirements of scientific analysis, simulation, and experiment scenarios, lowering the technical barrier for agents to make cross-disciplinary calls. MindSpore Science Skills can be seamlessly integrated with agents such as Hermes Agent, OpenClaw, Claude Code, and JiuwenClaw, and can also be plugged directly into users' dedicated AI research assistants.

Reference link: https://gitcode.com/mindspore-lab/mindscience/tree/master/MindScienceSkills

#### 5.2 MindSpore Science Agent Preview

The MindSpore Science Agent preview has six built-in sub-agents and scientific experiment workflows, and provides scientific research cases in the field of chemistry. Its main features are as follows:

- It integrates open-source sub-agents from the industry, such as hypothesis generation, experiment design, and experiment execution, and orchestrates the entire scientific experiment workflow based on a multi-agent architecture.
- It optimizes the prompts and skills of experiment-related sub-agents, improves the self-correction capability of the experiment execution sub-agent, and reduces the context pollution caused by error messages.

MindSpore Science Agent achieves an accuracy of 77.88% on the Frontier Science chemistry test set, surpassing the 70.51% of the Biomni platform released by the Stanford University research team. It has also completed case verification on two research paths, wet experiment and computational simulation, including the complete electrocatalytic wet-lab workflow for high-throughput synthesis and testing of OER catalysts, and high-precision computational simulation tasks represented by the optimization of perovskite doping structures.

![](/category/information/version-updates/banner/en/2_9_en/5.jpg)

Reference link: https://gitcode.com/mindspore-lab/mindscience/tree/master/MindScienceAgent

The professional skill libraries in fields such as chemistry and biology released in this version are jointly built with the team led by Professor Jiang Jun of the University of Science and Technology of China and the team led by Professor Cao Yang of Sichuan University. The MindSpore and Ascend model skills integrate the original achievements and ecosystem contributions of many cooperating teams.
We would like to express our gratitude to all the co-creation teams for their in-depth guidance and cooperation in the professional fields.