[{"data":1,"prerenderedAt":745},["ShallowReactive",2],{"content-query-POPA7f6Fos":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":739,"_id":740,"_source":741,"_file":742,"_stem":743,"_extension":744},"/version-updates/en/1080","en",false,"","Amplify Your Development Efficiency with MindSpore 1.6","The usability and development efficiency have been improved.","2022-03-11","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/15/ab54bba134d64f8e9f7fec2f657ebded.png","version-updates",{"type":14,"children":15,"toc":736},"root",[16,24,30,39,44,49,54,59,64,72,77,82,87,92,100,105,112,117,122,127,134,142,147,152,159,167,172,180,185,193,198,214,222,227,234,239,244,249,261,269,274,286,294,299,304,312,317,325,330,342,350,355,362,367,375,380,387,395,400,407,412,424,436,444,449,457,462,469,477,482,490,495,502,513,521,529,534,541,546,554,559,566,578,586,591,596,603,608,615,620,627,639,647,652,657,662,669,674,681,686,697,702,707,712,724,729],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"amplify-your-development-efficiency-with-mindspore-16",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"Version 1.6 of MindSpore, the all-scenario AI framework, has been released. In this version, the usability and development efficiency have been improved, and the control flow scenario has been optimized for greater efficiency with added support for side effect training. The new version also includes the following releases: MindSpore Graph Learning, an efficient and easy-to-use graph learning framework; MindSpore Reinforcement, a high-performance and scalable reinforcement learning framework; MindConverter, a tool for third-party framework model migration; and MindSpore Dev ToolKit, a development kit that enables users to quickly experience MindSpore. 
MindSpore 1.6 also sees upgrades to the operator customization capability for more efficient addition of operators, improvements to MindQuantum to help developers quickly get started with quantum simulation, and upgrades to the training and inference performance of MindSpore Lite. Now, let's look at the key features of MindSpore 1.6.",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":17,"tag":34,"props":35,"children":36},"strong",{},[37],{"type":23,"value":38},"1. Improved Usability for Efficient Development",{"type":17,"tag":25,"props":40,"children":41},{},[42],{"type":23,"value":43},"Based on developer feedback, we have rectified reported API issues, made a collection of optimizations, and developed a series of tutorials to help developers get started.",{"type":17,"tag":25,"props":45,"children":46},{},[47],{"type":23,"value":48},"In terms of debugging and optimization, we have made the following improvements to boost development efficiency:",{"type":17,"tag":25,"props":50,"children":51},{},[52],{"type":23,"value":53},"1. Problematic code in static graph mode can now be printed, and the accuracy of error messages has been improved.",{"type":17,"tag":25,"props":55,"children":56},{},[57],{"type":23,"value":58},"2. Additional features have been added to MindInsight, including one-click collection of cluster performance data, parallelism policy analysis, and graph-code visualized tuning, helping to accelerate performance and accuracy tuning.",{"type":17,"tag":25,"props":60,"children":61},{},[62],{"type":23,"value":63},"3. ModelZoo provides more than 300 cross-device deployable models that can be used both online and offline, covering CV, NLP, and other fields to meet the requirements of various industries. In MindSpore 1.6, models such as YOLOv5 are reconstructed using a new and more efficient syntax.",{"type":17,"tag":25,"props":65,"children":66},{},[67],{"type":17,"tag":34,"props":68,"children":69},{},[70],{"type":23,"value":71},"2. 
Control Flow Scenario Optimized to Support Side Effect Training",{"type":17,"tag":25,"props":73,"children":74},{},[75],{"type":23,"value":76},"In earlier versions of MindSpore, subgraphs could be replicated unnecessarily in the control flow scenario. This deteriorated network performance and caused incorrect training results when side-effect operators were involved. In MindSpore 1.6.1, the intermediate representation (IR) design of control flows is reconstructed to eliminate unnecessary graph replication. Other aspects of the control flow scenario have also been optimized, as listed below.",{"type":17,"tag":25,"props":78,"children":79},{},[80],{"type":23,"value":81},"1. Side-effect operators such as Assign can now be used in training.",{"type":17,"tag":25,"props":83,"children":84},{},[85],{"type":23,"value":86},"2. The number of control flow subgraphs has been optimized. The backward network can now directly use the calculation results of the forward graph operators.",{"type":17,"tag":25,"props":88,"children":89},{},[90],{"type":23,"value":91},"Comparison of subgraph number and execution performance of AirNet before and after optimization:",{"type":17,"tag":25,"props":93,"children":94},{},[95],{"type":17,"tag":96,"props":97,"children":99},"img",{"alt":7,"src":98},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/45a58b3926af4f92a922ca584d09ef1b.png",[],{"type":17,"tag":25,"props":101,"children":102},{},[103],{"type":23,"value":104},"Comparison of subgraph number and execution performance of BFGS before and after optimization:",{"type":17,"tag":25,"props":106,"children":107},{},[108],{"type":17,"tag":96,"props":109,"children":111},{"alt":7,"src":110},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/0ccca6b98edf4f12af9def471b7c6e13.png",[],{"type":17,"tag":25,"props":113,"children":114},{},[115],{"type":23,"value":116},"3. Concurrent execution of subgraphs without data dependencies is now supported. 
In addition, the execution processes of empty subgraphs have been optimized to improve the overall execution performance in the control flow scenario.",{"type":17,"tag":25,"props":118,"children":119},{},[120],{"type":23,"value":121},"For example, although the number of subgraphs of MAPPO (Agent3) does not change after the optimization, the final network execution performance is improved from 2.5s/epoch to 1.8s/epoch.",{"type":17,"tag":25,"props":123,"children":124},{},[125],{"type":23,"value":126},"Comparison of subgraph number and execution performance of MAPPO (Agent3) before and after optimization:",{"type":17,"tag":25,"props":128,"children":129},{},[130],{"type":17,"tag":96,"props":131,"children":133},{"alt":7,"src":132},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/0c91be5d3bd74d748fa9298242fb3bb3.png",[],{"type":17,"tag":25,"props":135,"children":136},{},[137],{"type":17,"tag":34,"props":138,"children":139},{},[140],{"type":23,"value":141},"3. MindSpore Graph Learning: Formula as Code, Accelerating Training by 3 to 4 Times",{"type":17,"tag":25,"props":143,"children":144},{},[145],{"type":23,"value":146},"Graph neural networks (GNNs) are widely used in various scenarios, such as recommender systems, financial risk control and drug molecular analysis. However, the calculation of a GNN is usually complex and time-consuming. 
Therefore, an efficient and scalable GNN system is urgently needed.",{"type":17,"tag":25,"props":148,"children":149},{},[150],{"type":23,"value":151},"To meet this requirement, the MindSpore team worked with James Cheng's team at The Chinese University of Hong Kong to jointly develop MindSpore Graph Learning, a graph learning framework featuring ease of use, high efficiency, and diversity.",{"type":17,"tag":25,"props":153,"children":154},{},[155],{"type":17,"tag":96,"props":156,"children":158},{"alt":7,"src":157},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/64cbf35e409648328bd83c1d30dd5892.png",[],{"type":17,"tag":25,"props":160,"children":161},{},[162],{"type":17,"tag":34,"props":163,"children":164},{},[165],{"type":23,"value":166},"3.1 Ease of Use: Formula as Code",{"type":17,"tag":25,"props":168,"children":169},{},[170],{"type":23,"value":171},"MindSpore Graph Learning directly maps formulas to code, helping you quickly implement custom GNN algorithms and operations without having to encapsulate any functions.",{"type":17,"tag":25,"props":173,"children":174},{},[175],{"type":17,"tag":34,"props":176,"children":177},{},[178],{"type":23,"value":179},"3.2 High Efficiency: Training Accelerated by 3 to 4 Times",{"type":17,"tag":25,"props":181,"children":182},{},[183],{"type":23,"value":184},"MindSpore Graph Learning combines MindSpore's graph kernel fusion and auto kernel generator (AKG) features to automatically identify the specific execution patterns of graph neural network tasks for fusion and kernel-level optimization, covering both existing operators and new combined operators in the framework. 
The performance of GNNs is improved by 3 to 4 times compared with that of existing popular frameworks.",{"type":17,"tag":25,"props":186,"children":187},{},[188],{"type":17,"tag":34,"props":189,"children":190},{},[191],{"type":23,"value":192},"3.3 High Diversity: Covers Common Graph Learning Networks",{"type":17,"tag":25,"props":194,"children":195},{},[196],{"type":23,"value":197},"The MindSpore Graph Learning framework has 13 built-in graph network learning models, covering application networks involving homogeneous and heterogeneous graphs and random walk.",{"type":17,"tag":25,"props":199,"children":200},{},[201,203,212],{"type":23,"value":202},"For details, see ",{"type":17,"tag":204,"props":205,"children":209},"a",{"href":206,"rel":207},"https://gitee.com/mindspore/graphlearning/tree/research/model_zoo",[208],"nofollow",[210],{"type":23,"value":211},"https://gitee.com/mindspore/graphlearning/tree/research/model_zoo",{"type":23,"value":213},".",{"type":17,"tag":25,"props":215,"children":216},{},[217],{"type":17,"tag":34,"props":218,"children":219},{},[220],{"type":23,"value":221},"4. MindSpore Reinforcement: A High-Performance and Scalable Reinforcement Learning Framework",{"type":17,"tag":25,"props":223,"children":224},{},[225],{"type":23,"value":226},"Reinforcement learning (RL) has been one of the hottest topics in the AI field over the last few years. MindSpore 1.6 sees the launch of MindSpore Reinforcement, an independent reinforcement learning framework. 
Python APIs and the separation of algorithm and execution in MindSpore Reinforcement enable the framework to be programmable and scalable, and help provide developers with a brand-new development experience.",{"type":17,"tag":25,"props":228,"children":229},{},[230],{"type":17,"tag":96,"props":231,"children":233},{"alt":7,"src":232},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/837b06ff2ad54e91a4618c740a44ca5e.png",[],{"type":17,"tag":25,"props":235,"children":236},{},[237],{"type":23,"value":238},"MindSpore Reinforcement 0.2 provides a set of Python APIs for reinforcement learning. The simple and clear API abstractions facilitate efficient algorithm development and improve module reusability. Some classic reinforcement learning algorithms come included, such as DQN and PPO, with more coming in the future. You can use the algorithms directly, or develop your own single- or multi-agent reinforcement learning algorithms using the Python APIs.",{"type":17,"tag":25,"props":240,"children":241},{},[242],{"type":23,"value":243},"In MindSpore Reinforcement, algorithm expression is separated from compilation and execution by design. You only need to focus on the Python implementation of the algorithm logic, and can benefit from the powerful MindSpore capabilities of compilation optimization and heterogeneous hardware acceleration.",{"type":17,"tag":25,"props":245,"children":246},{},[247],{"type":23,"value":248},"MindSpore Reinforcement supports multi-device computing using GPUs, CPUs, and Ascend AI Processors. Currently, only single-server training is supported. 
However, powerful multi-agent distributed training capabilities and features are planned for future versions.",{"type":17,"tag":25,"props":250,"children":251},{},[252,254,260],{"type":23,"value":253},"MindSpore Reinforcement documents: ",{"type":17,"tag":204,"props":255,"children":258},{"href":256,"rel":257},"https://www.mindspore.cn/reinforcement/en",[208],[259],{"type":23,"value":256},{"type":23,"value":213},{"type":17,"tag":25,"props":262,"children":263},{},[264],{"type":17,"tag":34,"props":265,"children":266},{},[267],{"type":23,"value":268},"5. Operator Customization Capability Upgraded with Unified Custom Interface",{"type":17,"tag":25,"props":270,"children":271},{},[272],{"type":23,"value":273},"With the iterative development of AI models, the built-in static operator library of MindSpore may fail to meet user requirements. To provide a better operator customization experience, MindSpore 1.6 fully upgrades the custom operator capability and delivers Custom, a unified operator development interface for multiple platforms, including GPUs, CPUs, and the Ascend AI Processors. This helps users quickly define and use different types of custom operators in MindSpore for quick verification, real-time compilation, third-party operator access, and other requirements.",{"type":17,"tag":25,"props":275,"children":276},{},[277,279,285],{"type":23,"value":278},"Custom-based operators: ",{"type":17,"tag":204,"props":280,"children":283},{"href":281,"rel":282},"https://www.mindspore.cn/docs/programming_guide/en/r1.6/custom_operator_custom.html",[208],[284],{"type":23,"value":281},{"type":23,"value":213},{"type":17,"tag":25,"props":287,"children":288},{},[289],{"type":17,"tag":34,"props":290,"children":291},{},[292],{"type":23,"value":293},"5.1 Unified Operator Development Interface for Multiple Scenarios and Platforms",{"type":17,"tag":25,"props":295,"children":296},{},[297],{"type":23,"value":298},"Custom unifies the interfaces for operator customization. 
It supports the following modes: JIT-based operator compiler mode, AOT mode for optimal performance, and pyfunc mode for quick verification.",{"type":17,"tag":25,"props":300,"children":301},{},[302],{"type":23,"value":303},"In addition, Custom provides a consistent experience for different platforms, including CPUs, GPUs, Ascend AI Processors, and AI CPUs. This reduces the learning cost of operator development.",{"type":17,"tag":25,"props":305,"children":306},{},[307],{"type":17,"tag":34,"props":308,"children":309},{},[310],{"type":23,"value":311},"5.2 One-click Custom Operator Access",{"type":17,"tag":25,"props":313,"children":314},{},[315],{"type":23,"value":316},"With Custom, you can quickly define a custom operator using a Python function as a part of the network expression. There is no need to modify or recompile the source code of MindSpore. Registration information can be automatically created to enable one-click custom operator access.",{"type":17,"tag":25,"props":318,"children":319},{},[320],{"type":17,"tag":34,"props":321,"children":322},{},[323],{"type":23,"value":324},"5.3 AI CPU: A New Type of Custom Operator",{"type":17,"tag":25,"props":326,"children":327},{},[328],{"type":23,"value":329},"Operators of the AI CPU type are added to MindSpore 1.6. This type of operator adopts the AOT mode for compilation and is used on ARM platforms. AI CPU operators can be quickly deployed on mainstream embedded devices. Compared to TBE operators, AI CPU operators are better at logic operations. 
This operator type is ideal for operators that are difficult to vectorize.",{"type":17,"tag":25,"props":331,"children":332},{},[333,335,341],{"type":23,"value":334},"Defining a custom operator of the AI CPU type: ",{"type":17,"tag":204,"props":336,"children":338},{"href":337},"https://www.mindspore.cn/docs/programming_guide/en/r1.6/custom_operator_custom.html#aicpu",[339],{"type":23,"value":340},"https://www.mindspore.cn/docs/programming_guide/en/r1.6/custom_operator_custom.html#aicpu",{"type":23,"value":213},{"type":17,"tag":25,"props":343,"children":344},{},[345],{"type":17,"tag":34,"props":346,"children":347},{},[348],{"type":23,"value":349},"6. MindConverter: One-click Equivalent Migration of Third-party Framework Models",{"type":17,"tag":25,"props":351,"children":352},{},[353],{"type":23,"value":354},"It would be a great loss if an existing mainstream model could not be reused after switching from a third-party framework to MindSpore. Currently, a large number of open source models are implemented based on PyTorch or TensorFlow, but with MindConverter, you can quickly migrate these mainstream models to MindSpore.",{"type":17,"tag":25,"props":356,"children":357},{},[358],{"type":17,"tag":96,"props":359,"children":361},{"alt":7,"src":360},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/741ed4e5aaba4821a20199ecf2f9c43e.png",[],{"type":17,"tag":25,"props":363,"children":364},{},[365],{"type":23,"value":366},"MindConverter converts IRs of mainstream AI frameworks into MindSpore IRs. The generated model can be used for inference or retraining, and the model script has good readability. This model migration tool offers the following conversion modes:",{"type":17,"tag":25,"props":368,"children":369},{},[370],{"type":17,"tag":34,"props":371,"children":372},{},[373],{"type":23,"value":374},"6.1 ONNX IR-based Conversion",{"type":17,"tag":25,"props":376,"children":377},{},[378],{"type":23,"value":379},"Most AI frameworks support the export of models to ONNX, an open model definition format. 
Thanks to the universality of ONNX, MindConverter can migrate models from multiple AI frameworks to MindSpore. Support for ResNet, RegNet, HRNet, DeepLabV3, and YOLO series models has been verified.",{"type":17,"tag":25,"props":381,"children":382},{},[383],{"type":17,"tag":96,"props":384,"children":386},{"alt":7,"src":385},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/e3d190bc28fc4bc0b20c6accdcab8ed0.png",[],{"type":17,"tag":25,"props":388,"children":389},{},[390],{"type":17,"tag":34,"props":391,"children":392},{},[393],{"type":23,"value":394},"6.2 TorchScript IR-based Conversion",{"type":17,"tag":25,"props":396,"children":397},{},[398],{"type":23,"value":399},"TorchScript is the IR of a PyTorch model. Because the TorchScript IR generalizes well, MindConverter can migrate most PyTorch models. More than 200 HuggingFace Transformers pre-trained models have been verified and successfully migrated.",{"type":17,"tag":25,"props":401,"children":402},{},[403],{"type":17,"tag":96,"props":404,"children":406},{"alt":7,"src":405},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/9abe1b82cde34b1a953a7092e7f83dba.png",[],{"type":17,"tag":25,"props":408,"children":409},{},[410],{"type":23,"value":411},"Besides the existing models, feel free to explore the conversion and verification of more models on your own. For more information about MindConverter, please refer to its official documentation. 
If you have anything to share or encounter any issues when using MindConverter, please feel free to post them in the MindSpore community.",{"type":17,"tag":25,"props":413,"children":414},{},[415,417,423],{"type":23,"value":416},"Migrating From Third Party Frameworks With MindConverter: ",{"type":17,"tag":204,"props":418,"children":421},{"href":419,"rel":420},"https://www.mindspore.cn/mindinsight/docs/en/r1.6/migrate_3rd_scripts_mindconverter.html",[208],[422],{"type":23,"value":419},{"type":23,"value":213},{"type":17,"tag":25,"props":425,"children":426},{},[427,429,435],{"type":23,"value":428},"Submit issues to the MindSpore community: ",{"type":17,"tag":204,"props":430,"children":433},{"href":431,"rel":432},"https://gitee.com/mindspore/mindinsight/issues",[208],[434],{"type":23,"value":431},{"type":23,"value":213},{"type":17,"tag":25,"props":437,"children":438},{},[439],{"type":17,"tag":34,"props":440,"children":441},{},[442],{"type":23,"value":443},"7. Quick MindSpore Experience with MindSpore Dev ToolKit",{"type":17,"tag":25,"props":445,"children":446},{},[447],{"type":23,"value":448},"MindSpore provides various powerful functions, but how can I quickly get started and try it out? MindSpore Dev ToolKit is the answer. 
This development kit provides functions such as environment management, intelligent knowledge search, and intelligent code completion. It is dedicated to enabling all users to use AI regardless of their operating environment, shifting the focus of AI development back to algorithms.",{"type":17,"tag":25,"props":450,"children":451},{},[452],{"type":17,"tag":34,"props":453,"children":454},{},[455],{"type":23,"value":456},"7.1 Environment Setup Within Minutes and One-Click Environment Management",{"type":17,"tag":25,"props":458,"children":459},{},[460],{"type":23,"value":461},"MindSpore Dev ToolKit provides a Conda-based refined management method to allow users to quickly install MindSpore, including its dependencies, and deploy best practices in an independent environment. This feature is compatible with the Ascend AI Processor, CPU, and GPU platforms, and internal test data shows that 80% of users who do not have expert-level AI knowledge can complete environment configuration and start running algorithms within as little as 5 minutes.",{"type":17,"tag":25,"props":463,"children":464},{},[465],{"type":17,"tag":96,"props":466,"children":468},{"alt":7,"src":467},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/888ef4e80c6040a0898268cc6844bea1.png",[],{"type":17,"tag":25,"props":470,"children":471},{},[472],{"type":17,"tag":34,"props":473,"children":474},{},[475],{"type":23,"value":476},"7.2 Intelligent and Immersive MindSpore Knowledge Search for Pressure-free Ecosystem Access",{"type":17,"tag":25,"props":478,"children":479},{},[480],{"type":23,"value":481},"Based on capabilities such as semantic search, MindSpore Dev Toolkit provides comprehensive MindSpore knowledge search capabilities. PyTorch or TensorFlow users can query an operator to quickly locate the corresponding implementation in MindSpore with detailed documents. 
Switching to the MindSpore ecosystem has never been so easy.",{"type":17,"tag":25,"props":483,"children":484},{},[485],{"type":17,"tag":34,"props":486,"children":487},{},[488],{"type":23,"value":489},"7.3 30% Fewer Keystrokes with Deep Learning-based Intelligent Code Completion",{"type":17,"tag":25,"props":491,"children":492},{},[493],{"type":23,"value":494},"The intelligent code completion model of MindSpore Dev Toolkit is implemented based on best practice datasets such as ModelZoo. Real-time hints are provided for code related to the MindSpore framework with an accuracy of up to 80%, reducing the number of keystrokes by over 30% according to internal test data.",{"type":17,"tag":25,"props":496,"children":497},{},[498],{"type":17,"tag":96,"props":499,"children":501},{"alt":7,"src":500},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/4177a806d50443af82ce4af5e698702e.png",[],{"type":17,"tag":25,"props":503,"children":504},{},[505,506,512],{"type":23,"value":202},{"type":17,"tag":204,"props":507,"children":510},{"href":508,"rel":509},"https://gitee.com/mindspore/ide-plugin",[208],[511],{"type":23,"value":508},{"type":23,"value":213},{"type":17,"tag":25,"props":514,"children":515},{},[516],{"type":17,"tag":34,"props":517,"children":518},{},[519],{"type":23,"value":520},"8. MindSpore Lite Continuously Improves Inference Performance",{"type":17,"tag":25,"props":522,"children":523},{},[524],{"type":17,"tag":34,"props":525,"children":526},{},[527],{"type":23,"value":528},"8.1 Heterogeneous Parallel Computing Supercharging Inference",{"type":17,"tag":25,"props":530,"children":531},{},[532],{"type":23,"value":533},"MindSpore 1.6 sees the addition of heterogeneous parallel computing for MindSpore Lite. 
This function detects heterogeneous hardware capabilities and enables multiple underlying hardware components to perform parallel inference, unleashing the full power of a device's limited hardware resources for higher inference efficiency.",{"type":17,"tag":25,"props":535,"children":536},{},[537],{"type":17,"tag":96,"props":538,"children":540},{"alt":7,"src":539},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/886b7ce62bde45f9b39479f989c38867.png",[],{"type":17,"tag":25,"props":542,"children":543},{},[544],{"type":23,"value":545},"MindSpore Lite currently supports heterogeneous parallel processing of GPUs and CPUs, with MobileNetV1 network test data showing a performance improvement of 5%.",{"type":17,"tag":25,"props":547,"children":548},{},[549],{"type":17,"tag":34,"props":550,"children":551},{},[552],{"type":23,"value":553},"8.2 GPU Inference Performance Optimized Through OpenGL Texture Data Support",{"type":17,"tag":25,"props":555,"children":556},{},[557],{"type":23,"value":558},"With MindSpore 1.6, MindSpore Lite now supports the input and output of OpenGL texture data for inference. In this way, data copy operations between the CPU and GPU are reduced during end-to-end inference. 
Compared with the previous version of MindSpore, the number of memory copy operations between devices is reduced by four.",{"type":17,"tag":25,"props":560,"children":561},{},[562],{"type":17,"tag":96,"props":563,"children":565},{"alt":7,"src":564},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/7369443dcf8b458099202ad1ad5aabeb.png",[],{"type":17,"tag":25,"props":567,"children":568},{},[569,571,577],{"type":23,"value":570},"OpenGL Texture Data Input: ",{"type":17,"tag":204,"props":572,"children":574},{"href":573},"https://www.mindspore.cn/lite/docs/en/r1.6/use/runtime_cpp.html#opengl-texture-data-input",[575],{"type":23,"value":576},"https://www.mindspore.cn/lite/docs/en/r1.6/use/runtime_cpp.html#opengl-texture-data-input",{"type":23,"value":213},{"type":17,"tag":25,"props":579,"children":580},{},[581],{"type":17,"tag":34,"props":582,"children":583},{},[584],{"type":23,"value":585},"9. MindQuantum: Getting Started with Quantum Simulation and Quantum Machine Learning",{"type":17,"tag":25,"props":587,"children":588},{},[589],{"type":23,"value":590},"MindQuantum 0.5 comes with Simulator, an independent quantum simulation module. Simulator can quickly simulate the evolution of custom quantum circuits and sample the quantum states, allowing you to design and verify your own quantum algorithms in MindQuantum with ease. 
MindQuantum also includes modules for displaying quantum circuits and quantum state sampling.",{"type":17,"tag":25,"props":592,"children":593},{},[594],{"type":23,"value":595},"Quantum circuit visualization",{"type":17,"tag":25,"props":597,"children":598},{},[599],{"type":17,"tag":96,"props":600,"children":602},{"alt":7,"src":601},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/9ec3aee9f3c24f3f82e0f35e19ccebbe.png",[],{"type":17,"tag":25,"props":604,"children":605},{},[606],{"type":23,"value":607},"Simulator and the circuit sampling function",{"type":17,"tag":25,"props":609,"children":610},{},[611],{"type":17,"tag":96,"props":612,"children":614},{"alt":7,"src":613},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/db15492812d24f9bb6201b8229f441ce.png",[],{"type":17,"tag":25,"props":616,"children":617},{},[618],{"type":23,"value":619},"In MindSpore 1.6, support for multiple quantum neural network operators has been added to help you make breakthroughs in quantum AI algorithms. You can quickly develop hybrid quantum-classical machine learning models using these operators.",{"type":17,"tag":25,"props":621,"children":622},{},[623],{"type":17,"tag":96,"props":624,"children":626},{"alt":7,"src":625},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/522019e4b0c1427d957e5d1774d3c85f.png",[],{"type":17,"tag":25,"props":628,"children":629},{},[630,632,638],{"type":23,"value":631},"MindQuantum document: ",{"type":17,"tag":204,"props":633,"children":636},{"href":634,"rel":635},"https://www.mindspore.cn/mindquantum/docs/en/r0.5/index.html",[208],[637],{"type":23,"value":634},{"type":23,"value":213},{"type":17,"tag":25,"props":640,"children":641},{},[642],{"type":17,"tag":34,"props":643,"children":644},{},[645],{"type":23,"value":646},"10. 
MindScience Boosts Protein Structure Prediction Performance by Up to 3 Times",{"type":17,"tag":25,"props":648,"children":649},{},[650],{"type":23,"value":651},"MindScience adds an open source protein structure prediction and inference tool based on AlphaFold2 [1]. The tool is jointly launched by the MindSpore team [2], Changping Lab, Biomedical Pioneering Innovation Center, College of Chemistry and Molecular Engineering of Peking University, and Advanced Intelligent Molecular Modeling Group of Shenzhen Bay Laboratory. Running on a single Ascend 910 AI Processor, the tool boosts end-to-end performance to 2 or even 3 times that of AlphaFold2.",{"type":17,"tag":25,"props":653,"children":654},{},[655],{"type":23,"value":656},"This tool can calculate the structure of a protein with a sequence length of more than 2000 amino acids, which covers more than 99% of protein sequences [3]. Compared to AlphaFold2, the tool is more advanced in the multiple sequence alignment stage. 
It adopts MMseqs2 for sequence retrieval [4], and its end-to-end computing speed is two to three times higher than AlphaFold2.",{"type":17,"tag":25,"props":658,"children":659},{},[660],{"type":23,"value":661},"Accuracy comparison between the MindSpore model and AlphaFold2 model:",{"type":17,"tag":25,"props":663,"children":664},{},[665],{"type":17,"tag":96,"props":666,"children":668},{"alt":7,"src":667},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/faead57304574654afce9138e6d67aab.png",[],{"type":17,"tag":25,"props":670,"children":671},{},[672],{"type":23,"value":673},"T1079 structure prediction result by MindSpore:",{"type":17,"tag":25,"props":675,"children":676},{},[677],{"type":17,"tag":96,"props":678,"children":680},{"alt":7,"src":679},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/5bc19e7614c344909ee7cd2be685c16c.png",[],{"type":17,"tag":25,"props":682,"children":683},{},[684],{"type":23,"value":685},"Furthermore, the joint team plans to release a full-stack solution (algorithm + software + hardware) that is innovative, independent, and controllable in the future to address protein structure prediction and folding challenges.",{"type":17,"tag":25,"props":687,"children":688},{},[689,690,696],{"type":23,"value":202},{"type":17,"tag":204,"props":691,"children":694},{"href":692,"rel":693},"https://gitee.com/mindspore/mindscience/tree/master/MindSPONGE/mindsponge",[208],[695],{"type":23,"value":692},{"type":23,"value":213},{"type":17,"tag":25,"props":698,"children":699},{},[700],{"type":23,"value":701},"References",{"type":17,"tag":25,"props":703,"children":704},{},[705],{"type":23,"value":706},"[1] Jumper J, Evans R, Pritzel A, et al. Applying and improving AlphaFold at CASP14[J]. Proteins: Structure, Function, and Bioinformatics, 2021",{"type":17,"tag":25,"props":708,"children":709},{},[710],{"type":23,"value":711},"[2] Chen L. Deep Learning and Practice with MindSpore[M]. 
Springer Nature, 2021.",{"type":17,"tag":25,"props":713,"children":714},{},[715,717],{"type":23,"value":716},"[3] ",{"type":17,"tag":204,"props":718,"children":721},{"href":719,"rel":720},"https://ftp.uniprot.org/pub/databases/uniprot/previous_releases/release-2021_02/knowledgebase/UniProtKB_TrEMBL-relstat.html",[208],[722],{"type":23,"value":723},"https://ftp.uniprot.org/pub/databases/uniprot/previous_releases/release-2021_02/knowledgebase/UniProtKB_TrEMBL-relstat.html",{"type":17,"tag":25,"props":725,"children":726},{},[727],{"type":23,"value":728},"[4] Mirdita M, Ovchinnikov S, Steinegger M. ColabFold: Making protein folding accessible to all[J]. bioRxiv, 2021.",{"type":17,"tag":25,"props":730,"children":731},{},[732],{"type":17,"tag":96,"props":733,"children":735},{"alt":7,"src":734},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2022/03/18/22e5eae693dd4d579d22b52b1f728148.png",[],{"title":7,"searchDepth":737,"depth":737,"links":738},4,[],"markdown","content:version-updates:en:1080.md","content","version-updates/en/1080.md","version-updates/en/1080","md",1776506142786]