[{"data":1,"prerenderedAt":141},["ShallowReactive",2],{"content-query-U2CWdcqoxa":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":135,"_id":136,"_source":137,"_file":138,"_stem":139,"_extension":140},"/technology-blogs/en/2595","en",false,"","Idea Sharing (28): Symmetry and AI for Science","Looking back at the history of neural networks, we find that symmetry, though inconspicuous, has always played an important role, one that geometric deep learning makes explicit.","2023-06-05","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/03/8dd2cce8952241eeb5ead0060423296e.png","technology-blogs","Practices",{"type":15,"children":16,"toc":132},"root",[17,25,31,39,44,64,90,95,100,107,112,117,122,127],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"idea-sharing-28-symmetry-and-ai-for-science",[23],{"type":24,"value":8},"text",{"type":18,"tag":26,"props":27,"children":28},"p",{},[29],{"type":24,"value":30},"Looking back at the history of neural networks, we find that symmetry, though inconspicuous, has always played an important role, one that geometric deep learning makes explicit. On the one hand, in the fields of computer vision (CV) and natural language processing (NLP), Transformer-based network architectures have amazed the public in recent years. On the other hand, symmetry plays an important role as a first principle of natural science. 
Therefore, we have reason to believe that geometric deep learning will contribute greatly to the combination of AI and science through the lens of symmetry.",{"type":18,"tag":26,"props":32,"children":33},{},[34],{"type":18,"tag":35,"props":36,"children":38},"img",{"alt":7,"src":37},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/03/277877810b50404f86f1f5459818e27f.png",[],{"type":18,"tag":26,"props":40,"children":41},{},[42],{"type":24,"value":43},"In fact, geometric deep learning has already made remarkable achievements in the AI for Science field and will play an increasingly important role. AlphaFold 2, powered by graph neural networks and the Transformer, achieves high accuracy in predicting protein structures. In the prediction and generation of small-molecule structures, building the rigid-transformation symmetry of Euclidean space (the E(3) group) into graph neural networks can significantly improve computational precision and reduce training complexity. Global weather prediction involves convolution on manifolds and gauge transformations of coordinates, since the Earth's surface is a two-dimensional sphere; here geometric deep learning provides a systematic theoretical framework for neural network design. In cosmology, spacetime is curved by gravity, and geometric deep learning provides a theoretical basis for combining curved Riemannian manifold structures with AI.",{"type":18,"tag":26,"props":45,"children":46},{},[47,49,55,57,62],{"type":24,"value":48},"On the other hand, in basic science fields such as condensed matter physics and quantum physics, AI has not yet been deeply integrated: although these fields possess comprehensive and systematic theoretical knowledge, high-quality data is difficult to obtain. 
Therefore, how to ",{"type":18,"tag":50,"props":51,"children":52},"strong",{},[53],{"type":24,"value":54},"\"inject\" existing knowledge",{"type":24,"value":56}," into neural networks and ",{"type":18,"tag":50,"props":58,"children":59},{},[60],{"type":24,"value":61},"improve data utilization",{"type":24,"value":63}," is particularly critical.",{"type":18,"tag":26,"props":65,"children":66},{},[67,69,74,76,81,83,88],{"type":24,"value":68},"Group-equivariant neural networks show striking advantages in data utilization. Consider, for example, classifying the eight three-dimensional Tetris shapes. Traditionally, each shape requires extensive data augmentation by spatial rotation, which increases the data volume and training complexity without guaranteeing prediction accuracy. In an E(3)-equivariant neural network, each shape requires only a single sample, or even fewer if two shapes are related by an E(3) transformation. In addition to a greatly reduced ",{"type":18,"tag":50,"props":70,"children":71},{},[72],{"type":24,"value":73},"data demand",{"type":24,"value":75},", the ",{"type":18,"tag":50,"props":77,"children":78},{},[79],{"type":24,"value":80},"prediction accuracy",{"type":24,"value":82}," can be ensured theoretically, and the neural network has better ",{"type":18,"tag":50,"props":84,"children":85},{},[86],{"type":24,"value":87},"interpretability and expressive capability",{"type":24,"value":89},".",{"type":18,"tag":26,"props":91,"children":92},{},[93],{"type":24,"value":94},"Among all existing knowledge, symmetry is especially profound and fundamental, and it is key to explaining the laws of nature. For example, the spacetime of high-energy particles features Lorentz group symmetry (the SO(1,3) group), so Lorentz-group-equivariant neural networks can play an important role in high-energy physics in the future. 
Modern physics is described by field theory, whose mathematical basis is likewise differential geometry and fiber bundles. It is foreseeable that geometric deep learning will lead the combination of modern physics and AI.",{"type":18,"tag":26,"props":96,"children":97},{},[98],{"type":24,"value":99},"For an AI for Science foundation model, geometric deep learning suggests a simple but bold vision. First, a neural network learns the symmetry of the input system, for example, the Lie algebra of its symmetry group [3]. Then, a mechanism controls the strength of the symmetry constraint, so that the network can choose which symmetry group to apply. In this way, the network can adapt to the approximate symmetries of the input system. Of course, future foundation models will probably not be this simple; many problems remain to be solved, and we look forward to witnessing further progress.",{"type":18,"tag":26,"props":101,"children":102},{},[103],{"type":18,"tag":35,"props":104,"children":106},{"alt":7,"src":105},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/07/03/b914c9b3d89d46f9a686b04db16dd535.png",[],{"type":18,"tag":26,"props":108,"children":109},{},[110],{"type":24,"value":111},"The Lie algebra of the SO(2) group learned from data by L-conv",{"type":18,"tag":26,"props":113,"children":114},{},[115],{"type":24,"value":116},"References:",{"type":18,"tag":26,"props":118,"children":119},{},[120],{"type":24,"value":121},"[1] Bronstein, Michael M., et al. \"Geometric deep learning: Grids, groups, graphs, geodesics, and gauges.\" arXiv preprint arXiv:2104.13478 (2021).",{"type":18,"tag":26,"props":123,"children":124},{},[125],{"type":24,"value":126},"[2] Weiler, Maurice, et al. 
\"Coordinate Independent Convolutional Networks--Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds.\" arXiv preprint arXiv:2106.06020 (2021).",{"type":18,"tag":26,"props":128,"children":129},{},[130],{"type":24,"value":131},"[3] Dehmamy, Nima, et al. \"Automatic symmetry discovery with Lie algebra convolutional network.\" Advances in Neural Information Processing Systems 34 (2021): 2503-2515.",{"title":7,"searchDepth":133,"depth":133,"links":134},4,[],"markdown","content:technology-blogs:en:2595.md","content","technology-blogs/en/2595.md","technology-blogs/en/2595","md",1776506106701]