{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems\n", "\n", "[![DownloadNotebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/resource/_static/logo_notebook_en.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.5.0/mindflow/en/data_mechanism_fusion/mindspore_phympgn.ipynb) [![DownloadCode](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/resource/_static/logo_download_code_en.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.5.0/mindflow/en/data_mechanism_fusion/mindspore_phympgn.py) [![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.5.0/docs/mindflow/docs/source_en/data_mechanism_fusion/phympgn.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Complex dynamical systems governed by partial differential equations (PDEs) exist in a wide variety of disciplines. Recent progresses have demonstrated grand benefits of data-driven neural-based models for predicting spatiotemporal dynamics.\n", "\n", "Physics-encoded Message Passing Graph Network (PhyMPGN) is capable to model spatiotemporal PDE systems on irregular meshes given small training datasets. Specifically:\n", "\n", "- A physics-encoded grapph learning model with the message-passing mechanism is proposed, where the temporal marching is realized via a second-order numerical integrator (e.g. Runge-Kutta scheme)\n", "- Considering the universality of diffusion processes in physical phenomena, a learnable Laplace Block is designed, which encodes the discrete Laplace-Beltrami operator\n", "- A novel padding strategy to encode different types of BCs into the learning model is proposed.\n", "\n", "Paper link: [https://arxiv.org/abs/2410.01337](https://gitee.com/link?target=https%3A%2F%2Farxiv.org%2Fabs%2F2410.01337)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's consider complex physical systems, governed by spatiotemporal PDEs in the general form:\n", "\n", "$$\n", "\\begin{equation}\n", "\\dot {\\boldsymbol {u}}(\\boldsymbol x, t) = \\boldsymbol F (t, \\boldsymbol x, \\boldsymbol u, \\nabla \\boldsymbol u, \\Delta \\boldsymbol u, \\dots)\n", "\\end{equation}\n", "$$\n", "\n", "where $\\boldsymbol u(\\boldsymbol x, y) \\in \\mathbb{R}^m$ is the vector of state variable with $m$ components,such as velocity, temperature or pressure, defined over the spatiotemporal domain $\\{ \\boldsymbol x, t \\} \\in \\Omega \\times [0, \\mathcal{T}]$. Here, $\\dot{\\boldsymbol u}$ denotes the derivative with respect to time and $\\boldsymbol F$ is a nonlinear operator that depends on the current state $\\boldsymbol u$ and its spatial derivatives.\n", "\n", "We focus on a spatial domain $\\Omega$ with non-uniformly and sparsely observed nodes $\\{ \\boldsymbol x_0, \\dots, \\boldsymbol x_{N-1} \\}$ (e.g., on an unstructured mesh). Observations $\\{ \\boldsymbol U(t_0), \\dots, \\boldsymbol U(t_{T-1}) \\}$ are collected at time points $t_0, ... \\dots, t_{T- 1}$, where $\\boldsymbol U(t_i) = \\{ \\boldsymbol u(\\boldsymbol x_0, t_i), \\dots, \\boldsymbol u (\\boldsymbol x_{N-1}, t_i) \\}$ denote the physical quantities. 
Considering that many physical phenomena involve diffusion processes, we assume the diffusion term of the PDE is known a priori. Our goal is to develop a graph learning model, trained on small datasets, that accurately predicts various spatiotemporal dynamics on coarse unstructured meshes, handles different types of BCs, and produces the trajectory of dynamics for an arbitrarily given IC." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This case demonstrates how PhyMPGN solves the cylinder flow problem." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The dynamical system of two-dimensional cylinder flow is governed by the Navier-Stokes equations\n", "\n", "$$\n", "\\begin{equation}\n", "\\dot{\\boldsymbol{u}} = - \\boldsymbol{u} \\cdot \\nabla \\boldsymbol{u} -\\frac{1}{\\rho} \\nabla p + \\frac{\\mu}{\\rho} \\Delta \\boldsymbol{u} + \\boldsymbol{f}\n", "\\tag{2}\n", "\\end{equation}\n", "$$\n", "\n", "where the fluid density $\\rho$ is 1, the fluid viscosity $\\mu$ is $5\\times10^{-3}$, and the external force $\\boldsymbol f$ is 0. The cylinder flow system has an inlet on the left boundary, an outlet on the right boundary, a no-slip boundary condition on the cylinder surface, and symmetric boundary conditions on the top and bottom boundaries. This case study focuses on generalizing the inflow velocity $U_m$ while keeping the fluid density $\\rho$, cylinder diameter $D=2$, and fluid viscosity $\\mu$ constant. Since the Reynolds number is defined as $Re=\\rho U_m D/ \\mu$, generalizing over the inflow velocity $U_m$ is equivalent to generalizing over the Reynolds number." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Architecture" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![PhyMPGN network structure](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/docs/mindflow/docs/source_en/data_mechanism_fusion/images/phympgn.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For Equation (1), a second-order Runge-Kutta (RK2) scheme can be used for time discretization:\n", "\n", "$$\n", "\\begin{equation}\n", "\\boldsymbol u^{k+1} = \\boldsymbol u^k + \\frac{\\delta t}{2}(\\boldsymbol g_1 + \\boldsymbol g_2); \\quad \\boldsymbol g_1 = \\boldsymbol F(t^k, \\boldsymbol x, \\boldsymbol u^k, \\dots); \\quad \\boldsymbol g_2 = \\boldsymbol F(t^{k+1}, \\boldsymbol x, \\boldsymbol u^k + \\delta t \\boldsymbol g_1, \\dots)\n", "\\tag{3}\n", "\\end{equation}\n", "$$\n", "\n", "where $\\boldsymbol u^k$ is the state variable at time $t^k$, and $\\delta t$ denotes the time interval between $t^k$ and $t^{k+1}$. According to Equation (3), we develop a GNN to learn the nonlinear operator $\\boldsymbol F$.\n", "\n", "As shown in the figure above, the NN block aims to learn the nonlinear operator $\\boldsymbol F$ and consists of two parts: a GNN block that follows the Encode-Process-Decode module and a learnable Laplace block. Due to the universality of diffusion processes in physical phenomena, we design the learnable Laplace block, which encodes the discrete Laplace-Beltrami operator, to learn the increment caused by the diffusion term in the PDE, while the GNN block is responsible for learning the increment induced by other unknown mechanisms or sources."
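, "\n", "The following snippet is a minimal, framework-agnostic sketch of the RK2 time marching above. It assumes a callable `f_nn(t, u)` that plays the role of $\\boldsymbol F$ by summing the outputs of the GNN block and the Laplace block; the names are illustrative and do not correspond to the repository API.\n", "\n", "```python\n", "def rk2_step(f_nn, t_k, u_k, dt):\n", "    # g1: slope evaluated at the current state u^k\n", "    g1 = f_nn(t_k, u_k)\n", "    # g2: slope evaluated at the predicted state u^k + dt * g1\n", "    g2 = f_nn(t_k + dt, u_k + dt * g1)\n", "    # Heun (RK2) update: u^{k+1} = u^k + dt / 2 * (g1 + g2)\n", "    return u_k + 0.5 * dt * (g1 + g2)\n", "\n", "\n", "def rollout(f_nn, u0, dt, n_steps):\n", "    # march the state forward by repeatedly applying the RK2 step\n", "    trajectory = [u0]\n", "    for k in range(n_steps):\n", "        trajectory.append(rk2_step(f_nn, k * dt, trajectory[-1], dt))\n", "    return trajectory\n", "```\n"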
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Make sure the required dependency libraries (such as MindSpore) have been installed\n", "- Ensure the [cylinder flow dataset](https://download-mindspore.osinfra.cn/mindscience/mindflow/dataset/applications/data_mechanism_fusion/PhyMPGN/) has been downloaded\n", "- Verify that the data and model weight storage paths have been properly configured in the [yamls/train.yaml](https://gitee.com/mindspore/mindscience/blob/r0.7/MindFlow/applications/data_mechanism_fusion/phympgn/yamls/train.yaml) configuration file" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Code Execution Steps" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The code execution flow consists of the following steps:\n", "\n", "1. Read configuration file\n", "2. Build dataset\n", "3. Construct model\n", "4. Model training\n", "5. Model inference\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reading Configuration File" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from mindflow.utils import log_config, load_yaml_config, print_log\n", "from easydict import EasyDict\n", "import os.path as osp\n", "from pathlib import Path\n", "\n", "\n", "def load_config(config_file_path, train):\n", " config = load_yaml_config(config_file_path)\n", " config['train'] = train\n", " config = EasyDict(config)\n", " log_dir = './logs'\n", " if train:\n", " log_file = f'phympgn-{config.experiment_name}'\n", " else:\n", " log_file = f'phympgn-{config.experiment_name}-te'\n", " if not osp.exists(osp.join(log_dir, f'{log_file}.log')):\n", " Path(osp.join(log_dir, f'{log_file}.log')).touch()\n", " log_config(log_dir, log_file)\n", " print_log(config)\n", " return config" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "config_file_path = 'yamls/train.yaml'\n", "config = load_config(config_file_path=config_file_path, train=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mindspore as ms\n", "\n", "ms.set_device(device_target='Ascend', device_id=7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building Dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from src import PDECFDataset, get_data_loader\n", "\n", "\n", "print_log('Train...')\n", "print_log('Loading training data...')\n", "tr_dataset = PDECFDataset(\n", " root=config.path.data_root_dir,\n", " raw_files=config.path.tr_raw_data,\n", " dataset_start=config.data.dataset_start,\n", " dataset_used=config.data.dataset_used,\n", " time_start=config.data.time_start,\n", " time_used=config.data.time_used,\n", " window_size=config.data.tr_window_size,\n", " training=True\n", ")\n", "tr_loader = get_data_loader(\n", " dataset=tr_dataset,\n", " batch_size=config.optim.batch_size\n", ")\n", "\n", "print_log('Loading validation data...')\n", "val_dataset = PDECFDataset(\n", " root=config.path.data_root_dir,\n", " raw_files=config.path.val_raw_data,\n", " dataset_start=config.data.dataset_start,\n", " dataset_used=config.data.dataset_used,\n", " time_start=config.data.time_start,\n", " time_used=config.data.time_used,\n", " window_size=config.data.val_window_size\n", ")\n", "val_loader = get_data_loader(\n", " dataset=val_dataset,\n", " batch_size=config.optim.batch_size\n", ")" ] }, { "cell_type": 
"markdown", "metadata": {}, "source": [ "## Constructing Model" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from src import PhyMPGN\n", "\n", "print_log('Building model...')\n", "model = PhyMPGN(\n", " encoder_config=config.network.encoder_config,\n", " mpnn_block_config=config.network.mpnn_block_config,\n", " decoder_config=config.network.decoder_config,\n", " laplace_block_config=config.network.laplace_block_config,\n", " integral=config.network.integral\n", ")\n", "print_log(f'Number of parameters: {model.num_params}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model training" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from mindflow import get_multi_step_lr\n", "from mindspore import nn\n", "import numpy as np\n", "\n", "from src import Trainer, TwoStepLoss\n", "\n", "\n", "lr_scheduler = get_multi_step_lr(\n", " lr_init=config.optim.lr,\n", " milestones=list(np.arange(0, config.optim.start_epoch+config.optim.epochs,\n", " step=config.optim.steplr_size)[1:]),\n", " gamma=config.optim.steplr_gamma,\n", " steps_per_epoch=len(tr_loader),\n", " last_epoch=config.optim.start_epoch+config.optim.epochs-1\n", ")\n", "optimizer = nn.AdamWeightDecay(model.trainable_params(), learning_rate=lr_scheduler,\n", " eps=1.0e-8, weight_decay=1.0e-2)\n", "trainer = Trainer(\n", " model=model, optimizer=optimizer, scheduler=lr_scheduler, config=config,\n", " loss_func=TwoStepLoss()\n", ")\n", "trainer.train(tr_loader, val_loader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Epoch 1/1600] Batch Time: 2.907 (3.011) Data Time: 0.021 (0.035) Graph Time: 0.004 (0.004) Grad Time: 2.863 (2.873) Optim Time: 0.006 (0.022)\n", "\n", "[Epoch 1/1600] Batch Time: 1.766 (1.564) Data Time: 0.022 (0.044) Graph Time: 0.003 (0.004)\n", "\n", "[Epoch 1/1600] tr_loss: 1.36e-02 val_loss: 1.29e-02 [MIN]\n", "\n", "[Epoch 2/1600] Batch Time: 3.578 (3.181) Data Time: 0.024 (0.038) Graph Time: 0.004 (0.004) Grad Time: 3.531 (3.081) Optim Time: 0.004 (0.013)\n", "\n", "[Epoch 2/1600] Batch Time: 1.727 (1.664) Data Time: 0.023 (0.042) Graph Time: 0.003 (0.004)\n", "\n", "[Epoch 2/1600] tr_loss: 1.15e-02 val_loss: 9.55e-03 [MIN]\n", "\n", "...\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Inference" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "config_file_path = 'yamls/train.yaml'\n", "config = load_config(config_file_path=config_file_path, train=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mindspore as ms\n", "\n", "ms.set_device(device_target='Ascend', device_id=7)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from src import PDECFDataset, get_data_loader, Trainer, PhyMPGN\n", "from mindspore import nn\n", "\n", "\n", "# test datasets\n", "te_dataset = PDECFDataset(\n", " root=config.path.data_root_dir,\n", " raw_files=config.path.te_raw_data,\n", " dataset_start=config.data.te_dataset_start,\n", " dataset_used=config.data.te_dataset_used,\n", " time_start=config.data.time_start,\n", " time_used=config.data.time_used,\n", " window_size=config.data.te_window_size,\n", " training=False\n", ")\n", "te_loader = get_data_loader(\n", " dataset=te_dataset,\n", " batch_size=1,\n", " shuffle=False,\n", ")\n", "print_log('Building model...')\n", "model = PhyMPGN(\n", " 
, { "cell_type": "markdown", "metadata": {}, "source": [ "[Epoch 1/1600] Batch Time: 2.907 (3.011) Data Time: 0.021 (0.035) Graph Time: 0.004 (0.004) Grad Time: 2.863 (2.873) Optim Time: 0.006 (0.022)\n", "\n", "[Epoch 1/1600] Batch Time: 1.766 (1.564) Data Time: 0.022 (0.044) Graph Time: 0.003 (0.004)\n", "\n", "[Epoch 1/1600] tr_loss: 1.36e-02 val_loss: 1.29e-02 [MIN]\n", "\n", "[Epoch 2/1600] Batch Time: 3.578 (3.181) Data Time: 0.024 (0.038) Graph Time: 0.004 (0.004) Grad Time: 3.531 (3.081) Optim Time: 0.004 (0.013)\n", "\n", "[Epoch 2/1600] Batch Time: 1.727 (1.664) Data Time: 0.023 (0.042) Graph Time: 0.003 (0.004)\n", "\n", "[Epoch 2/1600] tr_loss: 1.15e-02 val_loss: 9.55e-03 [MIN]\n", "\n", "...\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model Inference" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "config_file_path = 'yamls/train.yaml'\n", "config = load_config(config_file_path=config_file_path, train=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mindspore as ms\n", "\n", "ms.set_device(device_target='Ascend', device_id=7)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from src import PDECFDataset, get_data_loader, Trainer, PhyMPGN\n", "from mindspore import nn\n", "\n", "\n", "# test dataset and loader\n", "te_dataset = PDECFDataset(\n", "    root=config.path.data_root_dir,\n", "    raw_files=config.path.te_raw_data,\n", "    dataset_start=config.data.te_dataset_start,\n", "    dataset_used=config.data.te_dataset_used,\n", "    time_start=config.data.time_start,\n", "    time_used=config.data.time_used,\n", "    window_size=config.data.te_window_size,\n", "    training=False\n", ")\n", "te_loader = get_data_loader(\n", "    dataset=te_dataset,\n", "    batch_size=1,\n", "    shuffle=False,\n", ")\n", "print_log('Building model...')\n", "model = PhyMPGN(\n", "    encoder_config=config.network.encoder_config,\n", "    mpnn_block_config=config.network.mpnn_block_config,\n", "    decoder_config=config.network.decoder_config,\n", "    laplace_block_config=config.network.laplace_block_config,\n", "    integral=config.network.integral\n", ")\n", "print_log(f'Number of parameters: {model.num_params}')\n", "trainer = Trainer(\n", "    model=model, optimizer=None, scheduler=None, config=config,\n", "    loss_func=nn.MSELoss()\n", ")\n", "print_log('Test...')\n", "trainer.test(te_loader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[TEST 0/9] MSE at 2000t: 5.06e-04, armse: 0.058, time: 185.3432s\n", "\n", "[TEST 1/9] MSE at 2000t: 4.83e-04, armse: 0.040, time: 186.3979s\n", "\n", "[TEST 2/9] MSE at 2000t: 1.95e-03, armse: 0.062, time: 177.0030s\n", "\n", "...\n", "\n", "[TEST 8/9] MSE at 2000t: 1.42e-01, armse: 0.188, time: 163.1219s\n", "\n", "[TEST 9] Mean Loss: 4.88e-02, Mean armse: 0.137, corre: 0.978, time: 173.3827s" ] } ], "metadata": { "kernelspec": { "display_name": "zbc_ms2.5.0", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 2 }