{ "cells": [ { "cell_type": "markdown", "id": "12326aa7", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "# Reduced order model for three-dimensional unsteady flow\n", "\n", "[![DownloadNotebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/resource/_static/logo_notebook_en.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.5.0/mindflow/en/data_driven/mindspore_flow_around_sphere.ipynb) [![DownloadCode](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/resource/_static/logo_download_code_en.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.5.0/mindflow/en/data_driven/mindspore_flow_around_sphere.py) [![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.5.0/docs/mindflow/docs/source_en/data_driven/flow_around_sphere.ipynb)" ] }, { "cell_type": "markdown", "id": "aa4bc6f5", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Overview" ] }, { "cell_type": "markdown", "id": "c0e74a98", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Three-dimensional complex flows are prevalent in practical engineering problems, and effectively capturing and analyzing the flow field poses significant challenges. The CFD simulation of 3D flow fields involving complex geometries demands substantial computing resources due to the increased mesh degrees of freedom. Consequently, this limitation hinders the progress of downstream tasks like interactive design and real-time optimal control.\n", "\n", "While there has been extensive exploration of deep learning technology for flow field modeling in recent years, most of the focus has remained on two-dimensional shapes. As a result, there is still a noticeable gap when applying these models to real-world engineering scenarios. The stronger spatial coupling effect in 3D data compared to 2D data is a primary reason for this disparity. Additionally, training neural networks with a large number of model parameters requires robust computing power and ample storage resources.\n", "\n", "For 3D unsteady flow, the reduced-order model based on the fully convolutional neural network called \"**ResUnet3D**\" can quickly establish the nonlinear mapping between snapshots of the 3D flow field, offering a promising approach to tackle these challenges." ] }, { "cell_type": "markdown", "id": "2b438c6c", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Problem Description" ] }, { "cell_type": "markdown", "id": "9e622cca", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Acquiring high-precision three-dimensional flow field data poses challenges for the industry. However, this model takes a different approach by not relying on a large number of historical flow field snapshots for support.Instead, it directly extracts the potential characteristics of the flow field from the current snapshot $F^t$ at a single moment. Subsequently, two strategies for feature decoding are proposed:" ] }, { "cell_type": "markdown", "id": "11910754", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "1. 
Low-dimensional multi-scale features can be decoded into incremental flow fields $\\Delta {F}^t$ amplified by a certain factor $scale$, enabling long-term predictions of the three-dimensional unsteady flow field through an iterative strategy:" ] }, { "cell_type": "markdown", "id": "46d6df68", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "$$scale\\cdot \\Delta {F^t}=f\\left( F^t \\right)$$" ] }, { "cell_type": "markdown", "id": "95989fd0", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "$$F^{t+1}=F^{t}+\\Delta {F^t}$$" ] }, { "cell_type": "markdown", "id": "b7ff5fbb", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "2. The other strategy is to decode the features directly into the flow field at the next moment, in which case the data usually needs to be normalized first. Similarly, long-term predictions are achieved through iteration:" ] }, { "cell_type": "markdown", "id": "27ce2526", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "$$F^{t+1}=f\\left( F^t \\right)$$" ] }, { "cell_type": "markdown", "id": "6aaa2bb4", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Technology Path" ] }, { "cell_type": "markdown", "id": "ffdfc354", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "The technical approach to the above problem mainly includes the following steps:\n", "\n", "1. Dataset loading;\n", "\n", "2. Model construction;\n", "\n", "3. Optimizer and loss function;\n", "\n", "4. Model training;\n", "\n", "5. Model inference;\n", "\n", "6. Result visualization." ] }, { "cell_type": "markdown", "id": "b60001e8", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Model structure" ] }, { "cell_type": "markdown", "id": "f5ccbebb", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "The proposed neural network model follows the paradigm of an encoder-decoder architecture, which exhibits a symmetrical U-shaped structure. The main difference lies in the replacement of traditional convolutions with convolutional residual blocks." ] }, { "cell_type": "markdown", "id": "66076fd9", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "![ResUnet3D.jpg](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/docs/mindflow/docs/source_en/data_driven/images/ResUnet3D.jpg)" ] }, { "cell_type": "markdown", "id": "a1ef72cd", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "- **Encoder**: The left side of the network, known as the contracting path, is responsible for hierarchically extracting the latent features of the high-dimensional flow field. The encoder consists of four downsampling residual blocks, as illustrated in Fig(a). Downsampling is accomplished by utilizing convolutional operations with a stride of 2 instead of pooling operations. Following each residual block operation, the number of feature channels is doubled while the size of the feature map is halved." ] }, { "cell_type": "markdown", "id": "346bcb3c", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "- **Decoder**: On the right side of the network, referred to as the expansive path, the low-dimensional features are upsampled. Correspondingly, the decoding part also includes four upsampling residual blocks, with the structure of the upsampling residual block shown in Fig(b). The first step involves the application of deconvolution to increase the size of the original features by a factor of two while reducing the number of feature channels. It should be noted that the upsampling output block (c) responsible for the final output of the model discards the identity connection part in the upsampling residual block." ] }, { "cell_type": "markdown", "id": "ec103905", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "- **Residual Connections**: In addition to the residual connections within the residual blocks, we also introduced skip connections in our model, indicated by the solid gray arrows in the architecture diagram above. The increased number of residual connections helps in capturing low-frequency features of the high-dimensional flow field, further enriching the details of the flow field prediction." ] }, { "cell_type": "markdown", "id": "9107c710", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "![blocks.jpg](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/docs/mindflow/docs/source_en/data_driven/images/blocks.jpg)\n" ] },
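{ "cell_type": "markdown", "id": "c3a9d001", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "To make the block structure more concrete, the following is a minimal sketch of a downsampling residual block in MindSpore. It only illustrates the idea of Fig(a) and is not the actual implementation of `ResUnet3D` in the `src` package: the class name `DownResBlockSketch`, the ReLU activation, and the absence of normalization layers are assumptions made here for brevity." ] }, { "cell_type": "code", "execution_count": null, "id": "c3a9d002", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from mindspore import nn\n", "\n", "\n", "class DownResBlockSketch(nn.Cell):\n", "    \"\"\"Sketch of a downsampling residual block: stride-2 convolutions plus a projected identity path.\"\"\"\n", "    def __init__(self, in_channels, out_channels):\n", "        super().__init__()\n", "        # main path: the stride-2 convolution halves D/H/W while changing the channel count,\n", "        # a second convolution refines the downsampled features\n", "        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3, stride=2, pad_mode='same')\n", "        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3, stride=1, pad_mode='same')\n", "        # identity path: 1x1x1 stride-2 convolution so the shortcut matches the downsampled shape\n", "        self.shortcut = nn.Conv3d(in_channels, out_channels, kernel_size=1, stride=2, pad_mode='same')\n", "        self.act = nn.ReLU()\n", "\n", "    def construct(self, x):\n", "        out = self.act(self.conv1(x))\n", "        out = self.conv2(out)\n", "        return self.act(out + self.shortcut(x))\n", "\n", "# e.g. DownResBlockSketch(4, 32) halves each spatial dimension of a (N, 4, D, H, W) input" ] },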
{ "cell_type": "markdown", "id": "558ee4ab", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Dataset" ] }, { "cell_type": "markdown", "id": "0641feb8", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "The dataset is a multidimensional array of flow snapshots of the three-dimensional flow around a sphere:\n", "\n", "- The viscous flow around a three-dimensional sphere is a fundamental problem in fluid mechanics, particularly in engineering applications. For this research, a Reynolds number of Re=300 is chosen. At this value, the sphere's wake will periodically shed hairpin-shaped vortices, resulting in pronounced unsteady characteristics.\n", "\n", "- The flow configuration and calculation details for flow around a sphere are given in the [paper](https://arxiv.org/abs/2307.07323). The final flow dataset, obtained in that paper by uniform Cartesian interpolation over a 6D×6D×6D three-dimensional domain, is denoted as $F\\in {{\\mathbb{R}}^{T\\times C\\times H\\times W\\times D}}$, where $T=400$ represents the number of snapshots and $C=4$ represents the number of channels, corresponding to pressure, streamwise velocity, normal velocity, and spanwise velocity, respectively. 
Additionally, $H=128$, $W=64$, and $D=64$ correspond to the height, width, and depth of the snapshots, respectively.\n", "\n", "- [download link](https://download.mindspore.cn/mindscience/mindflow/dataset/applications/data_driven/3d_unsteady_flow)" ] }, { "cell_type": "code", "execution_count": 1, "id": "0580e5aa", "metadata": { "pycharm": { "name": "#%%\n" }, "scrolled": false }, "outputs": [], "source": [ "import os\n", "import time\n", "import argparse\n", "\n", "import numpy as np\n", "\n", "import mindspore\n", "from mindspore import nn, context, ops, jit, set_seed\n", "from mindspore import Tensor\n", "from mindspore.amp import DynamicLossScaler, auto_mixed_precision, all_finite\n", "\n", "from mindflow.utils import load_yaml_config\n", "from mindflow.common import get_warmup_cosine_annealing_lr\n", "\n", "from src import ResUnet3D, create_dataset, UnsteadyFlow3D, check_file_path, calculate_metric" ] }, { "cell_type": "code", "execution_count": 2, "id": "02e3950e", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "set_seed(123456)\n", "np.random.seed(123456)" ] }, { "cell_type": "markdown", "id": "799aef11", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Training environment" ] }, { "cell_type": "markdown", "id": "559b9b8b", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "- The static GRAPH mode of the MindSpore framework is adopted for training.\n", "\n", "- Training can be done on GPU (default) or Ascend (single card)." ] }, { "cell_type": "code", "execution_count": 3, "id": "3e732d9c", "metadata": { "pycharm": { "name": "#%%\n" }, "scrolled": true }, "outputs": [], "source": [ "def parse_args():\n", " \"\"\"Parse input args\"\"\"\n", " parser = argparse.ArgumentParser(description='model train for 3d unsteady flow')\n", " parser.add_argument(\"--mode\", type=str, default=\"GRAPH\", choices=[\"GRAPH\", \"PYNATIVE\"],\n", " help=\"Context mode, support 'GRAPH', 'PYNATIVE'\")\n", " parser.add_argument(\"--save_graphs\", type=bool, default=False, choices=[True, False],\n", " help=\"Whether to save intermediate compilation graphs\")\n", " parser.add_argument(\"--save_graphs_path\", type=str, default=\"./graphs\")\n", " parser.add_argument(\"--device_target\", type=str, default=\"GPU\", choices=[\"GPU\", \"Ascend\"],\n", " help=\"The target device to run, support 'Ascend', 'GPU'\")\n", " parser.add_argument(\"--device_id\", type=int, default=0, help=\"ID of the target device\")\n", " parser.add_argument(\"--config_file_path\", type=str, default=\"./config.yaml\")\n", " parser.add_argument(\"--norm\", type=bool, default=False, choices=[True, False],\n", " help=\"Whether to perform data normalization on original data\")\n", " parser.add_argument(\"--residual_mode\", type=bool, default=True, choices=[True, False],\n", " help=\"Whether to use indirect prediction mode\")\n", " parser.add_argument(\"--scale\", type=float, default=1000.0,\n", " help=\"Scale factor applied to the incremental flow field in indirect prediction mode\")\n", " input_args = parser.parse_args()\n", " return input_args\n", "\n", "args = parse_args()\n", "context.set_context(mode=context.GRAPH_MODE if args.mode.upper().startswith(\"GRAPH\") else context.PYNATIVE_MODE,\n", " save_graphs=args.save_graphs,\n", " save_graphs_path=args.save_graphs_path,\n", " device_target=args.device_target,\n", " device_id=args.device_id)\n", "use_ascend = context.get_context(attr_key='device_target') == \"Ascend\"" ] }, { "cell_type": "markdown", "id": "533ab74d", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ 
"## Hyperparameter configuration" ] }, { "cell_type": "markdown", "id": "96a4e27b", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Read several types of parameters from the configuration file, which are related to data, model, optimizer, and summary." ] }, { "cell_type": "code", "execution_count": 4, "id": "058a3e5d", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "config = load_yaml_config(args.config_file_path)\n", "data_params = config[\"data\"]\n", "model_params = config[\"model\"]\n", "optimizer_params = config[\"optimizer\"]\n", "summary_params = config[\"summary\"]" ] }, { "cell_type": "markdown", "id": "79620b34", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Dataset loading" ] }, { "cell_type": "markdown", "id": "85f36c46", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "- The downloaded data is the original flow data (original_data_0.npy), where the index can be used to distinguish different flow states.\n", "\n", "- The first run of the program will generate training (train_data_0.npy), validation (eval_data_0.npy) and inference (infer_data_0) datasets according to the configuration requirements of regularization, division ratio, etc.\n", "\n", "- The size of the training dataset is (T, C, D, H, W) -> (300, 4, 64, 128, 64), and then it will be converted to MindSpore's dedicated DatasetGenerator." ] }, { "cell_type": "code", "execution_count": 5, "id": "36154624", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# data for training\n", "train_loader = create_dataset(data_params, is_train=True, norm=args.norm, residual=args.residual_mode, scale=args.scale)\n", "train_dataset = train_loader.batch(model_params['batch_size'], drop_remainder=True)\n", "# data for evaluating\n", "eval_loader = create_dataset(data_params, is_eval=True, norm=args.norm, residual=args.residual_mode, scale=args.scale)\n", "eval_dataset = eval_loader.batch(1, drop_remainder=False)" ] }, { "cell_type": "markdown", "id": "5d2ab5c5", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Model construction" ] }, { "cell_type": "markdown", "id": "3d3af2ce", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Build a suitable ResUnet3D model by configuring the number of input channels (in.dims), the number of output channels (out.dims), the number of hidden channels in the first layer (base), and the initialization method." ] }, { "cell_type": "code", "execution_count": 6, "id": "d18b92e0", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "model = ResUnet3D(in_channels=model_params['in_dims'], base=model_params['base'], out_channels=model_params['out_dims'])" ] }, { "cell_type": "markdown", "id": "9591e72f", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Loss functions and optimizers" ] }, { "cell_type": "markdown", "id": "472b9ca7", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "To suppress error accumulation during inference, we add a gradient loss term with weak physical interpretability to the original strength loss function, where only the first derivative is computed." 
] }, { "cell_type": "code", "execution_count": 7, "id": "3d850094", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "if use_ascend:\n", " loss_scaler = DynamicLossScaler(1024, 2, 100)\n", " auto_mixed_precision(model, 'O1')\n", "else:\n", " loss_scaler = None\n", "\n", "# prepare optimizer and loss function\n", "steps_per_epoch = train_dataset.get_dataset_size()\n", "lr = get_warmup_cosine_annealing_lr(lr_init=optimizer_params['initial_lr'],\n", " last_epoch=optimizer_params['train_epochs'],\n", " steps_per_epoch=steps_per_epoch,\n", " warmup_epochs=optimizer_params['warmup_epochs'])\n", "optimizer = nn.Adam(params=model.trainable_params(), learning_rate=Tensor(lr))\n", "\n", "problem = UnsteadyFlow3D(network=model, loss_fn=model_params['loss_fn'], metric_fn=model_params['metric_fn'],\n", " loss_weight=model_params['loss_weight'], dynamic_flag=model_params['dynamic_flag'],\n", " t_in=data_params['t_in'], t_out=data_params['t_out'],\n", " residual=args.residual_mode, scale=args.scale)" ] }, { "cell_type": "markdown", "id": "d4a2c280", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Train function and data sink" ] }, { "cell_type": "markdown", "id": "decf9b01", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "With MindSpore>= 2.0.0, you can train neural networks using functional programming paradigms, and single-step training functions are decorated with jit. The data_sink function is used to transfer the step-by-step training function and training dataset." ] }, { "cell_type": "code", "execution_count": 8, "id": "2ad04b7f", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def forward_fn(train_inputs, train_label):\n", " loss = problem.get_loss(train_inputs, train_label)\n", " if use_ascend:\n", " loss = loss_scaler.scale(loss)\n", " return loss\n", "\n", "grad_fn = ops.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=False)\n", "\n", "@jit\n", "def train_step(train_inputs, train_label):\n", " loss, grads = grad_fn(train_inputs, train_label)\n", " if use_ascend:\n", " loss = loss_scaler.unscale(loss)\n", " is_finite = all_finite(grads)\n", " if is_finite:\n", " grads = loss_scaler.unscale(grads)\n", " loss = ops.depend(loss, optimizer(grads))\n", " loss_scaler.adjust(is_finite)\n", " else:\n", " loss = ops.depend(loss, optimizer(grads))\n", " return loss" ] }, { "cell_type": "code", "execution_count": 9, "id": "205ccec0", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "sink_process = mindspore.data_sink(train_step, train_dataset, sink_size=1)\n", "\n", "summary_dir = os.path.join(summary_params['summary_dir'], f\"norm-{args.norm}\",\n", " f\"resi-{args.residual_mode} scale-{args.scale} {model_params['loss_fn']}\")\n", "ckpt_dir = os.path.join(summary_dir, \"ckpt\")\n", "check_file_path(ckpt_dir)" ] }, { "cell_type": "markdown", "id": "f7e3bc04", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Model training" ] }, { "cell_type": "code", "execution_count": 10, "id": "7bc1ffac", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "pid: 3184621\n", "Running in GRAPH mode within GPU device, using device id: 0.\n", "epoch: 1 loss: 4.42149210 epoch time: 93.07s\n", "epoch: 2 loss: 3.66354370 epoch time: 80.61s\n", "epoch: 3 loss: 3.45540905 epoch time: 80.62s\n", "epoch: 4 loss: 3.41599178 epoch time: 80.57s\n", "epoch: 5 loss: 3.40474415 epoch time: 80.62s\n", "epoch: 6 loss: 3.39673615 
epoch time: 80.59s\n", "epoch: 7 loss: 3.39119720 epoch time: 80.59s\n", "epoch: 8 loss: 3.37303853 epoch time: 80.82s\n", "epoch: 9 loss: 3.31753325 epoch time: 80.71s\n", "epoch: 10 loss: 3.14250851 epoch time: 80.70s\n", "================================Start Evaluation================================\n", "mean metric: 1.36825517 eval total time:6.76\n", "=================================End Evaluation=================================\n", "epoch: 11 loss: 2.76249218 epoch time: 82.83s\n", "epoch: 12 loss: 2.37564182 epoch time: 81.61s\n", "epoch: 13 loss: 2.13626671 epoch time: 81.59s\n", "epoch: 14 loss: 2.00457954 epoch time: 81.75s\n", "epoch: 15 loss: 1.85440254 epoch time: 82.10s\n", "epoch: 16 loss: 1.85113728 epoch time: 80.90s\n", "epoch: 17 loss: 1.90822351 epoch time: 80.51s\n", "epoch: 18 loss: 1.78560519 epoch time: 80.52s\n", "epoch: 19 loss: 1.86209464 epoch time: 80.57s\n", "epoch: 20 loss: 1.79454994 epoch time: 80.61s\n", "================================Start Evaluation================================\n", "mean metric: 0.44466619 eval total time:5.47\n", "=================================End Evaluation=================================\n", "Start-to-End total training time: 1646.58s\n" ] } ], "source": [ "print(\"pid:\", os.getpid())\n", "print(f\"Running in {args.mode.upper()} mode within {args.device_target} device, using device id: {args.device_id}.\")\n", "start_time = time.time()\n", "\n", "for cur_epoch in range(1, optimizer_params['train_epochs'] + 1):\n", " local_time_beg = time.time()\n", " model.set_train(True)\n", " for _ in range(steps_per_epoch):\n", " cur_loss = sink_process()\n", "\n", " epoch_time = time.time() - local_time_beg\n", " print(f\"epoch: {cur_epoch:-4d} loss: {cur_loss.asnumpy():.8f} epoch time: {epoch_time:.2f}s\", flush=True)\n", "\n", " if cur_epoch % summary_params['eval_interval'] == 0:\n", " model.set_train(False)\n", " # A unified metric, rather than the total loss, is used as the evaluation standard\n", " calculate_metric(problem, eval_dataset)\n", " mindspore.save_checkpoint(model, os.path.join(ckpt_dir, f'ckpt-{cur_epoch}'))\n", "\n", "print(f\"Start-to-End total training time: {(time.time() - start_time):.2f}s\")" ] }, { "cell_type": "markdown", "id": "6a86cb87", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Model inference" ] }, { "cell_type": "markdown", "id": "301ed049", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "After completing the model training, run [eval.py](https://gitee.com/mindspore/mindscience/blob/r0.7/MindFlow/applications/data_driven/flow_around_sphere/eval.py), where the path of the model to be loaded is controlled through both the command line and the configuration file. This enables efficient and accurate long-term inference of the future 3D flow field from the initial flow field at any given moment." 
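] }, { "cell_type": "markdown", "id": "d2e4f201", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "The long-term prediction itself follows the iterative strategies described in the Problem Description. The cell below is a minimal sketch of such a rollout for the trained `model`, assuming an initial snapshot tensor shaped (1, C, D, H, W); the function name `rollout` and its argument names are introduced here for illustration only, and the actual logic in eval.py (for example, normalization handling) may differ." ] }, { "cell_type": "code", "execution_count": null, "id": "d2e4f202", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def rollout(network, init_field, num_steps, residual=True, scale=1000.0):\n", "    \"\"\"Iteratively predict future snapshots from a single initial flow field.\"\"\"\n", "    network.set_train(False)\n", "    preds = []\n", "    cur = init_field\n", "    for _ in range(num_steps):\n", "        out = network(cur)\n", "        if residual:\n", "            # indirect mode: the network predicts the amplified increment scale * dF\n", "            cur = cur + out / scale\n", "        else:\n", "            # direct mode: the network directly predicts the (normalized) next field\n", "            cur = out\n", "        preds.append(cur)\n", "    return preds\n", "\n", "# example call (hypothetical): preds = rollout(model, Tensor(first_snapshot[None, ...]), num_steps=100)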
] }, { "cell_type": "markdown", "id": "732c6cd0", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Result visualization" ] }, { "cell_type": "markdown", "id": "37bdc205", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "- The changes in the pressure contour map of the `Z=0` section during trained indirect model inference are as follows:" ] }, { "cell_type": "markdown", "id": "0ff1bac7", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "![P.gif](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/docs/mindflow/docs/source_en/data_driven/images/P.gif)" ] }, { "cell_type": "markdown", "id": "aa6cc050", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "The periodic flow characteristics of different physical quantities are quickly and faithfully predicted by the reduced-order model." ] }, { "cell_type": "markdown", "id": "0d9fb445", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "- A 3D vorticity isosurface ($Q=0.0005$) plot colored according to flow velocity after two cycles is shown below:" ] }, { "cell_type": "markdown", "id": "cf23f023", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "![Q.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.5.0/docs/mindflow/docs/source_en/data_driven/images/Q.png)" ] }, { "cell_type": "markdown", "id": "d887f316", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "The hairpin-like shape of the vortex is basically the same, but the result predicted by ResUnet3D is obviously rough." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.16" } }, "nbformat": 4, "nbformat_minor": 5 }