[{"data":1,"prerenderedAt":2355},["ShallowReactive",2],{"content-query-HgaS64OhKB":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":2349,"_id":2350,"_source":2351,"_file":2352,"_stem":2353,"_extension":2354},"/technology-blogs/en/2765","en",false,"","MindSpore Case Study | AnimeGAN2 for Animation Style Transfer","This case provides a comprehensive explanation of the AnimeGAN model, including a detailed walkthrough of its algorithms and an analysis of its strengths and weaknesses in animation style transfer.","2022-12-14","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/85d6a3cf3c474382a134a2df9b67cf43.png","technology-blogs",{"type":14,"children":15,"toc":2346},"root",[16,24,34,39,47,60,68,73,78,83,88,93,101,106,114,119,126,131,138,143,155,163,168,173,178,183,188,193,198,203,208,213,220,228,233,238,243,248,253,258,263,268,273,278,283,288,293,298,303,308,313,318,323,328,333,338,343,348,353,360,368,373,381,386,399,404,409,414,419,424,429,434,439,444,449,454,459,464,469,474,478,483,488,492,496,501,506,510,515,520,525,530,535,540,545,550,555,560,568,573,577,582,587,592,597,602,607,612,617,622,627,632,637,642,647,652,657,661,666,671,675,680,684,688,693,697,702,706,711,715,720,724,728,733,737,742,750,755,772,777,782,787,792,797,802,807,812,817,822,827,832,837,842,847,852,857,862,867,872,877,882,887,892,897,902,907,912,917,922,927,932,936,941,946,951,956,961,966,971,976,981,986,991,996,1001,1006,1011,1016,1021,1026,1031,1036,1040,1048,1053,1058,1062,1067,1071,1075,1079,1084,1088,1092,1096,1100,1104,1108,1112,1116,1120,1124,1129,1134,1138,1142,1147,1152,1157,1162,1167,1172,1177,1182,1187,1192,1197,1201,1206,1210,1214,1218,1223,1227,1232,1237,1241,1246,1250,1255,1259,1263,1267,1275,1287,1292,1297,1302,1307,1312,1317,1321,1326,1331,1336,1344,1349,1353,1357,1362,1366,1370,1375,1380,1385,1390,1395,1400,1405,1410,1415,1419,1423,1428,1433,1438,1443,1447,1452,1457,1462,
1467,1472,1477,1482,1486,1490,1495,1499,1504,1509,1514,1519,1524,1529,1534,1539,1544,1549,1554,1559,1563,1568,1573,1578,1583,1588,1593,1598,1603,1608,1613,1618,1623,1628,1633,1637,1641,1645,1650,1655,1660,1665,1670,1675,1680,1685,1690,1695,1700,1705,1710,1715,1720,1725,1730,1735,1740,1745,1750,1755,1760,1765,1770,1775,1780,1785,1790,1795,1800,1805,1810,1815,1822,1830,1835,1839,1843,1847,1851,1855,1860,1865,1869,1873,1878,1882,1887,1891,1895,1900,1905,1910,1914,1919,1924,1929,1934,1938,1943,1948,1953,1958,1963,1968,1973,1978,1982,1987,1992,1997,2002,2007,2012,2017,2024,2029,2036,2044,2049,2053,2057,2061,2065,2069,2073,2077,2081,2086,2091,2096,2101,2106,2110,2115,2120,2125,2130,2135,2139,2143,2147,2152,2157,2162,2167,2172,2177,2181,2185,2189,2194,2199,2204,2209,2214,2219,2224,2229,2234,2239,2244,2249,2254,2259,2263,2268,2273,2278,2283,2288,2293,2298,2303,2311,2318,2326,2331],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"mindspore-case-study-animegan2-for-animation-style-transfer",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":17,"tag":29,"props":30,"children":31},"strong",{},[32],{"type":23,"value":33},"Author: Zhang Tengfei | School: Tianjin University",{"type":17,"tag":25,"props":35,"children":36},{},[37],{"type":23,"value":38},"Animation is a common art form in our daily life, widely used in advertising, movies, and children's education, among other fields. At present, animation production primarily depends on manual implementation, which is labor-intensive and requires highly specialized artistic skills. For animation artists, creating high-quality animation works requires careful consideration of lines, textures, colors, and shadows, making the whole process both challenging and time-consuming. Therefore, automated technology capable of transforming real-life photos into high-quality animation-style images holds significant value. 
It not only enables artists to concentrate more on their creative work, but also simplifies the process for ordinary people to create their own animation works. This case provides a comprehensive explanation of the AnimeGAN model, including a detailed walkthrough of its algorithms and an analysis of its strengths and weaknesses in animation style transfer.",{"type":17,"tag":25,"props":40,"children":41},{},[42],{"type":17,"tag":29,"props":43,"children":44},{},[45],{"type":23,"value":46},"Model Overview",{"type":17,"tag":25,"props":48,"children":49},{},[50,52,58],{"type":23,"value":51},"AnimeGAN is a study from Wuhan University and Hubei University of Technology. It combines neural style transfer with a generative adversarial network (GAN) to animate real-life images. This model was proposed in the paper ",{"type":17,"tag":53,"props":54,"children":55},"em",{},[56],{"type":23,"value":57},"AnimeGAN: A Novel Lightweight GAN for Photo Animation",{"type":23,"value":59},". The generator is a symmetric encoder-decoder structure that comprises standard convolutions, depthwise separable convolutions, inverted residual blocks (IRBs), and upsampling and downsampling modules. The discriminator consists of standard convolutions.",{"type":17,"tag":25,"props":61,"children":62},{},[63],{"type":17,"tag":29,"props":64,"children":65},{},[66],{"type":23,"value":67},"Network Features",{"type":17,"tag":25,"props":69,"children":70},{},[71],{"type":23,"value":72},"AnimeGAN has the following improvements:",{"type":17,"tag":25,"props":74,"children":75},{},[76],{"type":23,"value":77},"1. The problem of high-frequency artifacts in generated images is solved.",{"type":17,"tag":25,"props":79,"children":80},{},[81],{"type":23,"value":82},"2. The model is easy to train and can achieve the effect described in the paper.",{"type":17,"tag":25,"props":84,"children":85},{},[86],{"type":23,"value":87},"3. 
The number of parameters in the generator network is further reduced (the generator is now only 8.07 MB).",{"type":17,"tag":25,"props":89,"children":90},{},[91],{"type":23,"value":92},"4. It uses high-quality style data from Blu-ray (BD) movies as much as possible.",{"type":17,"tag":25,"props":94,"children":95},{},[96],{"type":17,"tag":29,"props":97,"children":98},{},[99],{"type":23,"value":100},"Data Preparation",{"type":17,"tag":25,"props":102,"children":103},{},[104],{"type":23,"value":105},"The dataset contains 6656 real landscape images and three animation styles: Hayao, Shinkai, and Paprika. Each animation style is generated by randomly cropping video frames from the corresponding movie. In addition, the dataset also includes images of various sizes for testing purposes. The following figure shows the dataset information.",{"type":17,"tag":25,"props":107,"children":108},{},[109],{"type":17,"tag":110,"props":111,"children":113},"img",{"alt":7,"src":112},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/a38151b8f57f450e948a29aedc2ba606.png",[],{"type":17,"tag":25,"props":115,"children":116},{},[117],{"type":23,"value":118},"The following shows some images in the dataset.",{"type":17,"tag":25,"props":120,"children":121},{},[122],{"type":17,"tag":110,"props":123,"children":125},{"alt":7,"src":124},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/c26047e7f0124cf58822ea0a93efb509.png",[],{"type":17,"tag":25,"props":127,"children":128},{},[129],{"type":23,"value":130},"After the dataset is downloaded and decompressed, its directory structure is as follows:",{"type":17,"tag":25,"props":132,"children":133},{},[134],{"type":17,"tag":110,"props":135,"children":137},{"alt":7,"src":136},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/abb69cae3a9441cdb9be1993117864a1.png",[],{"type":17,"tag":25,"props":139,"children":140},{},[141],{"type":23,"value":142},"This model uses the VGG19 network for image feature 
extraction and loss function calculation, so we need to load the parameters of the pre-trained network.",{"type":17,"tag":25,"props":144,"children":145},{},[146,148,153],{"type":23,"value":147},"After downloading the pre-trained VGG19 network, place the ",{"type":17,"tag":29,"props":149,"children":150},{},[151],{"type":23,"value":152},"vgg.ckpt",{"type":23,"value":154}," file in the same directory as this file.",{"type":17,"tag":25,"props":156,"children":157},{},[158],{"type":17,"tag":29,"props":159,"children":160},{},[161],{"type":23,"value":162},"Data Preprocessing",{"type":17,"tag":25,"props":164,"children":165},{},[166],{"type":23,"value":167},"Animation images with smooth edges are required for loss function calculation. The dataset mentioned above already contains such images. To create your own animation dataset, you can use the following code to generate the required animation images with smooth edges:",{"type":17,"tag":25,"props":169,"children":170},{},[171],{"type":23,"value":172},"from src.animeganv2_utils.edge_smooth import make_edge_smooth",{"type":17,"tag":25,"props":174,"children":175},{},[176],{"type":23,"value":177},"# Animation image directory",{"type":17,"tag":25,"props":179,"children":180},{},[181],{"type":23,"value":182},"style_dir = './dataset/Sakura/style'",{"type":17,"tag":25,"props":184,"children":185},{},[186],{"type":23,"value":187},"# Output image directory",{"type":17,"tag":25,"props":189,"children":190},{},[191],{"type":23,"value":192},"output_dir = './dataset/Sakura/smooth'",{"type":17,"tag":25,"props":194,"children":195},{},[196],{"type":23,"value":197},"# Size of each output image",{"type":17,"tag":25,"props":199,"children":200},{},[201],{"type":23,"value":202},"size = 256",{"type":17,"tag":25,"props":204,"children":205},{},[206],{"type":23,"value":207},"# Smooth the images. 
The output result is stored in the smooth folder.",{"type":17,"tag":25,"props":209,"children":210},{},[211],{"type":23,"value":212},"make_edge_smooth(style_dir, output_dir, size)",{"type":17,"tag":25,"props":214,"children":215},{},[216],{"type":17,"tag":110,"props":217,"children":219},{"alt":7,"src":218},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/1bfa870f9f11488288e8d5ce0ab5cddd.png",[],{"type":17,"tag":25,"props":221,"children":222},{},[223],{"type":17,"tag":29,"props":224,"children":225},{},[226],{"type":23,"value":227},"Training Dataset Visualization",{"type":17,"tag":25,"props":229,"children":230},{},[231],{"type":23,"value":232},"import argparse",{"type":17,"tag":25,"props":234,"children":235},{},[236],{"type":23,"value":237},"import matplotlib.pyplot as plt",{"type":17,"tag":25,"props":239,"children":240},{},[241],{"type":23,"value":242},"from src.process_datasets.animeganv2_dataset import AnimeGANDataset",{"type":17,"tag":25,"props":244,"children":245},{},[246],{"type":23,"value":247},"import numpy as np",{"type":17,"tag":25,"props":249,"children":250},{},[251],{"type":23,"value":252},"# Load parameters.",{"type":17,"tag":25,"props":254,"children":255},{},[256],{"type":23,"value":257},"parser = argparse.ArgumentParser()",{"type":17,"tag":25,"props":259,"children":260},{},[261],{"type":23,"value":262},"parser.add_argument('--dataset', default='Hayao', choices=['Hayao', 'Shinkai', 'Paprika'], type=str)",{"type":17,"tag":25,"props":264,"children":265},{},[266],{"type":23,"value":267},"parser.add_argument('--data_dir', default='./dataset', type=str)",{"type":17,"tag":25,"props":269,"children":270},{},[271],{"type":23,"value":272},"parser.add_argument('--batch_size', default=4, type=int)",{"type":17,"tag":25,"props":274,"children":275},{},[276],{"type":23,"value":277},"parser.add_argument('--debug_samples', default=0, 
type=int)",{"type":17,"tag":25,"props":279,"children":280},{},[281],{"type":23,"value":282},"parser.add_argument('--num_parallel_workers', default=1, type=int)",{"type":17,"tag":25,"props":284,"children":285},{},[286],{"type":23,"value":287},"args = parser.parse_args(args=[])",{"type":17,"tag":25,"props":289,"children":290},{},[291],{"type":23,"value":292},"plt.figure()",{"type":17,"tag":25,"props":294,"children":295},{},[296],{"type":23,"value":297},"# Load the dataset.",{"type":17,"tag":25,"props":299,"children":300},{},[301],{"type":23,"value":302},"data = AnimeGANDataset(args)",{"type":17,"tag":25,"props":304,"children":305},{},[306],{"type":23,"value":307},"data = data.run()",{"type":17,"tag":25,"props":309,"children":310},{},[311],{"type":23,"value":312},"iter = next(data.create_tuple_iterator())",{"type":17,"tag":25,"props":314,"children":315},{},[316],{"type":23,"value":317},"# Perform cyclic processing.",{"type":17,"tag":25,"props":319,"children":320},{},[321],{"type":23,"value":322},"for i in range(1, 5):",{"type":17,"tag":25,"props":324,"children":325},{},[326],{"type":23,"value":327},"plt.subplot(1, 4, i)",{"type":17,"tag":25,"props":329,"children":330},{},[331],{"type":23,"value":332},"temp = np.clip(iter[i - 1][0].asnumpy().transpose(2, 1, 0), 0, 1)",{"type":17,"tag":25,"props":334,"children":335},{},[336],{"type":23,"value":337},"plt.imshow(temp)",{"type":17,"tag":25,"props":339,"children":340},{},[341],{"type":23,"value":342},"plt.axis(\"off\")",{"type":17,"tag":25,"props":344,"children":345},{},[346],{"type":23,"value":347},"Mean(B, G, R) of Hayao are [-4.4346958 -8.66591597 13.10061177]",{"type":17,"tag":25,"props":349,"children":350},{},[351],{"type":23,"value":352},"Dataset: real 6656 style 1792, smooth 
1792",{"type":17,"tag":25,"props":354,"children":355},{},[356],{"type":17,"tag":110,"props":357,"children":359},{"alt":7,"src":358},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/c773ab52c9f04d2b8921dc4e099cbbc4.png",[],{"type":17,"tag":25,"props":361,"children":362},{},[363],{"type":17,"tag":29,"props":364,"children":365},{},[366],{"type":23,"value":367},"Network Building",{"type":17,"tag":25,"props":369,"children":370},{},[371],{"type":23,"value":372},"After data processing, let's build the network. According to the AnimeGAN paper, all model weights should be randomly initialized according to a normal distribution with a mean of 0 and a sigma of 0.02.",{"type":17,"tag":25,"props":374,"children":375},{},[376],{"type":17,"tag":29,"props":377,"children":378},{},[379],{"type":23,"value":380},"Generator",{"type":17,"tag":25,"props":382,"children":383},{},[384],{"type":23,"value":385},"The function of generator G is to transform real-life photos into animation-style images. In practice, this is implemented with standard convolutions, depthwise separable convolutions, IRBs, and upsampling and downsampling modules. 
The network architecture is as follows.",{"type":17,"tag":25,"props":387,"children":388},{},[389,393,395],{"type":17,"tag":110,"props":390,"children":392},{"alt":7,"src":391},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/903130577e1b450bbc218ca11a0345fe.png",[],{"type":23,"value":394}," ",{"type":17,"tag":110,"props":396,"children":398},{"alt":7,"src":397},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/08e5a6a2f4ca4a86b55f7f9ff83a940d.png",[],{"type":17,"tag":25,"props":400,"children":401},{},[402],{"type":23,"value":403},"import os",{"type":17,"tag":25,"props":405,"children":406},{},[407],{"type":23,"value":408},"import mindspore.nn as nn",{"type":17,"tag":25,"props":410,"children":411},{},[412],{"type":23,"value":413},"from src.models.upsample import UpSample",{"type":17,"tag":25,"props":415,"children":416},{},[417],{"type":23,"value":418},"from src.models.conv2d_block import ConvBlock",{"type":17,"tag":25,"props":420,"children":421},{},[422],{"type":23,"value":423},"from src.models.inverted_residual_block import InvertedResBlock",{"type":17,"tag":25,"props":425,"children":426},{},[427],{"type":23,"value":428},"class Generator(nn.Cell):",{"type":17,"tag":25,"props":430,"children":431},{},[432],{"type":23,"value":433},"\"\"\"AnimeGAN network generator\"\"\"",{"type":17,"tag":25,"props":435,"children":436},{},[437],{"type":23,"value":438},"def __init__(self):",{"type":17,"tag":25,"props":440,"children":441},{},[442],{"type":23,"value":443},"super(Generator, self).__init__()",{"type":17,"tag":25,"props":445,"children":446},{},[447],{"type":23,"value":448},"has_bias = False",{"type":17,"tag":25,"props":450,"children":451},{},[452],{"type":23,"value":453},"self.generator = nn.SequentialCell()",{"type":17,"tag":25,"props":455,"children":456},{},[457],{"type":23,"value":458},"self.generator.append(ConvBlock(3, 32, 
kernel_size=7))",{"type":17,"tag":25,"props":460,"children":461},{},[462],{"type":23,"value":463},"self.generator.append(ConvBlock(32, 64, stride=2))",{"type":17,"tag":25,"props":465,"children":466},{},[467],{"type":23,"value":468},"self.generator.append(ConvBlock(64, 128, stride=2))",{"type":17,"tag":25,"props":470,"children":471},{},[472],{"type":23,"value":473},"self.generator.append(ConvBlock(128, 128))",{"type":17,"tag":25,"props":475,"children":476},{},[477],{"type":23,"value":473},{"type":17,"tag":25,"props":479,"children":480},{},[481],{"type":23,"value":482},"self.generator.append(InvertedResBlock(128, 256))",{"type":17,"tag":25,"props":484,"children":485},{},[486],{"type":23,"value":487},"self.generator.append(InvertedResBlock(256, 256))",{"type":17,"tag":25,"props":489,"children":490},{},[491],{"type":23,"value":487},{"type":17,"tag":25,"props":493,"children":494},{},[495],{"type":23,"value":487},{"type":17,"tag":25,"props":497,"children":498},{},[499],{"type":23,"value":500},"self.generator.append(ConvBlock(256, 128))",{"type":17,"tag":25,"props":502,"children":503},{},[504],{"type":23,"value":505},"self.generator.append(UpSample(128, 128))",{"type":17,"tag":25,"props":507,"children":508},{},[509],{"type":23,"value":473},{"type":17,"tag":25,"props":511,"children":512},{},[513],{"type":23,"value":514},"self.generator.append(UpSample(128, 64))",{"type":17,"tag":25,"props":516,"children":517},{},[518],{"type":23,"value":519},"self.generator.append(ConvBlock(64, 64))",{"type":17,"tag":25,"props":521,"children":522},{},[523],{"type":23,"value":524},"self.generator.append(ConvBlock(64, 32, kernel_size=7))",{"type":17,"tag":25,"props":526,"children":527},{},[528],{"type":23,"value":529},"self.generator.append(",{"type":17,"tag":25,"props":531,"children":532},{},[533],{"type":23,"value":534},"nn.Conv2d(32, 3, kernel_size=1, stride=1, pad_mode='same', 
padding=0,",{"type":17,"tag":25,"props":536,"children":537},{},[538],{"type":23,"value":539},"weight_init=Normal(mean=0, sigma=0.02), has_bias=has_bias))",{"type":17,"tag":25,"props":541,"children":542},{},[543],{"type":23,"value":544},"self.generator.append(nn.Tanh())",{"type":17,"tag":25,"props":546,"children":547},{},[548],{"type":23,"value":549},"def construct(self, x):",{"type":17,"tag":25,"props":551,"children":552},{},[553],{"type":23,"value":554},"out1 = self.generator(x)",{"type":17,"tag":25,"props":556,"children":557},{},[558],{"type":23,"value":559},"return out1",{"type":17,"tag":25,"props":561,"children":562},{},[563],{"type":17,"tag":29,"props":564,"children":565},{},[566],{"type":23,"value":567},"Discriminator",{"type":17,"tag":25,"props":569,"children":570},{},[571],{"type":23,"value":572},"Discriminator D is essentially a binary classification network that outputs the probability that an input image is a real-life photo. It processes the image through a series of Conv2d, LeakyReLU, and InstanceNorm layers, and finally outputs the probability through a Conv2d layer.",{"type":17,"tag":25,"props":574,"children":575},{},[576],{"type":23,"value":408},{"type":17,"tag":25,"props":578,"children":579},{},[580],{"type":23,"value":581},"from mindspore.common.initializer import Normal",{"type":17,"tag":25,"props":583,"children":584},{},[585],{"type":23,"value":586},"class Discriminator(nn.Cell):",{"type":17,"tag":25,"props":588,"children":589},{},[590],{"type":23,"value":591},"\"\"\"AnimeGAN network discriminator\"\"\"",{"type":17,"tag":25,"props":593,"children":594},{},[595],{"type":23,"value":596},"def __init__(self, args):",{"type":17,"tag":25,"props":598,"children":599},{},[600],{"type":23,"value":601},"super(Discriminator, self).__init__()",{"type":17,"tag":25,"props":603,"children":604},{},[605],{"type":23,"value":606},"self.name = f'discriminator_{args.dataset}'",{"type":17,"tag":25,"props":608,"children":609},{},[610],{"type":23,"value":611},"self.has_bias 
= False",{"type":17,"tag":25,"props":613,"children":614},{},[615],{"type":23,"value":616},"channels = args.ch // 2",{"type":17,"tag":25,"props":618,"children":619},{},[620],{"type":23,"value":621},"layers = [",{"type":17,"tag":25,"props":623,"children":624},{},[625],{"type":23,"value":626},"nn.Conv2d(3, channels, kernel_size=3, stride=1, pad_mode='same', padding=0,",{"type":17,"tag":25,"props":628,"children":629},{},[630],{"type":23,"value":631},"weight_init=Normal(mean=0, sigma=0.02), has_bias=self.has_bias),",{"type":17,"tag":25,"props":633,"children":634},{},[635],{"type":23,"value":636},"nn.LeakyReLU(alpha=0.2)",{"type":17,"tag":25,"props":638,"children":639},{},[640],{"type":23,"value":641},"]",{"type":17,"tag":25,"props":643,"children":644},{},[645],{"type":23,"value":646},"for _ in range(1, args.n_dis):",{"type":17,"tag":25,"props":648,"children":649},{},[650],{"type":23,"value":651},"layers += [",{"type":17,"tag":25,"props":653,"children":654},{},[655],{"type":23,"value":656},"nn.Conv2d(channels, channels * 2, kernel_size=3, stride=2, pad_mode='same', padding=0,",{"type":17,"tag":25,"props":658,"children":659},{},[660],{"type":23,"value":631},{"type":17,"tag":25,"props":662,"children":663},{},[664],{"type":23,"value":665},"nn.LeakyReLU(alpha=0.2),",{"type":17,"tag":25,"props":667,"children":668},{},[669],{"type":23,"value":670},"nn.Conv2d(channels * 2, channels * 4, kernel_size=3, stride=1, pad_mode='same', padding=0,",{"type":17,"tag":25,"props":672,"children":673},{},[674],{"type":23,"value":631},{"type":17,"tag":25,"props":676,"children":677},{},[678],{"type":23,"value":679},"nn.InstanceNorm2d(channels * 4, affine=False),",{"type":17,"tag":25,"props":681,"children":682},{},[683],{"type":23,"value":665},{"type":17,"tag":25,"props":685,"children":686},{},[687],{"type":23,"value":641},{"type":17,"tag":25,"props":689,"children":690},{},[691],{"type":23,"value":692},"channels *= 
4",{"type":17,"tag":25,"props":694,"children":695},{},[696],{"type":23,"value":651},{"type":17,"tag":25,"props":698,"children":699},{},[700],{"type":23,"value":701},"nn.Conv2d(channels, channels, kernel_size=3, stride=1, pad_mode='same', padding=0,",{"type":17,"tag":25,"props":703,"children":704},{},[705],{"type":23,"value":631},{"type":17,"tag":25,"props":707,"children":708},{},[709],{"type":23,"value":710},"nn.InstanceNorm2d(channels, affine=False),",{"type":17,"tag":25,"props":712,"children":713},{},[714],{"type":23,"value":665},{"type":17,"tag":25,"props":716,"children":717},{},[718],{"type":23,"value":719},"nn.Conv2d(channels, 1, kernel_size=3, stride=1, pad_mode='same', padding=0,",{"type":17,"tag":25,"props":721,"children":722},{},[723],{"type":23,"value":631},{"type":17,"tag":25,"props":725,"children":726},{},[727],{"type":23,"value":641},{"type":17,"tag":25,"props":729,"children":730},{},[731],{"type":23,"value":732},"self.discriminate = nn.SequentialCell(layers)",{"type":17,"tag":25,"props":734,"children":735},{},[736],{"type":23,"value":549},{"type":17,"tag":25,"props":738,"children":739},{},[740],{"type":23,"value":741},"return self.discriminate(x)",{"type":17,"tag":25,"props":743,"children":744},{},[745],{"type":17,"tag":29,"props":746,"children":747},{},[748],{"type":23,"value":749},"Loss Function",{"type":17,"tag":25,"props":751,"children":752},{},[753],{"type":23,"value":754},"The loss function includes the adversarial loss, content loss, grayscale style loss, and color reconstruction loss. Different losses have different weight coefficients. 
The overall loss function is expressed as follows:",{"type":17,"tag":25,"props":756,"children":757},{},[758,762,763,767,768],{"type":17,"tag":110,"props":759,"children":761},{"alt":7,"src":760},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/d38c355eaadc4823ab7a83636e1cca13.png",[],{"type":23,"value":394},{"type":17,"tag":110,"props":764,"children":766},{"alt":7,"src":765},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/f641c1eee1224112ab52683afe26f2dd.png",[],{"type":23,"value":394},{"type":17,"tag":110,"props":769,"children":771},{"alt":7,"src":770},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/e0d04ae3c18542f1bedf9e5282344d7a.png",[],{"type":17,"tag":25,"props":773,"children":774},{},[775],{"type":23,"value":776},"import mindspore",{"type":17,"tag":25,"props":778,"children":779},{},[780],{"type":23,"value":781},"from src.losses.gram_loss import GramLoss",{"type":17,"tag":25,"props":783,"children":784},{},[785],{"type":23,"value":786},"from src.losses.color_loss import ColorLoss",{"type":17,"tag":25,"props":788,"children":789},{},[790],{"type":23,"value":791},"from src.losses.vgg19 import Vgg",{"type":17,"tag":25,"props":793,"children":794},{},[795],{"type":23,"value":796},"def vgg19(args, num_classes=1000):",{"type":17,"tag":25,"props":798,"children":799},{},[800],{"type":23,"value":801},"\"\"\"Load the parameters of the pre-trained VGG19 model.\"\"\"",{"type":17,"tag":25,"props":803,"children":804},{},[805],{"type":23,"value":806},"# Build the network.",{"type":17,"tag":25,"props":808,"children":809},{},[810],{"type":23,"value":811},"net = Vgg([64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512], num_classes=num_classes,",{"type":17,"tag":25,"props":813,"children":814},{},[815],{"type":23,"value":816},"batch_norm=True)",{"type":17,"tag":25,"props":818,"children":819},{},[820],{"type":23,"value":821},"# Load the 
model.",{"type":17,"tag":25,"props":823,"children":824},{},[825],{"type":23,"value":826},"param_dict = load_checkpoint(args.vgg19_path)",{"type":17,"tag":25,"props":828,"children":829},{},[830],{"type":23,"value":831},"load_param_into_net(net, param_dict)",{"type":17,"tag":25,"props":833,"children":834},{},[835],{"type":23,"value":836},"net.requires_grad = False",{"type":17,"tag":25,"props":838,"children":839},{},[840],{"type":23,"value":841},"return net",{"type":17,"tag":25,"props":843,"children":844},{},[845],{"type":23,"value":846},"class GeneratorLoss(nn.Cell):",{"type":17,"tag":25,"props":848,"children":849},{},[850],{"type":23,"value":851},"\"\"\"Connect the generator and loss function.\"\"\"",{"type":17,"tag":25,"props":853,"children":854},{},[855],{"type":23,"value":856},"def __init__(self, discriminator, generator, args):",{"type":17,"tag":25,"props":858,"children":859},{},[860],{"type":23,"value":861},"super(GeneratorLoss, self).__init__(auto_prefix=True)",{"type":17,"tag":25,"props":863,"children":864},{},[865],{"type":23,"value":866},"self.discriminator = discriminator",{"type":17,"tag":25,"props":868,"children":869},{},[870],{"type":23,"value":871},"self.generator = generator",{"type":17,"tag":25,"props":873,"children":874},{},[875],{"type":23,"value":876},"self.content_loss = nn.L1Loss()",{"type":17,"tag":25,"props":878,"children":879},{},[880],{"type":23,"value":881},"self.gram_loss = GramLoss()",{"type":17,"tag":25,"props":883,"children":884},{},[885],{"type":23,"value":886},"self.color_loss = ColorLoss()",{"type":17,"tag":25,"props":888,"children":889},{},[890],{"type":23,"value":891},"self.wadvg = args.wadvg",{"type":17,"tag":25,"props":893,"children":894},{},[895],{"type":23,"value":896},"self.wadvd = args.wadvd",{"type":17,"tag":25,"props":898,"children":899},{},[900],{"type":23,"value":901},"self.wcon = args.wcon",{"type":17,"tag":25,"props":903,"children":904},{},[905],{"type":23,"value":906},"self.wgra = 
args.wgra",{"type":17,"tag":25,"props":908,"children":909},{},[910],{"type":23,"value":911},"self.wcol = args.wcol",{"type":17,"tag":25,"props":913,"children":914},{},[915],{"type":23,"value":916},"self.vgg19 = vgg19(args)",{"type":17,"tag":25,"props":918,"children":919},{},[920],{"type":23,"value":921},"self.adv_type = args.gan_loss",{"type":17,"tag":25,"props":923,"children":924},{},[925],{"type":23,"value":926},"self.bce_loss = nn.BCELoss()",{"type":17,"tag":25,"props":928,"children":929},{},[930],{"type":23,"value":931},"self.relu = nn.ReLU()",{"type":17,"tag":25,"props":933,"children":934},{},[935],{"type":23,"value":921},{"type":17,"tag":25,"props":937,"children":938},{},[939],{"type":23,"value":940},"def construct(self, img, anime_gray):",{"type":17,"tag":25,"props":942,"children":943},{},[944],{"type":23,"value":945},"\"\"\"Construct the loss calculation structure of the generator.\"\"\"",{"type":17,"tag":25,"props":947,"children":948},{},[949],{"type":23,"value":950},"fake_img = self.generator(img)",{"type":17,"tag":25,"props":952,"children":953},{},[954],{"type":23,"value":955},"fake_d = self.discriminator(fake_img)",{"type":17,"tag":25,"props":957,"children":958},{},[959],{"type":23,"value":960},"fake_feat = self.vgg19(fake_img)",{"type":17,"tag":25,"props":962,"children":963},{},[964],{"type":23,"value":965},"anime_feat = self.vgg19(anime_gray)",{"type":17,"tag":25,"props":967,"children":968},{},[969],{"type":23,"value":970},"img_feat = self.vgg19(img)",{"type":17,"tag":25,"props":972,"children":973},{},[974],{"type":23,"value":975},"result = self.wadvg * self.adv_loss_g(fake_d) + \\",{"type":17,"tag":25,"props":977,"children":978},{},[979],{"type":23,"value":980},"self.wcon * self.content_loss(img_feat, fake_feat) + \\",{"type":17,"tag":25,"props":982,"children":983},{},[984],{"type":23,"value":985},"self.wgra * self.gram_loss(anime_feat, fake_feat) + \\",{"type":17,"tag":25,"props":987,"children":988},{},[989],{"type":23,"value":990},"self.wcol * 
self.color_loss(img, fake_img)",{"type":17,"tag":25,"props":992,"children":993},{},[994],{"type":23,"value":995},"return result",{"type":17,"tag":25,"props":997,"children":998},{},[999],{"type":23,"value":1000},"def adv_loss_g(self, pred):",{"type":17,"tag":25,"props":1002,"children":1003},{},[1004],{"type":23,"value":1005},"\"\"\"Select a loss function type.\"\"\"",{"type":17,"tag":25,"props":1007,"children":1008},{},[1009],{"type":23,"value":1010},"if self.adv_type == 'hinge':",{"type":17,"tag":25,"props":1012,"children":1013},{},[1014],{"type":23,"value":1015},"return -mindspore.numpy.mean(pred)",{"type":17,"tag":25,"props":1017,"children":1018},{},[1019],{"type":23,"value":1020},"if self.adv_type == 'lsgan':",{"type":17,"tag":25,"props":1022,"children":1023},{},[1024],{"type":23,"value":1025},"return mindspore.numpy.mean(mindspore.numpy.square(pred - 1.0))",{"type":17,"tag":25,"props":1027,"children":1028},{},[1029],{"type":23,"value":1030},"if self.adv_type == 'normal':",{"type":17,"tag":25,"props":1032,"children":1033},{},[1034],{"type":23,"value":1035},"return self.bce_loss(pred, mindspore.numpy.zeros_like(pred))",{"type":17,"tag":25,"props":1037,"children":1038},{},[1039],{"type":23,"value":1025},{"type":17,"tag":25,"props":1041,"children":1042},{},[1043],{"type":17,"tag":29,"props":1044,"children":1045},{},[1046],{"type":23,"value":1047},"Discriminator Loss",{"type":17,"tag":25,"props":1049,"children":1050},{},[1051],{"type":23,"value":1052},"class DiscriminatorLoss(nn.Cell):",{"type":17,"tag":25,"props":1054,"children":1055},{},[1056],{"type":23,"value":1057},"\"\"\"Connect the discriminator and loss function.\"\"\"",{"type":17,"tag":25,"props":1059,"children":1060},{},[1061],{"type":23,"value":856},{"type":17,"tag":25,"props":1063,"children":1064},{},[1065],{"type":23,"value":1066},"nn.Cell.__init__(self, 
auto_prefix=True)",{"type":17,"tag":25,"props":1068,"children":1069},{},[1070],{"type":23,"value":866},{"type":17,"tag":25,"props":1072,"children":1073},{},[1074],{"type":23,"value":871},{"type":17,"tag":25,"props":1076,"children":1077},{},[1078],{"type":23,"value":876},{"type":17,"tag":25,"props":1080,"children":1081},{},[1082],{"type":23,"value":1083},"self.gram_loss = nn.L1Loss()",{"type":17,"tag":25,"props":1085,"children":1086},{},[1087],{"type":23,"value":886},{"type":17,"tag":25,"props":1089,"children":1090},{},[1091],{"type":23,"value":891},{"type":17,"tag":25,"props":1093,"children":1094},{},[1095],{"type":23,"value":896},{"type":17,"tag":25,"props":1097,"children":1098},{},[1099],{"type":23,"value":901},{"type":17,"tag":25,"props":1101,"children":1102},{},[1103],{"type":23,"value":906},{"type":17,"tag":25,"props":1105,"children":1106},{},[1107],{"type":23,"value":911},{"type":17,"tag":25,"props":1109,"children":1110},{},[1111],{"type":23,"value":916},{"type":17,"tag":25,"props":1113,"children":1114},{},[1115],{"type":23,"value":921},{"type":17,"tag":25,"props":1117,"children":1118},{},[1119],{"type":23,"value":926},{"type":17,"tag":25,"props":1121,"children":1122},{},[1123],{"type":23,"value":931},{"type":17,"tag":25,"props":1125,"children":1126},{},[1127],{"type":23,"value":1128},"def construct(self, img, anime, anime_gray, anime_smt_gray):",{"type":17,"tag":25,"props":1130,"children":1131},{},[1132],{"type":23,"value":1133},"\"\"\"Construct the loss calculation structure of the discriminator.\"\"\"",{"type":17,"tag":25,"props":1135,"children":1136},{},[1137],{"type":23,"value":950},{"type":17,"tag":25,"props":1139,"children":1140},{},[1141],{"type":23,"value":955},{"type":17,"tag":25,"props":1143,"children":1144},{},[1145],{"type":23,"value":1146},"real_anime_d = self.discriminator(anime)",{"type":17,"tag":25,"props":1148,"children":1149},{},[1150],{"type":23,"value":1151},"real_anime_gray_d = 
self.discriminator(anime_gray)",{"type":17,"tag":25,"props":1153,"children":1154},{},[1155],{"type":23,"value":1156},"real_anime_smt_gray_d = self.discriminator(anime_smt_gray)",{"type":17,"tag":25,"props":1158,"children":1159},{},[1160],{"type":23,"value":1161},"return self.wadvd * (",{"type":17,"tag":25,"props":1163,"children":1164},{},[1165],{"type":23,"value":1166},"1.7 * self.adv_loss_d_real(real_anime_d) +",{"type":17,"tag":25,"props":1168,"children":1169},{},[1170],{"type":23,"value":1171},"1.7 * self.adv_loss_d_fake(fake_d) +",{"type":17,"tag":25,"props":1173,"children":1174},{},[1175],{"type":23,"value":1176},"1.7 * self.adv_loss_d_fake(real_anime_gray_d) +",{"type":17,"tag":25,"props":1178,"children":1179},{},[1180],{"type":23,"value":1181},"1.0 * self.adv_loss_d_fake(real_anime_smt_gray_d)",{"type":17,"tag":25,"props":1183,"children":1184},{},[1185],{"type":23,"value":1186},")",{"type":17,"tag":25,"props":1188,"children":1189},{},[1190],{"type":23,"value":1191},"def adv_loss_d_real(self, pred):",{"type":17,"tag":25,"props":1193,"children":1194},{},[1195],{"type":23,"value":1196},"\"\"\"Loss type of a real animation image\"\"\"",{"type":17,"tag":25,"props":1198,"children":1199},{},[1200],{"type":23,"value":1010},{"type":17,"tag":25,"props":1202,"children":1203},{},[1204],{"type":23,"value":1205},"return mindspore.numpy.mean(self.relu(1.0 - pred))",{"type":17,"tag":25,"props":1207,"children":1208},{},[1209],{"type":23,"value":1020},{"type":17,"tag":25,"props":1211,"children":1212},{},[1213],{"type":23,"value":1025},{"type":17,"tag":25,"props":1215,"children":1216},{},[1217],{"type":23,"value":1030},{"type":17,"tag":25,"props":1219,"children":1220},{},[1221],{"type":23,"value":1222},"return self.bce_loss(pred, mindspore.numpy.ones_like(pred))",{"type":17,"tag":25,"props":1224,"children":1225},{},[1226],{"type":23,"value":1025},{"type":17,"tag":25,"props":1228,"children":1229},{},[1230],{"type":23,"value":1231},"def adv_loss_d_fake(self, 
pred):",{"type":17,"tag":25,"props":1233,"children":1234},{},[1235],{"type":23,"value":1236},"\"\"\"Loss type of the generated animation image\"\"\"",{"type":17,"tag":25,"props":1238,"children":1239},{},[1240],{"type":23,"value":1010},{"type":17,"tag":25,"props":1242,"children":1243},{},[1244],{"type":23,"value":1245},"return mindspore.numpy.mean(self.relu(1.0 + pred))",{"type":17,"tag":25,"props":1247,"children":1248},{},[1249],{"type":23,"value":1020},{"type":17,"tag":25,"props":1251,"children":1252},{},[1253],{"type":23,"value":1254},"return mindspore.numpy.mean(mindspore.numpy.square(pred))",{"type":17,"tag":25,"props":1256,"children":1257},{},[1258],{"type":23,"value":1030},{"type":17,"tag":25,"props":1260,"children":1261},{},[1262],{"type":23,"value":1035},{"type":17,"tag":25,"props":1264,"children":1265},{},[1266],{"type":23,"value":1254},{"type":17,"tag":25,"props":1268,"children":1269},{},[1270],{"type":17,"tag":29,"props":1271,"children":1272},{},[1273],{"type":23,"value":1274},"Model Implementation",{"type":17,"tag":25,"props":1276,"children":1277},{},[1278,1280,1285],{"type":23,"value":1279},"Because of the GAN's distinctive structure, its loss takes the multi-output form of the discriminator and the generator, which sets it apart from a common classification network. 
MindSpore requires that operations related to the loss function and optimizer be implemented as subclasses of ",{"type":17,"tag":29,"props":1281,"children":1282},{},[1283],{"type":23,"value":1284},"nn.Cell",{"type":23,"value":1286},", so you can define a custom AnimeGAN class to connect the network and the loss function.",{"type":17,"tag":25,"props":1288,"children":1289},{},[1290],{"type":23,"value":1291},"class AnimeGAN(nn.Cell):",{"type":17,"tag":25,"props":1293,"children":1294},{},[1295],{"type":23,"value":1296},"\"\"\"Define the AnimeGAN network.\"\"\"",{"type":17,"tag":25,"props":1298,"children":1299},{},[1300],{"type":23,"value":1301},"def __init__(self, my_train_one_step_cell_for_d, my_train_one_step_cell_for_g):",{"type":17,"tag":25,"props":1303,"children":1304},{},[1305],{"type":23,"value":1306},"super(AnimeGAN, self).__init__(auto_prefix=True)",{"type":17,"tag":25,"props":1308,"children":1309},{},[1310],{"type":23,"value":1311},"self.my_train_one_step_cell_for_g = my_train_one_step_cell_for_g",{"type":17,"tag":25,"props":1313,"children":1314},{},[1315],{"type":23,"value":1316},"self.my_train_one_step_cell_for_d = my_train_one_step_cell_for_d",{"type":17,"tag":25,"props":1318,"children":1319},{},[1320],{"type":23,"value":1128},{"type":17,"tag":25,"props":1322,"children":1323},{},[1324],{"type":23,"value":1325},"output_d_loss = self.my_train_one_step_cell_for_d(img, anime, anime_gray, anime_smt_gray)",{"type":17,"tag":25,"props":1327,"children":1328},{},[1329],{"type":23,"value":1330},"output_g_loss = self.my_train_one_step_cell_for_g(img, anime_gray)",{"type":17,"tag":25,"props":1332,"children":1333},{},[1334],{"type":23,"value":1335},"return output_d_loss, output_g_loss",{"type":17,"tag":25,"props":1337,"children":1338},{},[1339],{"type":17,"tag":29,"props":1340,"children":1341},{},[1342],{"type":23,"value":1343},"Model Training",{"type":17,"tag":25,"props":1345,"children":1346},{},[1347],{"type":23,"value":1348},"Training is divided into two parts: 
discriminator training and generator training. The discriminator is trained to maximize its ability to distinguish real images from generated ones, while the generator is trained to produce increasingly convincing fake animation images. Each reaches its optimum by minimizing its own loss function.",{"type":17,"tag":25,"props":1350,"children":1351},{},[1352],{"type":23,"value":232},{"type":17,"tag":25,"props":1354,"children":1355},{},[1356],{"type":23,"value":403},{"type":17,"tag":25,"props":1358,"children":1359},{},[1360],{"type":23,"value":1361},"import cv2",{"type":17,"tag":25,"props":1363,"children":1364},{},[1365],{"type":23,"value":247},{"type":17,"tag":25,"props":1367,"children":1368},{},[1369],{"type":23,"value":776},{"type":17,"tag":25,"props":1371,"children":1372},{},[1373],{"type":23,"value":1374},"from mindspore import Tensor",{"type":17,"tag":25,"props":1376,"children":1377},{},[1378],{"type":23,"value":1379},"from mindspore import float32 as dtype",{"type":17,"tag":25,"props":1381,"children":1382},{},[1383],{"type":23,"value":1384},"from mindspore import nn",{"type":17,"tag":25,"props":1386,"children":1387},{},[1388],{"type":23,"value":1389},"from tqdm import tqdm",{"type":17,"tag":25,"props":1391,"children":1392},{},[1393],{"type":23,"value":1394},"from src.models.generator import Generator",{"type":17,"tag":25,"props":1396,"children":1397},{},[1398],{"type":23,"value":1399},"from src.models.discriminator import Discriminator",{"type":17,"tag":25,"props":1401,"children":1402},{},[1403],{"type":23,"value":1404},"from src.models.animegan import AnimeGAN",{"type":17,"tag":25,"props":1406,"children":1407},{},[1408],{"type":23,"value":1409},"from src.animeganv2_utils.pre_process import denormalize_input",{"type":17,"tag":25,"props":1411,"children":1412},{},[1413],{"type":23,"value":1414},"from src.losses.loss import GeneratorLoss, 
DiscriminatorLoss",{"type":17,"tag":25,"props":1416,"children":1417},{},[1418],{"type":23,"value":242},{"type":17,"tag":25,"props":1420,"children":1421},{},[1422],{"type":23,"value":252},{"type":17,"tag":25,"props":1424,"children":1425},{},[1426],{"type":23,"value":1427},"parser = argparse.ArgumentParser(description='train')",{"type":17,"tag":25,"props":1429,"children":1430},{},[1431],{"type":23,"value":1432},"parser.add_argument('--device_target', default='Ascend', choices=['CPU', 'GPU', 'Ascend'], type=str)",{"type":17,"tag":25,"props":1434,"children":1435},{},[1436],{"type":23,"value":1437},"parser.add_argument('--device_id', default=0, type=int)",{"type":17,"tag":25,"props":1439,"children":1440},{},[1441],{"type":23,"value":1442},"parser.add_argument('--dataset', default='Paprika', choices=['Hayao', 'Shinkai', 'Paprika'], type=str)",{"type":17,"tag":25,"props":1444,"children":1445},{},[1446],{"type":23,"value":267},{"type":17,"tag":25,"props":1448,"children":1449},{},[1450],{"type":23,"value":1451},"parser.add_argument('--checkpoint_dir', default='./checkpoints', type=str)",{"type":17,"tag":25,"props":1453,"children":1454},{},[1455],{"type":23,"value":1456},"parser.add_argument('--vgg19_path', default='./vgg.ckpt', type=str)",{"type":17,"tag":25,"props":1458,"children":1459},{},[1460],{"type":23,"value":1461},"parser.add_argument('--save_image_dir', default='./images', type=str)",{"type":17,"tag":25,"props":1463,"children":1464},{},[1465],{"type":23,"value":1466},"parser.add_argument('--resume', default=False, type=bool)",{"type":17,"tag":25,"props":1468,"children":1469},{},[1470],{"type":23,"value":1471},"parser.add_argument('--phase', default='train', type=str)",{"type":17,"tag":25,"props":1473,"children":1474},{},[1475],{"type":23,"value":1476},"parser.add_argument('--epochs', default=2, type=int)",{"type":17,"tag":25,"props":1478,"children":1479},{},[1480],{"type":23,"value":1481},"parser.add_argument('--init_epochs', default=5, 
type=int)",{"type":17,"tag":25,"props":1483,"children":1484},{},[1485],{"type":23,"value":272},{"type":17,"tag":25,"props":1487,"children":1488},{},[1489],{"type":23,"value":282},{"type":17,"tag":25,"props":1491,"children":1492},{},[1493],{"type":23,"value":1494},"parser.add_argument('--save_interval', default=1, type=int)",{"type":17,"tag":25,"props":1496,"children":1497},{},[1498],{"type":23,"value":277},{"type":17,"tag":25,"props":1500,"children":1501},{},[1502],{"type":23,"value":1503},"parser.add_argument('--lr_g', default=2.0e-4, type=float)",{"type":17,"tag":25,"props":1505,"children":1506},{},[1507],{"type":23,"value":1508},"parser.add_argument('--lr_d', default=4.0e-4, type=float)",{"type":17,"tag":25,"props":1510,"children":1511},{},[1512],{"type":23,"value":1513},"parser.add_argument('--init_lr', default=1.0e-3, type=float)",{"type":17,"tag":25,"props":1515,"children":1516},{},[1517],{"type":23,"value":1518},"parser.add_argument('--gan_loss', default='lsgan', choices=['lsgan', 'hinge', 'bce'], type=str)",{"type":17,"tag":25,"props":1520,"children":1521},{},[1522],{"type":23,"value":1523},"parser.add_argument('--wadvg', default=1.7, type=float, help='Adversarial loss weight for G')",{"type":17,"tag":25,"props":1525,"children":1526},{},[1527],{"type":23,"value":1528},"parser.add_argument('--wadvd', default=300, type=float, help='Adversarial loss weight for D')",{"type":17,"tag":25,"props":1530,"children":1531},{},[1532],{"type":23,"value":1533},"parser.add_argument('--wcon', default=1.8, type=float, help='Content loss weight')",{"type":17,"tag":25,"props":1535,"children":1536},{},[1537],{"type":23,"value":1538},"parser.add_argument('--wgra', default=3.0, type=float, help='Gram loss weight')",{"type":17,"tag":25,"props":1540,"children":1541},{},[1542],{"type":23,"value":1543},"parser.add_argument('--wcol', default=10.0, type=float, help='Color loss 
weight')",{"type":17,"tag":25,"props":1545,"children":1546},{},[1547],{"type":23,"value":1548},"parser.add_argument('--img_ch', default=3, type=int, help='The size of image channel')",{"type":17,"tag":25,"props":1550,"children":1551},{},[1552],{"type":23,"value":1553},"parser.add_argument('--ch', default=64, type=int, help='Base channel number per layer')",{"type":17,"tag":25,"props":1555,"children":1556},{},[1557],{"type":23,"value":1558},"parser.add_argument('--n_dis', default=3, type=int, help='The number of discriminator layer')",{"type":17,"tag":25,"props":1560,"children":1561},{},[1562],{"type":23,"value":287},{"type":17,"tag":25,"props":1564,"children":1565},{},[1566],{"type":23,"value":1567},"# Instantiate the generator and discriminator.",{"type":17,"tag":25,"props":1569,"children":1570},{},[1571],{"type":23,"value":1572},"generator = Generator()",{"type":17,"tag":25,"props":1574,"children":1575},{},[1576],{"type":23,"value":1577},"discriminator = Discriminator(args.ch, args.n_dis)",{"type":17,"tag":25,"props":1579,"children":1580},{},[1581],{"type":23,"value":1582},"# Set up two separate optimizers, one for D and the other for G.",{"type":17,"tag":25,"props":1584,"children":1585},{},[1586],{"type":23,"value":1587},"optimizer_g = nn.Adam(generator.trainable_params(), learning_rate=args.lr_g, beta1=0.5, beta2=0.999)",{"type":17,"tag":25,"props":1589,"children":1590},{},[1591],{"type":23,"value":1592},"optimizer_d = nn.Adam(discriminator.trainable_params(), learning_rate=args.lr_d, beta1=0.5, beta2=0.999)",{"type":17,"tag":25,"props":1594,"children":1595},{},[1596],{"type":23,"value":1597},"# Instantiate WithLossCell.",{"type":17,"tag":25,"props":1599,"children":1600},{},[1601],{"type":23,"value":1602},"net_d_with_criterion = DiscriminatorLoss(discriminator, generator, args)",{"type":17,"tag":25,"props":1604,"children":1605},{},[1606],{"type":23,"value":1607},"net_g_with_criterion = GeneratorLoss(discriminator, generator, 
args)",{"type":17,"tag":25,"props":1609,"children":1610},{},[1611],{"type":23,"value":1612},"# Instantiate TrainOneStepCell.",{"type":17,"tag":25,"props":1614,"children":1615},{},[1616],{"type":23,"value":1617},"my_train_one_step_cell_for_d = nn.TrainOneStepCell(net_d_with_criterion, optimizer_d)",{"type":17,"tag":25,"props":1619,"children":1620},{},[1621],{"type":23,"value":1622},"my_train_one_step_cell_for_g = nn.TrainOneStepCell(net_g_with_criterion, optimizer_g)",{"type":17,"tag":25,"props":1624,"children":1625},{},[1626],{"type":23,"value":1627},"animegan = AnimeGAN(my_train_one_step_cell_for_d, my_train_one_step_cell_for_g)",{"type":17,"tag":25,"props":1629,"children":1630},{},[1631],{"type":23,"value":1632},"animegan.set_train()",{"type":17,"tag":25,"props":1634,"children":1635},{},[1636],{"type":23,"value":297},{"type":17,"tag":25,"props":1638,"children":1639},{},[1640],{"type":23,"value":302},{"type":17,"tag":25,"props":1642,"children":1643},{},[1644],{"type":23,"value":307},{"type":17,"tag":25,"props":1646,"children":1647},{},[1648],{"type":23,"value":1649},"size = data.get_dataset_size()",{"type":17,"tag":25,"props":1651,"children":1652},{},[1653],{"type":23,"value":1654},"for epoch in range(args.epochs):",{"type":17,"tag":25,"props":1656,"children":1657},{},[1658],{"type":23,"value":1659},"iters = 0",{"type":17,"tag":25,"props":1661,"children":1662},{},[1663],{"type":23,"value":1664},"# Read data for each round of training.",{"type":17,"tag":25,"props":1666,"children":1667},{},[1668],{"type":23,"value":1669},"for img, anime, anime_gray, anime_smt_gray in tqdm(data.create_tuple_iterator()):",{"type":17,"tag":25,"props":1671,"children":1672},{},[1673],{"type":23,"value":1674},"img = Tensor(img, dtype=dtype)",{"type":17,"tag":25,"props":1676,"children":1677},{},[1678],{"type":23,"value":1679},"anime = Tensor(anime, dtype=dtype)",{"type":17,"tag":25,"props":1681,"children":1682},{},[1683],{"type":23,"value":1684},"anime_gray = Tensor(anime_gray, 
dtype=dtype)",{"type":17,"tag":25,"props":1686,"children":1687},{},[1688],{"type":23,"value":1689},"anime_smt_gray = Tensor(anime_smt_gray, dtype=dtype)",{"type":17,"tag":25,"props":1691,"children":1692},{},[1693],{"type":23,"value":1694},"net_d_loss, net_g_loss = animegan(img, anime, anime_gray, anime_smt_gray)",{"type":17,"tag":25,"props":1696,"children":1697},{},[1698],{"type":23,"value":1699},"if iters % 50 == 0:",{"type":17,"tag":25,"props":1701,"children":1702},{},[1703],{"type":23,"value":1704},"# Output training records.",{"type":17,"tag":25,"props":1706,"children":1707},{},[1708],{"type":23,"value":1709},"print('[%d/%d][%d/%d]\\tLoss_D: %.4f\\tLoss_G: %.4f' % (",{"type":17,"tag":25,"props":1711,"children":1712},{},[1713],{"type":23,"value":1714},"epoch + 1, args.epochs, iters, size, net_d_loss.asnumpy().min(), net_g_loss.asnumpy().min()))",{"type":17,"tag":25,"props":1716,"children":1717},{},[1718],{"type":23,"value":1719},"# After each epoch ends, use the generator to generate a group of images.",{"type":17,"tag":25,"props":1721,"children":1722},{},[1723],{"type":23,"value":1724},"if (epoch % args.save_interval) == 0 and (iters == size - 1):",{"type":17,"tag":25,"props":1726,"children":1727},{},[1728],{"type":23,"value":1729},"stylized = denormalize_input(generator(img)).asnumpy()",{"type":17,"tag":25,"props":1731,"children":1732},{},[1733],{"type":23,"value":1734},"no_stylized = denormalize_input(img).asnumpy()",{"type":17,"tag":25,"props":1736,"children":1737},{},[1738],{"type":23,"value":1739},"imgs = cv2.cvtColor(stylized[0, :, :, :].transpose(1, 2, 0), cv2.COLOR_RGB2BGR)",{"type":17,"tag":25,"props":1741,"children":1742},{},[1743],{"type":23,"value":1744},"imgs1 = cv2.cvtColor(no_stylized[0, :, :, :].transpose(1, 2, 0), cv2.COLOR_RGB2BGR)",{"type":17,"tag":25,"props":1746,"children":1747},{},[1748],{"type":23,"value":1749},"for i in range(1, args.batch_size):",{"type":17,"tag":25,"props":1751,"children":1752},{},[1753],{"type":23,"value":1754},"imgs 
= np.concatenate(",{"type":17,"tag":25,"props":1756,"children":1757},{},[1758],{"type":23,"value":1759},"(imgs, cv2.cvtColor(stylized[i, :, :, :].transpose(1, 2, 0), cv2.COLOR_RGB2BGR)), axis=1)",{"type":17,"tag":25,"props":1761,"children":1762},{},[1763],{"type":23,"value":1764},"imgs1 = np.concatenate(",{"type":17,"tag":25,"props":1766,"children":1767},{},[1768],{"type":23,"value":1769},"(imgs1, cv2.cvtColor(no_stylized[i, :, :, :].transpose(1, 2, 0), cv2.COLOR_RGB2BGR)), axis=1)",{"type":17,"tag":25,"props":1771,"children":1772},{},[1773],{"type":23,"value":1774},"cv2.imwrite(",{"type":17,"tag":25,"props":1776,"children":1777},{},[1778],{"type":23,"value":1779},"os.path.join(args.save_image_dir, args.dataset, 'epoch_' + str(epoch) + '.jpg'),",{"type":17,"tag":25,"props":1781,"children":1782},{},[1783],{"type":23,"value":1784},"np.concatenate((imgs1, imgs), axis=0))",{"type":17,"tag":25,"props":1786,"children":1787},{},[1788],{"type":23,"value":1789},"# Save the network model parameters as a CKPT file.",{"type":17,"tag":25,"props":1791,"children":1792},{},[1793],{"type":23,"value":1794},"mindspore.save_checkpoint(generator, os.path.join(args.checkpoint_dir, args.dataset,",{"type":17,"tag":25,"props":1796,"children":1797},{},[1798],{"type":23,"value":1799},"'netG_' + str(epoch) + '.ckpt'))",{"type":17,"tag":25,"props":1801,"children":1802},{},[1803],{"type":23,"value":1804},"iters += 1",{"type":17,"tag":25,"props":1806,"children":1807},{},[1808],{"type":23,"value":1809},"Mean(B, G, R) of Paprika are [-22.43617309 -0.19372649 22.62989958]",{"type":17,"tag":25,"props":1811,"children":1812},{},[1813],{"type":23,"value":1814},"Dataset: real 6656 style 1553, smooth 
1553",{"type":17,"tag":25,"props":1816,"children":1817},{},[1818],{"type":17,"tag":110,"props":1819,"children":1821},{"alt":7,"src":1820},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/2d44626534a84c2ab36e9fead6e03e51.png",[],{"type":17,"tag":25,"props":1823,"children":1824},{},[1825],{"type":17,"tag":29,"props":1826,"children":1827},{},[1828],{"type":23,"value":1829},"Model Inference",{"type":17,"tag":25,"props":1831,"children":1832},{},[1833],{"type":23,"value":1834},"Run the following code and input a real-life landscape image into the network to generate an animation image:",{"type":17,"tag":25,"props":1836,"children":1837},{},[1838],{"type":23,"value":232},{"type":17,"tag":25,"props":1840,"children":1841},{},[1842],{"type":23,"value":403},{"type":17,"tag":25,"props":1844,"children":1845},{},[1846],{"type":23,"value":1361},{"type":17,"tag":25,"props":1848,"children":1849},{},[1850],{"type":23,"value":1374},{"type":17,"tag":25,"props":1852,"children":1853},{},[1854],{"type":23,"value":1379},{"type":17,"tag":25,"props":1856,"children":1857},{},[1858],{"type":23,"value":1859},"from mindspore import load_checkpoint, load_param_into_net",{"type":17,"tag":25,"props":1861,"children":1862},{},[1863],{"type":23,"value":1864},"from mindspore.train.model import Model",{"type":17,"tag":25,"props":1866,"children":1867},{},[1868],{"type":23,"value":1389},{"type":17,"tag":25,"props":1870,"children":1871},{},[1872],{"type":23,"value":1394},{"type":17,"tag":25,"props":1874,"children":1875},{},[1876],{"type":23,"value":1877},"from src.animeganv2_utils.pre_process import transform, inverse_transform_infer",{"type":17,"tag":25,"props":1879,"children":1880},{},[1881],{"type":23,"value":252},{"type":17,"tag":25,"props":1883,"children":1884},{},[1885],{"type":23,"value":1886},"parser = 
argparse.ArgumentParser(description='infer')",{"type":17,"tag":25,"props":1888,"children":1889},{},[1890],{"type":23,"value":1432},{"type":17,"tag":25,"props":1892,"children":1893},{},[1894],{"type":23,"value":1437},{"type":17,"tag":25,"props":1896,"children":1897},{},[1898],{"type":23,"value":1899},"parser.add_argument('--infer_dir', default='./dataset/test/real', type=str)",{"type":17,"tag":25,"props":1901,"children":1902},{},[1903],{"type":23,"value":1904},"parser.add_argument('--infer_output', default='./dataset/output', type=str)",{"type":17,"tag":25,"props":1906,"children":1907},{},[1908],{"type":23,"value":1909},"parser.add_argument('--ckpt_file_name', default='./checkpoints/Hayao/netG_30.ckpt', type=str)",{"type":17,"tag":25,"props":1911,"children":1912},{},[1913],{"type":23,"value":287},{"type":17,"tag":25,"props":1915,"children":1916},{},[1917],{"type":23,"value":1918},"# Instantiate the generator.",{"type":17,"tag":25,"props":1920,"children":1921},{},[1922],{"type":23,"value":1923},"net = Generator()",{"type":17,"tag":25,"props":1925,"children":1926},{},[1927],{"type":23,"value":1928},"# Obtain model parameters from the file and load them to the network.",{"type":17,"tag":25,"props":1930,"children":1931},{},[1932],{"type":23,"value":1933},"param_dict = load_checkpoint(args.ckpt_file_name)",{"type":17,"tag":25,"props":1935,"children":1936},{},[1937],{"type":23,"value":831},{"type":17,"tag":25,"props":1939,"children":1940},{},[1941],{"type":23,"value":1942},"data = os.listdir(args.infer_dir)",{"type":17,"tag":25,"props":1944,"children":1945},{},[1946],{"type":23,"value":1947},"bar = tqdm(data)",{"type":17,"tag":25,"props":1949,"children":1950},{},[1951],{"type":23,"value":1952},"model = Model(net)",{"type":17,"tag":25,"props":1954,"children":1955},{},[1956],{"type":23,"value":1957},"if not 
os.path.exists(args.infer_output):",{"type":17,"tag":25,"props":1959,"children":1960},{},[1961],{"type":23,"value":1962},"os.mkdir(args.infer_output)",{"type":17,"tag":25,"props":1964,"children":1965},{},[1966],{"type":23,"value":1967},"# Read and process the images in a loop.",{"type":17,"tag":25,"props":1969,"children":1970},{},[1971],{"type":23,"value":1972},"for img_path in bar:",{"type":17,"tag":25,"props":1974,"children":1975},{},[1976],{"type":23,"value":1977},"img = transform(os.path.join(args.infer_dir, img_path))",{"type":17,"tag":25,"props":1979,"children":1980},{},[1981],{"type":23,"value":1674},{"type":17,"tag":25,"props":1983,"children":1984},{},[1985],{"type":23,"value":1986},"output = model.predict(img)",{"type":17,"tag":25,"props":1988,"children":1989},{},[1990],{"type":23,"value":1991},"img = inverse_transform_infer(img)",{"type":17,"tag":25,"props":1993,"children":1994},{},[1995],{"type":23,"value":1996},"output = inverse_transform_infer(output)",{"type":17,"tag":25,"props":1998,"children":1999},{},[2000],{"type":23,"value":2001},"output = cv2.resize(output, (img.shape[1], img.shape[0]))",{"type":17,"tag":25,"props":2003,"children":2004},{},[2005],{"type":23,"value":2006},"# Save the generated image.",{"type":17,"tag":25,"props":2008,"children":2009},{},[2010],{"type":23,"value":2011},"cv2.imwrite(os.path.join(args.infer_output, img_path), output)",{"type":17,"tag":25,"props":2013,"children":2014},{},[2015],{"type":23,"value":2016},"print('Successfully output images in ' + args.infer_output)",{"type":17,"tag":25,"props":2018,"children":2019},{},[2020],{"type":17,"tag":110,"props":2021,"children":2023},{"alt":7,"src":2022},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/df746430abaf4af9ac232bb7766c841c.png",[],{"type":17,"tag":25,"props":2025,"children":2026},{},[2027],{"type":23,"value":2028},"Model inference results for each 
style:",{"type":17,"tag":25,"props":2030,"children":2031},{},[2032],{"type":17,"tag":110,"props":2033,"children":2035},{"alt":7,"src":2034},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/f9350ce27efd4cdbb891ae50c5480b2d.png",[],{"type":17,"tag":25,"props":2037,"children":2038},{},[2039],{"type":17,"tag":29,"props":2040,"children":2041},{},[2042],{"type":23,"value":2043},"Video Processing",{"type":17,"tag":25,"props":2045,"children":2046},{},[2047],{"type":23,"value":2048},"In the following method, the input video file must be in MP4 format. Note that the audio track is not retained after processing.",{"type":17,"tag":25,"props":2050,"children":2051},{},[2052],{"type":23,"value":232},{"type":17,"tag":25,"props":2054,"children":2055},{},[2056],{"type":23,"value":1361},{"type":17,"tag":25,"props":2058,"children":2059},{},[2060],{"type":23,"value":1374},{"type":17,"tag":25,"props":2062,"children":2063},{},[2064],{"type":23,"value":1379},{"type":17,"tag":25,"props":2066,"children":2067},{},[2068],{"type":23,"value":1859},{"type":17,"tag":25,"props":2070,"children":2071},{},[2072],{"type":23,"value":1864},{"type":17,"tag":25,"props":2074,"children":2075},{},[2076],{"type":23,"value":1389},{"type":17,"tag":25,"props":2078,"children":2079},{},[2080],{"type":23,"value":1394},{"type":17,"tag":25,"props":2082,"children":2083},{},[2084],{"type":23,"value":2085},"from src.animeganv2_utils.adjust_brightness import adjust_brightness_from_src_to_dst",{"type":17,"tag":25,"props":2087,"children":2088},{},[2089],{"type":23,"value":2090},"from src.animeganv2_utils.pre_process import preprocessing, convert_image, inverse_image",{"type":17,"tag":25,"props":2092,"children":2093},{},[2094],{"type":23,"value":2095},"# Load parameters. 
Set video_input and video_output to the actual input and output video paths, and select an inference model for video_ckpt_file_name.",{"type":17,"tag":25,"props":2097,"children":2098},{},[2099],{"type":23,"value":2100},"parser = argparse.ArgumentParser(description='video2anime')",{"type":17,"tag":25,"props":2102,"children":2103},{},[2104],{"type":23,"value":2105},"parser.add_argument('--device_target', default='GPU', choices=['CPU', 'GPU', 'Ascend'], type=str)",{"type":17,"tag":25,"props":2107,"children":2108},{},[2109],{"type":23,"value":1437},{"type":17,"tag":25,"props":2111,"children":2112},{},[2113],{"type":23,"value":2114},"parser.add_argument('--video_ckpt_file_name', default='./checkpoints/Hayao/netG_30.ckpt', type=str)",{"type":17,"tag":25,"props":2116,"children":2117},{},[2118],{"type":23,"value":2119},"parser.add_argument('--video_input', default='./video/test.mp4', type=str)",{"type":17,"tag":25,"props":2121,"children":2122},{},[2123],{"type":23,"value":2124},"parser.add_argument('--video_output', default='./video/output.mp4', type=str)",{"type":17,"tag":25,"props":2126,"children":2127},{},[2128],{"type":23,"value":2129},"parser.add_argument('--output_format', default='mp4v', type=str)",{"type":17,"tag":25,"props":2131,"children":2132},{},[2133],{"type":23,"value":2134},"parser.add_argument('--img_size', default=[256, 256], type=list, help='The size of image: H and W')",{"type":17,"tag":25,"props":2136,"children":2137},{},[2138],{"type":23,"value":287},{"type":17,"tag":25,"props":2140,"children":2141},{},[2142],{"type":23,"value":1918},{"type":17,"tag":25,"props":2144,"children":2145},{},[2146],{"type":23,"value":1923},{"type":17,"tag":25,"props":2148,"children":2149},{},[2150],{"type":23,"value":2151},"param_dict = load_checkpoint(args.video_ckpt_file_name)",{"type":17,"tag":25,"props":2153,"children":2154},{},[2155],{"type":23,"value":2156},"# Read the video 
file.",{"type":17,"tag":25,"props":2158,"children":2159},{},[2160],{"type":23,"value":2161},"vid = cv2.VideoCapture(args.video_input)",{"type":17,"tag":25,"props":2163,"children":2164},{},[2165],{"type":23,"value":2166},"total = int(vid.get(cv2.CAP_PROP_FRAME_COUNT))",{"type":17,"tag":25,"props":2168,"children":2169},{},[2170],{"type":23,"value":2171},"fps = int(vid.get(cv2.CAP_PROP_FPS))",{"type":17,"tag":25,"props":2173,"children":2174},{},[2175],{"type":23,"value":2176},"codec = cv2.VideoWriter_fourcc(*args.output_format)",{"type":17,"tag":25,"props":2178,"children":2179},{},[2180],{"type":23,"value":1928},{"type":17,"tag":25,"props":2182,"children":2183},{},[2184],{"type":23,"value":831},{"type":17,"tag":25,"props":2186,"children":2187},{},[2188],{"type":23,"value":1952},{"type":17,"tag":25,"props":2190,"children":2191},{},[2192],{"type":23,"value":2193},"ret, img = vid.read()",{"type":17,"tag":25,"props":2195,"children":2196},{},[2197],{"type":23,"value":2198},"img = preprocessing(img, args.img_size)",{"type":17,"tag":25,"props":2200,"children":2201},{},[2202],{"type":23,"value":2203},"height, width = img.shape[:2]",{"type":17,"tag":25,"props":2205,"children":2206},{},[2207],{"type":23,"value":2208},"# Set the resolution of the output video.",{"type":17,"tag":25,"props":2210,"children":2211},{},[2212],{"type":23,"value":2213},"out = cv2.VideoWriter(args.video_output, codec, fps, (width, height))",{"type":17,"tag":25,"props":2215,"children":2216},{},[2217],{"type":23,"value":2218},"pbar = tqdm(total=total)",{"type":17,"tag":25,"props":2220,"children":2221},{},[2222],{"type":23,"value":2223},"vid.set(cv2.CAP_PROP_POS_FRAMES, 0)",{"type":17,"tag":25,"props":2225,"children":2226},{},[2227],{"type":23,"value":2228},"# Process video frames.",{"type":17,"tag":25,"props":2230,"children":2231},{},[2232],{"type":23,"value":2233},"while ret:",{"type":17,"tag":25,"props":2235,"children":2236},{},[2237],{"type":23,"value":2238},"ret, frame = 
vid.read()",{"type":17,"tag":25,"props":2240,"children":2241},{},[2242],{"type":23,"value":2243},"if frame is None:",{"type":17,"tag":25,"props":2245,"children":2246},{},[2247],{"type":23,"value":2248},"print('Warning: got empty frame.')",{"type":17,"tag":25,"props":2250,"children":2251},{},[2252],{"type":23,"value":2253},"continue",{"type":17,"tag":25,"props":2255,"children":2256},{},[2257],{"type":23,"value":2258},"img = convert_image(frame, args.img_size)",{"type":17,"tag":25,"props":2260,"children":2261},{},[2262],{"type":23,"value":1674},{"type":17,"tag":25,"props":2264,"children":2265},{},[2266],{"type":23,"value":2267},"fake_img = model.predict(img).asnumpy()",{"type":17,"tag":25,"props":2269,"children":2270},{},[2271],{"type":23,"value":2272},"fake_img = inverse_image(fake_img)",{"type":17,"tag":25,"props":2274,"children":2275},{},[2276],{"type":23,"value":2277},"fake_img = adjust_brightness_from_src_to_dst(fake_img, cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))",{"type":17,"tag":25,"props":2279,"children":2280},{},[2281],{"type":23,"value":2282},"# Save the video file.",{"type":17,"tag":25,"props":2284,"children":2285},{},[2286],{"type":23,"value":2287},"out.write(cv2.cvtColor(fake_img, cv2.COLOR_BGR2RGB))",{"type":17,"tag":25,"props":2289,"children":2290},{},[2291],{"type":23,"value":2292},"pbar.update(1)",{"type":17,"tag":25,"props":2294,"children":2295},{},[2296],{"type":23,"value":2297},"pbar.close()",{"type":17,"tag":25,"props":2299,"children":2300},{},[2301],{"type":23,"value":2302},"vid.release()",{"type":17,"tag":25,"props":2304,"children":2305},{},[2306],{"type":17,"tag":29,"props":2307,"children":2308},{},[2309],{"type":23,"value":2310},"Algorithm 
Process",{"type":17,"tag":25,"props":2312,"children":2313},{},[2314],{"type":17,"tag":110,"props":2315,"children":2317},{"alt":7,"src":2316},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/7af16e00e210408fa50be3d586edcb36.png",[],{"type":17,"tag":25,"props":2319,"children":2320},{},[2321],{"type":17,"tag":29,"props":2322,"children":2323},{},[2324],{"type":23,"value":2325},"References",{"type":17,"tag":25,"props":2327,"children":2328},{},[2329],{"type":23,"value":2330},"[1] Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2414-2423). [2] Johnson, J., Alahi, A., & Fei-Fei, L. (2016, October). Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision (pp. 694-711). Springer, Cham. [3] Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., & Yang, M. H. (2017). Diversified texture synthesis with feed-forward networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3920-3928). [4] Chen, Y., Lai, Y. K., & Liu, Y. J. (2018). Cartoongan: Generative adversarial networks for photo cartoonization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 9465-9474). [5] Li, Y., Liu, M. Y., Li, X., Yang, M. H., & Kautz, J. (2018). A closed-form solution to photorealistic image stylization. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 
453-468).",{"type":17,"tag":25,"props":2332,"children":2333},{},[2334,2336,2344],{"type":23,"value":2335},"For more MindSpore application cases, visit ",{"type":17,"tag":2337,"props":2338,"children":2342},"a",{"href":2339,"rel":2340},"https://www.mindspore.cn/en",[2341],"nofollow",[2343],{"type":23,"value":2339},{"type":23,"value":2345},".",{"title":7,"searchDepth":2347,"depth":2347,"links":2348},4,[],"markdown","content:technology-blogs:en:2765.md","content","technology-blogs/en/2765.md","technology-blogs/en/2765","md",1776506107278]