[{"data":1,"prerenderedAt":793},["ShallowReactive",2],{"content-query-SitGqhEctI":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":787,"_id":788,"_source":789,"_file":790,"_stem":791,"_extension":792},"/news/en/2766","en",false,"","Project Introduction | MindSpore Implementation of a Capsule Neural Network-Based Image Captioning Algorithm","Image captioning involves the processing of visual information and generating statements that comply with human habits, which correspond to two major disciplines of AI: computer vision and natural language processing.","2022-12-15","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/2b08812ce7a34d4e9349ce07faec8ad5.png","news",{"type":14,"children":15,"toc":784},"root",[16,24,30,35,48,57,65,70,78,83,91,99,104,111,116,121,128,136,141,149,168,173,178,183,188,193,198,203,208,213,218,223,228,233,241,274,279,284,289,294,299,303,308,313,318,323,328,332,337,341,346,351,356,360,365,370,374,379,384,388,393,397,462,480,484,489,494,498,502,507,512,516,521,526,531,536,541,546,551,556,561,566,571,576,581,586,591,596,601,606,611,616,621,626,631,636,641,646,651,656,664,704,709,714,719,724,729,734,739,744,749,754,759,764,772,777],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"project-introduction-mindspore-implementation-of-a-capsule-neural-network-based-image-captioning-algorithm",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"Author: Liu Han, Liu Yuanqiu | Organization: School of Software, Dalian University of Technology",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":23,"value":34},"Project: MindSpore Implementation of a Capsule Neural Network-Based Image Captioning Algorithm",{"type":17,"tag":25,"props":36,"children":37},{},[38,40],{"type":23,"value":39},"Project Link: 
",{"type":17,"tag":41,"props":42,"children":46},"a",{"href":43,"rel":44},"https://github.com/Liu-Yuanqiu/acn_mindspore",[45],"nofollow",[47],{"type":23,"value":43},{"type":17,"tag":25,"props":49,"children":50},{},[51],{"type":17,"tag":52,"props":53,"children":54},"strong",{},[55],{"type":23,"value":56},"Overview",{"type":17,"tag":25,"props":58,"children":59},{},[60],{"type":17,"tag":52,"props":61,"children":62},{},[63],{"type":23,"value":64},"1.1 Image Captioning",{"type":17,"tag":25,"props":66,"children":67},{},[68],{"type":23,"value":69},"Humans can easily use languages to describe what they see, but computers have a hard time doing so. The task of image captioning is to teach computers how to describe what they see. This involves the processing of visual information and generating statements that comply with human habits, which correspond to two major disciplines of AI: computer vision and natural language processing. Image captioning is not only of great significance in algorithm research but also has a wide range of applications in scenarios such as blind assistance and image-text conversion.",{"type":17,"tag":25,"props":71,"children":72},{},[73],{"type":17,"tag":52,"props":74,"children":75},{},[76],{"type":23,"value":77},"1.2 Capsule Network",{"type":17,"tag":25,"props":79,"children":80},{},[81],{"type":23,"value":82},"In a convolutional neural network, layers are locally connected and share parameters, but the associations and location relationships between features are not considered. In a capsule neural network (capsnet), the spatial information and the probability of an object are encoded into a capsule vector, which is then normalized by a non-linear activation function that leaves the direction of the vector unchanged and dynamically routed to a higher-level capsule. 
In this way, the capsnet learns useful features and the relationships between them.",{"type":17,"tag":25,"props":84,"children":85},{},[86],{"type":17,"tag":87,"props":88,"children":90},"img",{"alt":7,"src":89},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/a39a96eae4944368b9d6caabe45392f8.png",[],{"type":17,"tag":25,"props":92,"children":93},{},[94],{"type":17,"tag":52,"props":95,"children":96},{},[97],{"type":23,"value":98},"Network Structure",{"type":17,"tag":25,"props":100,"children":101},{},[102],{"type":23,"value":103},"An image captioning algorithm usually uses an encoder-decoder structure. As shown in the figure, the encoder extracts visual features from an image, captures relationships between the visual features by using multiple attention mechanisms, including a bilinear pooling module and an attentive capsule module, and generates an output. The bilinear pooling module obtains second-order interactions between features by performing squash-reward operations on them. The attentive capsule module treats each visual feature as a capsule so as to capture positional relationships between the features. The decoder uses a recurrent neural network to generate a word for each visual feature and decodes the words in an autoregressive manner to compose the final description.",{"type":17,"tag":25,"props":105,"children":106},{},[107],{"type":17,"tag":87,"props":108,"children":110},{"alt":7,"src":109},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/e649ea8ca02d4575b88f8bba38b07532.png",[],{"type":17,"tag":25,"props":112,"children":113},{},[114],{"type":23,"value":115},"Framework of a capsnet-based image captioning algorithm",{"type":17,"tag":25,"props":117,"children":118},{},[119],{"type":23,"value":120},"The algorithm uses cross-entropy loss to supervise training. 
The loss function is expressed as:",{"type":17,"tag":25,"props":122,"children":123},{},[124],{"type":17,"tag":87,"props":125,"children":127},{"alt":7,"src":126},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/e6a4b345943d46afaab7d53bdea1e874.png",[],{"type":17,"tag":25,"props":129,"children":130},{},[131],{"type":17,"tag":52,"props":132,"children":133},{},[134],{"type":23,"value":135},"Development Process",{"type":17,"tag":25,"props":137,"children":138},{},[139],{"type":23,"value":140},"In the implementation, we first adapt the dataset, then develop the network models, and finally write the training code. MindSpore supports the required operators and provides simple APIs. Both low-level matrix computation and high-level network model encapsulation are well supported, which greatly facilitates development and debugging.",{"type":17,"tag":25,"props":142,"children":143},{},[144],{"type":17,"tag":52,"props":145,"children":146},{},[147],{"type":23,"value":148},"3.1 Adapting the Dataset",{"type":17,"tag":25,"props":150,"children":151},{},[152,154,159,161,166],{"type":23,"value":153},"Image captioning involves processing both images and text, so the training data comes in several formats. The image features are matrices pre-extracted by Faster R-CNN, the vocabulary is stored in a TXT file, and the descriptions are stored in an array. We use a custom dataset to generate training batches. 
First, define a data class, implement ",{"type":17,"tag":52,"props":155,"children":156},{},[157],{"type":23,"value":158},"__getitem__",{"type":23,"value":160}," in the class as an iterable object to return data, and use ",{"type":17,"tag":52,"props":162,"children":163},{},[164],{"type":23,"value":165},"GeneratorDataset",{"type":23,"value":167}," to assemble the data class for shuffling and batching.",{"type":17,"tag":25,"props":169,"children":170},{},[171],{"type":23,"value":172},"coco_train_set = CocoDataset(",{"type":17,"tag":25,"props":174,"children":175},{},[176],{"type":23,"value":177},"image_ids_path = os.path.join(args.dataset_path, 'txt', 'coco_train_image_id.txt'),",{"type":17,"tag":25,"props":179,"children":180},{},[181],{"type":23,"value":182},"input_seq = os.path.join(args.dataset_path, 'sent', 'coco_train_input.pkl'),",{"type":17,"tag":25,"props":184,"children":185},{},[186],{"type":23,"value":187},"target_seq = os.path.join(args.dataset_path, 'sent', 'coco_train_target.pkl'),",{"type":17,"tag":25,"props":189,"children":190},{},[191],{"type":23,"value":192},"att_feats_folder = os.path.join(args.dataset_path, 'feature', 'up_down_36'),",{"type":17,"tag":25,"props":194,"children":195},{},[196],{"type":23,"value":197},"seq_per_img = args.seq_per_img,",{"type":17,"tag":25,"props":199,"children":200},{},[201],{"type":23,"value":202},"max_feat_num = -1)",{"type":17,"tag":25,"props":204,"children":205},{},[206],{"type":23,"value":207},"dataset_train = ds.GeneratorDataset(coco_train_set,",{"type":17,"tag":25,"props":209,"children":210},{},[211],{"type":23,"value":212},"column_names=[\"indices\", \"input_seq\", \"target_seq\", 
\"att_feats\"],",{"type":17,"tag":25,"props":214,"children":215},{},[216],{"type":23,"value":217},"shuffle=True,",{"type":17,"tag":25,"props":219,"children":220},{},[221],{"type":23,"value":222},"python_multiprocessing=True,",{"type":17,"tag":25,"props":224,"children":225},{},[226],{"type":23,"value":227},"num_parallel_workers=args.works)",{"type":17,"tag":25,"props":229,"children":230},{},[231],{"type":23,"value":232},"dataset_train = dataset_train.batch(args.batch_size, drop_remainder=True)",{"type":17,"tag":25,"props":234,"children":235},{},[236],{"type":17,"tag":52,"props":237,"children":238},{},[239],{"type":23,"value":240},"3.2 Developing the Network Models",{"type":17,"tag":25,"props":242,"children":243},{},[244,246,251,253,258,260,265,267,272],{"type":23,"value":245},"Network models are developed based on the ",{"type":17,"tag":52,"props":247,"children":248},{},[249],{"type":23,"value":250},"nn.Cell",{"type":23,"value":252}," class. The overall model class ",{"type":17,"tag":52,"props":254,"children":255},{},[256],{"type":23,"value":257},"CapsuleXlan",{"type":23,"value":259},", ",{"type":17,"tag":52,"props":261,"children":262},{},[263],{"type":23,"value":264},"Encoder",{"type":23,"value":266},", and ",{"type":17,"tag":52,"props":268,"children":269},{},[270],{"type":23,"value":271},"Decoder",{"type":23,"value":273}," are implemented based on different functions of the network, involving the linear layer, activation layer, matrix operations, and tensor operations. 
The network structure is defined as follows:",{"type":17,"tag":25,"props":275,"children":276},{},[277],{"type":23,"value":278},"class CapsuleXlan(nn.Cell):",{"type":17,"tag":25,"props":280,"children":281},{},[282],{"type":23,"value":283},"def __init__(self):",{"type":17,"tag":25,"props":285,"children":286},{},[287],{"type":23,"value":288},"self.encoder = Encoder()",{"type":17,"tag":25,"props":290,"children":291},{},[292],{"type":23,"value":293},"self.decoder = Decoder()",{"type":17,"tag":25,"props":295,"children":296},{},[297],{"type":23,"value":298},"class Encoder(nn.Cell):",{"type":17,"tag":25,"props":300,"children":301},{},[302],{"type":23,"value":283},{"type":17,"tag":25,"props":304,"children":305},{},[306],{"type":23,"value":307},"self.encoder = nn.CellList([])",{"type":17,"tag":25,"props":309,"children":310},{},[311],{"type":23,"value":312},"for _ in range(layer_num):",{"type":17,"tag":25,"props":314,"children":315},{},[316],{"type":23,"value":317},"sublayer = CapsuleLowRankLayer()",{"type":17,"tag":25,"props":319,"children":320},{},[321],{"type":23,"value":322},"self.encoder.append(sublayer)",{"type":17,"tag":25,"props":324,"children":325},{},[326],{"type":23,"value":327},"class Decoder(nn.Cell):",{"type":17,"tag":25,"props":329,"children":330},{},[331],{"type":23,"value":283},{"type":17,"tag":25,"props":333,"children":334},{},[335],{"type":23,"value":336},"self.decoder = nn.CellList([])",{"type":17,"tag":25,"props":338,"children":339},{},[340],{"type":23,"value":312},{"type":17,"tag":25,"props":342,"children":343},{},[344],{"type":23,"value":345},"sublayer = LowRankLayer()",{"type":17,"tag":25,"props":347,"children":348},{},[349],{"type":23,"value":350},"self.decoder.append(sublayer)",{"type":17,"tag":25,"props":352,"children":353},{},[354],{"type":23,"value":355},"class CapsuleLowRankLayer(nn.Cell):",{"type":17,"tag":25,"props":357,"children":358},{},[359],{"type":23,"value":283},{"type":17,"tag":25,"props":361,"children":362},{},[363],{"type":23,"value":364},"self.attn_net = Capsule()",{"type":17,"tag":25,"props":366,"children":367},{},[368],{"type":23,"value":369},"class LowRankLayer(nn.Cell):",{"type":17,"tag":25,"props":371,"children":372},{},[373],{"type":23,"value":283},{"type":17,"tag":25,"props":375,"children":376},{},[377],{"type":23,"value":378},"self.attn_net = SCAtt()",{"type":17,"tag":25,"props":380,"children":381},{},[382],{"type":23,"value":383},"class SCAtt(nn.Cell):",{"type":17,"tag":25,"props":385,"children":386},{},[387],{"type":23,"value":283},{"type":17,"tag":25,"props":389,"children":390},{},[391],{"type":23,"value":392},"class Capsule(nn.Cell):",{"type":17,"tag":25,"props":394,"children":395},{},[396],{"type":23,"value":283},{"type":17,"tag":25,"props":398,"children":399},{},[400,404,406,410,412,416,418,422,424,428,430,434,436,441,443,447,449,454,456,460],{"type":17,"tag":52,"props":401,"children":402},{},[403],{"type":23,"value":257},{"type":23,"value":405}," contains ",{"type":17,"tag":52,"props":407,"children":408},{},[409],{"type":23,"value":264},{"type":23,"value":411}," and ",{"type":17,"tag":52,"props":413,"children":414},{},[415],{"type":23,"value":271},{"type":23,"value":417},". ",{"type":17,"tag":52,"props":419,"children":420},{},[421],{"type":23,"value":264},{"type":23,"value":423}," encodes visual information and ",{"type":17,"tag":52,"props":425,"children":426},{},[427],{"type":23,"value":271},{"type":23,"value":429}," generates descriptions. ",{"type":17,"tag":52,"props":431,"children":432},{},[433],{"type":23,"value":264},{"type":23,"value":435}," contains multiple ",{"type":17,"tag":52,"props":437,"children":438},{},[439],{"type":23,"value":440},"CapsuleLowRankLayer",{"type":23,"value":442}," layers. 
Each ",{"type":17,"tag":52,"props":444,"children":445},{},[446],{"type":23,"value":440},{"type":23,"value":448}," processes features, inputs the processed features to ",{"type":17,"tag":52,"props":450,"children":451},{},[452],{"type":23,"value":453},"Capsule",{"type":23,"value":455}," for calculation, and returns the processed results to the upper layer. This rule applies to ",{"type":17,"tag":52,"props":457,"children":458},{},[459],{"type":23,"value":271},{"type":23,"value":461}," as well.",{"type":17,"tag":25,"props":463,"children":464},{},[465,467,471,473,478],{"type":23,"value":466},"For specific operations, take ",{"type":17,"tag":52,"props":468,"children":469},{},[470],{"type":23,"value":271},{"type":23,"value":472}," as an example. First, we define the sublayer and the corresponding linear layer, implement the operator operation to be used in advance, and then use the predefined layers in ",{"type":17,"tag":52,"props":474,"children":475},{},[476],{"type":23,"value":477},"construct",{"type":23,"value":479}," to process the input. 
The calculation result is returned after the linear layer and layer normalization.",{"type":17,"tag":25,"props":481,"children":482},{},[483],{"type":23,"value":327},{"type":17,"tag":25,"props":485,"children":486},{},[487],{"type":23,"value":488},"def __init__(self, layer_num, embed_dim, att_heads, att_mid_dim, att_mid_drop):",{"type":17,"tag":25,"props":490,"children":491},{},[492],{"type":23,"value":493},"super(Decoder, self).__init__()",{"type":17,"tag":25,"props":495,"children":496},{},[497],{"type":23,"value":336},{"type":17,"tag":25,"props":499,"children":500},{},[501],{"type":23,"value":312},{"type":17,"tag":25,"props":503,"children":504},{},[505],{"type":23,"value":506},"sublayer = LowRankLayer(embed_dim=embed_dim, att_heads=8,",{"type":17,"tag":25,"props":508,"children":509},{},[510],{"type":23,"value":511},"att_mid_dim=[128, 64, 128], att_mid_drop=0.9)",{"type":17,"tag":25,"props":513,"children":514},{},[515],{"type":23,"value":350},{"type":17,"tag":25,"props":517,"children":518},{},[519],{"type":23,"value":520},"self.proj = nn.Dense(embed_dim * (layer_num + 1), embed_dim)",{"type":17,"tag":25,"props":522,"children":523},{},[524],{"type":23,"value":525},"self.layer_norm = nn.LayerNorm([embed_dim])",{"type":17,"tag":25,"props":527,"children":528},{},[529],{"type":23,"value":530},"self.concat_last = ops.Concat(-1)",{"type":17,"tag":25,"props":532,"children":533},{},[534],{"type":23,"value":535},"def construct(self, gv_feat, att_feats, att_mask):",{"type":17,"tag":25,"props":537,"children":538},{},[539],{"type":23,"value":540},"batch_size = att_feats.shape[0]",{"type":17,"tag":25,"props":542,"children":543},{},[544],{"type":23,"value":545},"feat_arr = [gv_feat]",{"type":17,"tag":25,"props":547,"children":548},{},[549],{"type":23,"value":550},"for i, decoder_layer in enumerate(self.decoder):",{"type":17,"tag":25,"props":552,"children":553},{},[554],{"type":23,"value":555},"gv_feat = decoder_layer(gv_feat, att_feats, 
att_mask,",{"type":17,"tag":25,"props":557,"children":558},{},[559],{"type":23,"value":560},"gv_feat, att_feats)",{"type":17,"tag":25,"props":562,"children":563},{},[564],{"type":23,"value":565},"feat_arr.append(gv_feat)",{"type":17,"tag":25,"props":567,"children":568},{},[569],{"type":23,"value":570},"gv_feat = self.concat_last(feat_arr)",{"type":17,"tag":25,"props":572,"children":573},{},[574],{"type":23,"value":575},"gv_feat = self.proj(gv_feat)",{"type":17,"tag":25,"props":577,"children":578},{},[579],{"type":23,"value":580},"gv_feat = self.layer_norm(gv_feat)",{"type":17,"tag":25,"props":582,"children":583},{},[584],{"type":23,"value":585},"return gv_feat, att_feats",{"type":17,"tag":25,"props":587,"children":588},{},[589],{"type":23,"value":590},"We then implement the cross-entropy loss and assemble it with the network model into a class.",{"type":17,"tag":25,"props":592,"children":593},{},[594],{"type":23,"value":595},"class CapsuleXlanWithLoss(nn.Cell):",{"type":17,"tag":25,"props":597,"children":598},{},[599],{"type":23,"value":600},"def __init__(self, model):",{"type":17,"tag":25,"props":602,"children":603},{},[604],{"type":23,"value":605},"super(CapsuleXlanWithLoss, self).__init__()",{"type":17,"tag":25,"props":607,"children":608},{},[609],{"type":23,"value":610},"self.model = model",{"type":17,"tag":25,"props":612,"children":613},{},[614],{"type":23,"value":615},"self.ce = nn.SoftmaxCrossEntropyWithLogits(sparse=True)",{"type":17,"tag":25,"props":617,"children":618},{},[619],{"type":23,"value":620},"def construct(self, indices, input_seq, target_seq, att_feats):",{"type":17,"tag":25,"props":622,"children":623},{},[624],{"type":23,"value":625},"logit = self.model(input_seq, att_feats)",{"type":17,"tag":25,"props":627,"children":628},{},[629],{"type":23,"value":630},"logit = logit.view((-1, logit.shape[-1]))",{"type":17,"tag":25,"props":632,"children":633},{},[634],{"type":23,"value":635},"target_seq = 
target_seq.view((-1))",{"type":17,"tag":25,"props":637,"children":638},{},[639],{"type":23,"value":640},"mask = (target_seq > -1).astype(\"float32\")",{"type":17,"tag":25,"props":642,"children":643},{},[644],{"type":23,"value":645},"loss = self.ce(logit, target_seq)",{"type":17,"tag":25,"props":647,"children":648},{},[649],{"type":23,"value":650},"loss = ops.ReduceSum(False)(loss * mask) / mask.sum()",{"type":17,"tag":25,"props":652,"children":653},{},[654],{"type":23,"value":655},"return loss",{"type":17,"tag":25,"props":657,"children":658},{},[659],{"type":17,"tag":52,"props":660,"children":661},{},[662],{"type":23,"value":663},"3.3 Performing Training",{"type":17,"tag":25,"props":665,"children":666},{},[667,669,674,676,681,683,688,690,695,697,702],{"type":23,"value":668},"The dataset and network model have been prepared. In the training process, we only need to assemble the model, optimizer, dataset, and callback functions. Here, we use the Adam optimizer, define the callback functions, and use ",{"type":17,"tag":52,"props":670,"children":671},{},[672],{"type":23,"value":673},"LossMonitor",{"type":23,"value":675}," to monitor the loss function, ",{"type":17,"tag":52,"props":677,"children":678},{},[679],{"type":23,"value":680},"TimeMonitor",{"type":23,"value":682}," to measure the time taken by each step, ",{"type":17,"tag":52,"props":684,"children":685},{},[686],{"type":23,"value":687},"ModelCheckpoint",{"type":23,"value":689}," to save the model, and ",{"type":17,"tag":52,"props":691,"children":692},{},[693],{"type":23,"value":694},"SummaryCollector",{"type":23,"value":696}," to save the visualized data. 
Finally, ",{"type":17,"tag":52,"props":698,"children":699},{},[700],{"type":23,"value":701},"nn.Model",{"type":23,"value":703}," is used to assemble the four parts for training, and MindSpore Insight can be used to observe the training loss and parameter changes.",{"type":17,"tag":25,"props":705,"children":706},{},[707],{"type":23,"value":708},"net = CapsuleXlan()",{"type":17,"tag":25,"props":710,"children":711},{},[712],{"type":23,"value":713},"net = CapsuleXlanWithLoss(net)",{"type":17,"tag":25,"props":715,"children":716},{},[717],{"type":23,"value":718},"warmup_lr = nn.WarmUpLR(args.lr, args.warmup)",{"type":17,"tag":25,"props":720,"children":721},{},[722],{"type":23,"value":723},"optim = nn.Adam(params=net.trainable_params(), learning_rate=warmup_lr, beta1=0.9, beta2=0.98, eps=1.0e-9)",{"type":17,"tag":25,"props":725,"children":726},{},[727],{"type":23,"value":728},"model = ms.Model(network=net, optimizer=optim)",{"type":17,"tag":25,"props":730,"children":731},{},[732],{"type":23,"value":733},"loss_cb = LossMonitor(per_print_times=1)",{"type":17,"tag":25,"props":735,"children":736},{},[737],{"type":23,"value":738},"time_cb = TimeMonitor(data_size=step_per_epoch)",{"type":17,"tag":25,"props":740,"children":741},{},[742],{"type":23,"value":743},"ckpoint_cb = ModelCheckpoint(prefix='ACN',",{"type":17,"tag":25,"props":745,"children":746},{},[747],{"type":23,"value":748},"directory=os.path.join(args.result_folder, 'checkpoints'))",{"type":17,"tag":25,"props":750,"children":751},{},[752],{"type":23,"value":753},"summary_cb = SummaryCollector(summary_dir=os.path.join(args.result_folder, 'summarys'))",{"type":17,"tag":25,"props":755,"children":756},{},[757],{"type":23,"value":758},"cbs = [loss_cb, time_cb, ckpoint_cb, summary_cb]",{"type":17,"tag":25,"props":760,"children":761},{},[762],{"type":23,"value":763},"model.train(epoch=args.epochs, train_dataset=dataset_train, 
callbacks=cbs)",{"type":17,"tag":25,"props":765,"children":766},{},[767],{"type":17,"tag":52,"props":768,"children":769},{},[770],{"type":23,"value":771},"Model Effect Evaluation",{"type":17,"tag":25,"props":773,"children":774},{},[775],{"type":23,"value":776},"The model is trained and tested on the Ascend software and hardware platform: Ascend 910 serves as the training device, and MindSpore, running on CANN, serves as the framework. Loss changes are monitored and visualized in real time by MindSpore Insight. Nearly a million data records are used for training, yielding good training and inference results and generating complete, accurate descriptions for images.",{"type":17,"tag":25,"props":778,"children":779},{},[780],{"type":17,"tag":87,"props":781,"children":783},{"alt":7,"src":782},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2023/09/14/6b03f1437b0140f0aab2c30f91c9a13b.png",[],{"title":7,"searchDepth":785,"depth":785,"links":786},4,[],"markdown","content:news:en:2766.md","content","news/en/2766.md","news/en/2766","md",1776506045918]