[{"data":1,"prerenderedAt":594},["ShallowReactive",2],{"content-query-WnL1lqjDP9":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":588,"_id":589,"_source":590,"_file":591,"_stem":592,"_extension":593},"/technology-blogs/en/3023","en",false,"","Project Introduction | MindSpore-based Malaria Detection - Interpretation of Malaria Pathological Sections","This blog introduces a MindSpore-based malaria detection project.","2024-02-19","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/8b856c79b0444b2fbc7b6f6eafd97dff.png","technology-blogs",{"type":14,"children":15,"toc":581},"root",[16,24,30,39,44,49,54,62,70,78,83,91,96,104,112,117,122,127,132,140,145,152,157,164,169,179,188,203,210,256,263,272,291,302,312,317,324,353,361,368,375,382,397,411,418,423,433,447,454,459,485,497,502,507,512,517,525,530,535,540,548,559,570],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"project-introduction-mindspore-based-malaria-detection-interpretation-of-malaria-pathological-sections",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"Author: Re Dai Yu Source: Zhihu",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":17,"tag":34,"props":35,"children":36},"strong",{},[37],{"type":23,"value":38},"Abstract",{"type":17,"tag":25,"props":40,"children":41},{},[42],{"type":23,"value":43},"Malaria is widespread globally, affecting approximately 40% of the world's population. It is particularly prevalent in regions such as Africa, Southeast Asia, Central Asia, and Central and South America. In Africa, malaria is a severe health concern, with around 500 million people residing in malaria-endemic areas. 
Globally, about 100 million people are infected with malaria each year, many of whom die from it, and 90% of these cases occur in Africa.",{"type":17,"tag":25,"props":45,"children":46},{},[47],{"type":23,"value":48},"In many parts of Africa, poverty persists, and inadequate sanitation and medical facilities hinder effective disease identification, including malaria. With the rapid development of artificial intelligence, computer-aided medical image analysis has been booming and now supplements clinical treatment in underdeveloped areas. Leveraging machine learning and deep learning, computer-aided malaria diagnosis is now achievable.",{"type":17,"tag":25,"props":50,"children":51},{},[52],{"type":23,"value":53},"Using thin blood smears from malaria patients as datasets, a model can be trained to determine whether an individual has malaria. Specifically, image analysis is performed on a test sample to determine whether it comes from a malaria patient, enabling rapid diagnosis of malaria. This technology is particularly beneficial in disadvantaged regions as it helps to fill the gaps in local medical resources and personnel, ensuring accurate and efficient malaria diagnosis and assisting in follow-up treatments.",{"type":17,"tag":25,"props":55,"children":56},{},[57],{"type":17,"tag":34,"props":58,"children":59},{},[60],{"type":23,"value":61},"01 Project Design",{"type":17,"tag":25,"props":63,"children":64},{},[65],{"type":17,"tag":34,"props":66,"children":67},{},[68],{"type":23,"value":69},"1.1 Model Principles",{"type":17,"tag":25,"props":71,"children":72},{},[73],{"type":17,"tag":34,"props":74,"children":75},{},[76],{"type":23,"value":77},"1.1.1 Introduction",{"type":17,"tag":25,"props":79,"children":80},{},[81],{"type":23,"value":82},"The model used in this project is Vision Transformer (ViT), a Transformer-based model proposed by a Google team in 2020 for image classification. 
ViT has been adopted for various vision tasks due to its simplicity, efficiency, and scalability, making it a benchmark in the field of computer vision. ViT bridges natural language processing and computer vision, and it can achieve outstanding performance on image classification tasks without relying on convolution operations.",{"type":17,"tag":25,"props":84,"children":85},{},[86],{"type":17,"tag":34,"props":87,"children":88},{},[89],{"type":23,"value":90},"1.1.2 Model Structure",{"type":17,"tag":25,"props":92,"children":93},{},[94],{"type":23,"value":95},"The main structure of ViT is based on the encoder part of the Transformer model, but the order of some components is adjusted. For example, the Normalization layer is positioned differently from the standard Transformer. For details about its structure, see the following figure.",{"type":17,"tag":25,"props":97,"children":98},{},[99],{"type":17,"tag":100,"props":101,"children":103},"img",{"alt":7,"src":102},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/d868b1bbcf2649188524cc73a44a9f85.png",[],{"type":17,"tag":25,"props":105,"children":106},{},[107],{"type":17,"tag":34,"props":108,"children":109},{},[110],{"type":23,"value":111},"1.1.3 Model Features",{"type":17,"tag":25,"props":113,"children":114},{},[115],{"type":23,"value":116},"ViT is mainly applied to the image classification field. Compared with the conventional Transformer model, it has the following features:",{"type":17,"tag":25,"props":118,"children":119},{},[120],{"type":23,"value":121},"· After a source image is divided into multiple patches, each two-dimensional patch (channels aside) is flattened into a one-dimensional vector. 
This vector, together with a class (category) vector and position vectors, forms the model input.",{"type":17,"tag":25,"props":123,"children":124},{},[125],{"type":23,"value":126},"· Although its block structure is slightly different from that of Transformer, as mentioned above, its main structure is still the Multi-head Attention structure.",{"type":17,"tag":25,"props":128,"children":129},{},[130],{"type":23,"value":131},"· A fully connected layer follows the stacked blocks and takes the output corresponding to the class vector as its input for classification. Generally, the last fully connected layer is called the head, and the Transformer encoder is called the backbone.",{"type":17,"tag":25,"props":133,"children":134},{},[135],{"type":17,"tag":34,"props":136,"children":137},{},[138],{"type":23,"value":139},"1.1.4 Model Parsing",{"type":17,"tag":25,"props":141,"children":142},{},[143],{"type":23,"value":144},"The Transformer model was first proposed in a paper published in 2017. The encoder-decoder structure based on the Attention mechanism proposed in that paper has achieved great success in the field of natural language processing. The model structure is shown in the following figure.",{"type":17,"tag":25,"props":146,"children":147},{},[148],{"type":17,"tag":100,"props":149,"children":151},{"alt":7,"src":150},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/8e890060e8514bc393f0ae152b6e34da.png",[],{"type":17,"tag":25,"props":153,"children":154},{},[155],{"type":23,"value":156},"The main structure consists of multiple encoders and decoders. 
The following figure shows their detailed structures.",{"type":17,"tag":25,"props":158,"children":159},{},[160],{"type":17,"tag":100,"props":161,"children":163},{"alt":7,"src":162},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/947af94069024fc0ac9206ea2533bcf1.png",[],{"type":17,"tag":25,"props":165,"children":166},{},[167],{"type":23,"value":168},"The encoder and decoder consist of many modules, such as the Multi-Head Attention layer, Feed Forward layer, Normalization layer, and Residual Connection (\"Add\" in the figure). The most important of these is the Multi-Head Attention structure, which originates from the Self-Attention mechanism and consists of multiple Self-Attentions in parallel.",{"type":17,"tag":170,"props":171,"children":173},"h3",{"id":172},"_12-model-training-and-effect",[174],{"type":17,"tag":34,"props":175,"children":176},{},[177],{"type":23,"value":178},"1.2 Model Training and Effect",{"type":17,"tag":170,"props":180,"children":182},{"id":181},"_121-model-training",[183],{"type":17,"tag":34,"props":184,"children":185},{},[186],{"type":23,"value":187},"1.2.1 Model Training",{"type":17,"tag":25,"props":189,"children":190},{},[191,193,201],{"type":23,"value":192},"The training dataset used by the model originates from the National Library of Medicine. To download the dataset, visit ",{"type":17,"tag":194,"props":195,"children":199},"a",{"href":196,"rel":197},"https://lhncbc.nlm.nih.gov/LHC-research/LHC-projects/image-processing/malaria-datasheet.html",[198],"nofollow",[200],{"type":23,"value":196},{"type":23,"value":202},". After you download the dataset, preprocess it using MindSpore's dataset APIs. Then, after importing the data, build a ViT model. 
For details about the model build process, see the following figure.",{"type":17,"tag":25,"props":204,"children":205},{},[206],{"type":17,"tag":100,"props":207,"children":209},{"alt":7,"src":208},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/a114bb19bde94b5980303523a9162e45.png",[],{"type":17,"tag":25,"props":211,"children":212},{},[213,215,220,222,227,229,234,236,241,243,248,249,254],{"type":23,"value":214},"It takes a long time to train a complete ViT model. In practice, you are advised to adjust the epoch size based on project requirements. Due to the limited computing power available for this project, ",{"type":17,"tag":34,"props":216,"children":217},{},[218],{"type":23,"value":219},"epoch_size",{"type":23,"value":221}," is set to ",{"type":17,"tag":34,"props":223,"children":224},{},[225],{"type":23,"value":226},"10",{"type":23,"value":228},", ",{"type":17,"tag":34,"props":230,"children":231},{},[232],{"type":23,"value":233},"momentum",{"type":23,"value":235}," to ",{"type":17,"tag":34,"props":237,"children":238},{},[239],{"type":23,"value":240},"0.9",{"type":23,"value":242},", and ",{"type":17,"tag":34,"props":244,"children":245},{},[246],{"type":23,"value":247},"num_classes",{"type":23,"value":235},{"type":17,"tag":34,"props":250,"children":251},{},[252],{"type":23,"value":253},"1000",{"type":23,"value":255},". The following figure shows the training process.",{"type":17,"tag":25,"props":257,"children":258},{},[259],{"type":17,"tag":100,"props":260,"children":262},{"alt":7,"src":261},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/7242d613780a4a12b0a8b81121302f9e.png",[],{"type":17,"tag":170,"props":264,"children":266},{"id":265},"_222-model-effect-evaluation",[267],{"type":17,"tag":34,"props":268,"children":269},{},[270],{"type":23,"value":271},"1.2.2 Model Effect Evaluation",{"type":17,"tag":25,"props":273,"children":274},{},[275,277,282,284,289],{"type":23,"value":276},"1. 
The ",{"type":17,"tag":34,"props":278,"children":279},{},[280],{"type":23,"value":281},"Top_1_Accuracy",{"type":23,"value":283}," and ",{"type":17,"tag":34,"props":285,"children":286},{},[287],{"type":23,"value":288},"Top_5_Accuracy",{"type":23,"value":290}," metrics, which are commonly used in the industry, are used to evaluate the model performance.",{"type":17,"tag":25,"props":292,"children":293},{},[294,296,300],{"type":23,"value":295},"· ",{"type":17,"tag":34,"props":297,"children":298},{},[299],{"type":23,"value":281},{"type":23,"value":301}," measures whether the top-ranked predicted category matches the actual result. That is, the label with the highest probability in the final probability vector serves as the prediction result. If this category is correct, the prediction is correct. Otherwise, the prediction is incorrect.",{"type":17,"tag":25,"props":303,"children":304},{},[305,306,310],{"type":23,"value":295},{"type":17,"tag":34,"props":307,"children":308},{},[309],{"type":23,"value":288},{"type":23,"value":311}," measures whether the actual result appears among the top 5 predicted categories. That is, among the 5 categories with the largest predicted probabilities, the prediction is correct as long as the actual category occurs. 
Otherwise, the prediction is incorrect.",{"type":17,"tag":25,"props":313,"children":314},{},[315],{"type":23,"value":316},"The following lists the model evaluation results of this project.",{"type":17,"tag":25,"props":318,"children":319},{},[320],{"type":17,"tag":100,"props":321,"children":323},{"alt":7,"src":322},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/3b4b663f62224f63b7f76e634a215f61.png",[],{"type":17,"tag":25,"props":325,"children":326},{},[327,329,333,335,340,341,345,346,351],{"type":23,"value":328},"As shown in the preceding figure, ",{"type":17,"tag":34,"props":330,"children":331},{},[332],{"type":23,"value":281},{"type":23,"value":334}," is ",{"type":17,"tag":34,"props":336,"children":337},{},[338],{"type":23,"value":339},"0.8081",{"type":23,"value":283},{"type":17,"tag":34,"props":342,"children":343},{},[344],{"type":23,"value":288},{"type":23,"value":334},{"type":17,"tag":34,"props":347,"children":348},{},[349],{"type":23,"value":350},"1.0",{"type":23,"value":352},", indicating that the model has excellent accuracy.",{"type":17,"tag":25,"props":354,"children":355},{},[356],{"type":17,"tag":34,"props":357,"children":358},{},[359],{"type":23,"value":360},"2. 
Display of Output Images",{"type":17,"tag":25,"props":362,"children":363},{},[364],{"type":17,"tag":100,"props":365,"children":367},{"alt":7,"src":366},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/3209e3cb23d24941b079618f5727aa5c.png",[],{"type":17,"tag":25,"props":369,"children":370},{},[371],{"type":17,"tag":100,"props":372,"children":374},{"alt":7,"src":373},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/33cee2e579384ebeae03ef5e8b973f11.png",[],{"type":17,"tag":25,"props":376,"children":377},{},[378],{"type":17,"tag":100,"props":379,"children":381},{"alt":7,"src":380},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/f739f45dc47f456e9f860a398b2b0a00.png",[],{"type":17,"tag":25,"props":383,"children":384},{},[385,390,392],{"type":17,"tag":34,"props":386,"children":387},{},[388],{"type":23,"value":389},"02",{"type":23,"value":391}," ",{"type":17,"tag":34,"props":393,"children":394},{},[395],{"type":23,"value":396},"MindSpore Installation",{"type":17,"tag":25,"props":398,"children":399},{},[400,402,409],{"type":23,"value":401},"Select the version appropriate for your operating system on the ",{"type":17,"tag":194,"props":403,"children":406},{"href":404,"rel":405},"http://mindspore.cn/install/",[198],[407],{"type":23,"value":408},"MindSpore official website",{"type":23,"value":410},".",{"type":17,"tag":25,"props":412,"children":413},{},[414],{"type":17,"tag":100,"props":415,"children":417},{"alt":7,"src":416},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/0820b513128b43ce9d17201f2889d5f7.png",[],{"type":17,"tag":25,"props":419,"children":420},{},[421],{"type":23,"value":422},"Note that the version used in this project is 2.1.0. 
The following is the pip installation command:",{"type":17,"tag":424,"props":425,"children":427},"pre",{"code":426},"pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.1.0/MindSpore/unified/x86_64/mindspore-2.1.0-cp38-cp38-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple\n",[428],{"type":17,"tag":429,"props":430,"children":431},"code",{"__ignoreMap":7},[432],{"type":23,"value":426},{"type":17,"tag":25,"props":434,"children":435},{},[436,441,442],{"type":17,"tag":34,"props":437,"children":438},{},[439],{"type":23,"value":440},"03",{"type":23,"value":391},{"type":17,"tag":34,"props":443,"children":444},{},[445],{"type":23,"value":446},"Application of the Project",{"type":17,"tag":25,"props":448,"children":449},{},[450],{"type":17,"tag":100,"props":451,"children":453},{"alt":7,"src":452},"https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2024/03/15/45d9abb786ea44508cca475394189dcb.png",[],{"type":17,"tag":25,"props":455,"children":456},{},[457],{"type":23,"value":458},"● The structure of this project is shown in the preceding figure.",{"type":17,"tag":25,"props":460,"children":461},{},[462,464,469,471,476,478,483],{"type":23,"value":463},"● The datasets, including the test dataset (",{"type":17,"tag":34,"props":465,"children":466},{},[467],{"type":23,"value":468},"test",{"type":23,"value":470},") and training dataset (",{"type":17,"tag":34,"props":472,"children":473},{},[474],{"type":23,"value":475},"train",{"type":23,"value":477},"), are stored in the ",{"type":17,"tag":34,"props":479,"children":480},{},[481],{"type":23,"value":482},"data",{"type":23,"value":484}," folder.",{"type":17,"tag":25,"props":486,"children":487},{},[488,490,495],{"type":23,"value":489},"● The ",{"type":17,"tag":34,"props":491,"children":492},{},[493],{"type":23,"value":494},"ViT",{"type":23,"value":496}," folder contains the trained models that are displayed on the web 
page.",{"type":17,"tag":25,"props":498,"children":499},{},[500],{"type":23,"value":501},"● The Python files include a test program, a training program, and a verification program.",{"type":17,"tag":25,"props":503,"children":504},{},[505],{"type":23,"value":506},"● If you are a beginner, remember to modify the file paths before running the code. Most errors are caused by an incorrect environment configuration (e.g., a wrong version) or incorrectly modified file paths (e.g., paths left unmodified, or slashes (/) and backslashes (\\) not converted).",{"type":17,"tag":25,"props":508,"children":509},{},[510],{"type":23,"value":511},"● Feel free to ask any further questions.",{"type":17,"tag":25,"props":513,"children":514},{},[515],{"type":23,"value":516},"● This project is fully open source under the MIT license. In short, developers may keep modified versions closed-source and are not required to document their modifications; only the original copyright and license notice must be retained.",{"type":17,"tag":25,"props":518,"children":519},{},[520],{"type":17,"tag":34,"props":521,"children":522},{},[523],{"type":23,"value":524},"04 Summary",{"type":17,"tag":25,"props":526,"children":527},{},[528],{"type":23,"value":529},"MindSpore is used to implement the data processing, model training, and model inference of this project. Top_1_Accuracy=0.8081 and Top_5_Accuracy=1.0 demonstrate that the model has excellent accuracy. 
In addition, when an image is input for inference, the prediction result can be viewed.",{"type":17,"tag":25,"props":531,"children":532},{},[533],{"type":23,"value":534},"By walking through this deep learning-based malaria pathology analysis project, this blog also explains how to set up a deep learning environment and build a ViT model.",{"type":17,"tag":25,"props":536,"children":537},{},[538],{"type":23,"value":539},"This project can also be deployed on MindSpore-powered hardware developer boards to build inference applications. For details, see the official MindSpore documentation.",{"type":17,"tag":25,"props":541,"children":542},{},[543],{"type":17,"tag":34,"props":544,"children":545},{},[546],{"type":23,"value":547},"References",{"type":17,"tag":25,"props":549,"children":550},{},[551,553],{"type":23,"value":552},"[1]",{"type":17,"tag":194,"props":554,"children":557},{"href":555,"rel":556},"https://www.mindspore.cn/tutorials/application/en/r2.1/cv/vit.html",[198],[558],{"type":23,"value":555},{"type":17,"tag":25,"props":560,"children":561},{},[562,564],{"type":23,"value":563},"[2]",{"type":17,"tag":194,"props":565,"children":568},{"href":566,"rel":567},"https://www.mindspore.cn/tutorials/application/en/r2.0/cv/vit.html",[198],[569],{"type":23,"value":566},{"type":17,"tag":25,"props":571,"children":572},{},[573,575],{"type":23,"value":574},"[3]",{"type":17,"tag":194,"props":576,"children":579},{"href":577,"rel":578},"https://www.mindspore.cn/install/en",[198],[580],{"type":23,"value":577},{"title":7,"searchDepth":582,"depth":582,"links":583},4,[584,586,587],{"id":172,"depth":585,"text":178},3,{"id":181,"depth":585,"text":187},{"id":265,"depth":585,"text":271},"markdown","content:technology-blogs:en:3023.md","content","technology-blogs/en/3023.md","technology-blogs/en/3023","md",1776506109571]