# Case Study | A Handwritten Chinese Character Training and Recognition System

Implemented by Prof. Tonghua Su's team at Harbin Institute of Technology, based on MindSpore. Published 2020-10-16.

## 1. Introduction

This project is a handwritten-text recognition application: a photo-based handwritten Chinese character recognition system built on the MindSpore AI computing framework and Atlas hardware. Pointed at several characters written on paper, the system captures video with a camera, detects the character regions in real time, and reports the recognized class of each character. It covers the complete training and application pipeline: model training in the cloud, model conversion, model deployment, camera image capture, on-device inference, and result display. The model is a deep neural network. Deep learning is widely applied to text recognition, where large multi-class classification is a central problem; deep models, however, are structurally complex, and for a typical multi-class task the number of parameters tends to grow super-linearly with the number of classes. This project must distinguish 3,755 character classes. Model training is the slowest stage of the pipeline and determines recognition accuracy, and inference dominates the recognition flow, so the speed and accuracy of these two stages are critical to the user experience. Achieving a high-performance, high-accuracy, practical solution is therefore challenging.

## 2. Purpose and significance

Writing Chinese characters by hand is among the most natural skills of nearly everyone in China, and many everyday situations call for recognizing characters written on paper or entering them into a computer. The picture below shows a character a colleague asked how to pronounce; a system that can recognize it answers the question.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/110b38bfad504756b7f39f1ceca232e0.png)

A handwritten character encountered in daily life

This project trains the handwritten-character model with the MindSpore AI computing framework on the Huawei Cloud ModelArts platform, then converts the model with the ATC model conversion tool and deploys it to Atlas through the ACL interface, producing a photo-based handwritten character recognition system. The system captures video or images of several characters written on paper, detects the handwriting regions in real time, and reports the recognized classes. The single-character version trains on a 3,755-class handwritten character dataset with MindSpore, converts the trained model into an offline inference model supported by the Ascend 310, and performs real-time detection and recognition of a small number of handwritten characters on a Huawei Atlas board with a camera. The system is complete, representative, and practical, and meets the need to photograph, detect, and recognize text with a camera in real scenarios.

## 3. System design

The system divides into three main subsystems: data processing, model building, and real-time text perception. The subsystems are relatively independent but exchange data. Data processing covers splitting the handwritten-character dataset, building the new dataset, and character-image preprocessing such as image augmentation; model building and training covers network definition and model training; real-time text perception covers video parsing, single-character detection, image preprocessing, and character recognition and display. The refined overall structure is shown below; each module is described in detail after the overall design flow.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/213e93e43c4840aea8a8385694d0b0c7.png)

Overall functional structure of the system

### 3.1 System flow based on MindSpore

The flow has two stages. The training stage uses MindSpore to produce a customized ResNet model on a subset of HITHCD-2018. The inference stage comprises camera image capture, character detection, image preprocessing, and character recognition, as shown below.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/181ea0a597a94e6d8ae503bb890c993c.png)

System flow based on MindSpore training

### 3.2 Handwriting dataset

#### 3.2.1 Dataset overview

This project uses a subset of HITHCD-2018, a large database for handwritten Chinese character recognition (HCCR) collected by Harbin Institute of Technology from more than 5,346 writers; it is currently the largest such database and covers the most character classes (Tonghua Su et al., "HITHCD-2018: Handwritten Chinese Character Database of 21K-Category", ICDAR, 2019). The subset contains 563,250 samples covering the 3,755 classes of GB2312-1980 Level 1 characters. The training data provides 120 samples per class and the test set 30 per class; the latter can also serve for hyperparameter validation.

#### 3.2.2 Dataset construction

MindSpore typically consumes data in the MindRecord format rather than common image formats such as jpg, jpeg, png, or tif. Compared with individual image files, MindRecord offers high I/O efficiency, concurrent multithreaded reads and writes, lower memory use, and ACID semantics. Because the HITHCD dataset is stored as gnt files, we convert it and write MindRecord files. The gnt layout is shown below: the first 4 bytes hold the byte size of the current sample, the next 2 bytes the code of its label, the following 4 bytes the width and height, and the pixel data comes last; the pattern repeats for every sample.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/b6e3c0eb5c0c4726ae9f36c2f9166a84.png)

Layout of a gnt file

The MindRecord generation flow is shown below. To build the dataset more efficiently, the images are shuffled and preprocessed in a buffer before the MindRecord files are written; the preprocessing steps are described in the next section. The preprocessed gnt-format data is then written out as a MindRecord dataset.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/46359d706ec8435bb106acf94d40386a.png)

MindRecord creation flow

### 3.3 Image preprocessing

Properties such as brightness and contrast strongly affect recognition, and the same handwritten character looks different in different environments. These factors should not change the recognition result, so we preprocess and augment the raw data to reduce their influence and improve the network's generalization. The flow is shown below:

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/b24b0e1d9aa94fa58b0df46b8ebeae97.png)

Data augmentation flow

Otsu binarization splits an image into foreground and background by maximizing the between-class variance. Here its purpose is to keep the gray levels of the handwritten strokes unchanged while turning the background pure white, which makes recognition more robust. Calling threshold(img, img, 0, 255, THRESH_TOZERO | THRESH_OTSU) achieves this, as the figure shows.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/b974a85e28ac4c75a0a3f20e1e9d764d.png)

Otsu binarization (left: original image; right: after Otsu correction)

Grayscale equalization, proposed by Cheng-Lin Liu, Fei Yin et al. in "Online and offline handwritten Chinese character recognition: Benchmarking on new databases", aims to bring the gray levels of the training samples close together and so improve recognition accuracy. For a character sample with pixel values in [0, 255], the mean gray level is computed first; if it exceeds 110, i.e. the image is close to white with light strokes, the strokes are thickened and darkened. The before/after effect is shown below.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/3617b823921c4c29924371b1146e5ddb.png)

Grayscale equalization (left: original image; right: after equalization)

A given MindSpore network requires training samples of a uniform size, so preprocessing also centers, pads, and size-normalizes each character. The main steps are: 1. resize the character as close to the target size as its aspect ratio allows; 2. pad the sample into a square using interpolation. For the character "知" shown below, the original 68×72 image becomes a standard 112×112 image; cvtColor(img, img, COLOR_GRAY2BGR) then expands it to three channels, completing the preprocessing.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/8820513ce06645419446207022e4d073.png)

Center padding and normalization (left: original image; right: preprocessed standard image)

### 3.4 Text detection

Single-character detection runs on the content of onCameraFrame. The whole process appears in the text-detection flow chart, and the effect of the key steps in the figure that follows it. Given the complex backgrounds of real camera scenes and OpenCV's limited region-extraction ability, the handwriting color is fixed to red to simplify contour extraction. Because red is discontinuous in the BGR color space, the image is converted to HSV for color filtering. Concretely, the camera delivers frames in YUV420SP format, which are first converted to BGR, as shown below.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/abce2f0b27194c6691df5ac0ccfdf9ca.png)

Text detection flow

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/cefdd029bbec4e86812ccf534e0cb96c.png)

(a) Original image

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/8deb8d013a3d47ef910961f6b2007e84.png)

(b) HSV image

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/6cf50bf6559d4ad089c1f9c3fe25a0b4.png)

(c) Contour extraction

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/d0597d90dbf84cd1bfdff3dbe382abf4.png)

(d) Contour dilation

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/5de84875e1d746138ba61a3e0731f38b.png)

(e) Region extraction

Key steps of single-character detection

Color extraction then uses H in (170, 180), S in (100, 255), and V in (100, 255) as thresholds, giving figures (b) and (c). The extracted text regions are dilated with OpenCV to separate text from background more clearly, figure (d). Contours are then extracted from the dilated image and the minimal horizontal rectangle of each contour is computed. Because spurious regions can appear and a single character can split into several regions, two contours are judged to belong to the same region by their relative distance, i.e. the distance between them divided by the image diagonal: given a distance threshold, a relative distance below it means the same region and one at or above it means different regions; regions are then merged with a union-find procedure. Next an area threshold is applied: the area of each merged rectangle divided by the image area gives its relative area, and a rectangle is kept as a text region only if its relative area falls inside the threshold interval; the rest are discarded. Finally the coordinates of the kept text regions are returned; the extracted regions appear in figure (e).

### 3.5 Text recognition

This section covers model definition, data upload, model training, training monitoring, and results.

#### 3.5.1 Model definition

Recognition uses ResNet-18 for learning and inference; its job is to classify the detected characters, as shown below.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/2d939bb02d0348368e778e61ad032990.png)

ResNet-18 network structure

#### 3.5.2 Uploading data and scripts

Step 1: open Object Storage Service (OBS) in Huawei Cloud.
Step 2: click "Create Bucket" and choose a billing plan as needed.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/94d479fd34e745c6aae054dddef6ecc6.png)

Step 3: grant OBS authorization; see https://bbs.huaweicloud.com/videos/101366

#### 3.5.3 Model training

Step 1: open ModelArts in Huawei Cloud and use the Training Jobs feature of the ModelArts console (or the training-job feature of the ModelArts PyCharm Toolkit).
Step 2: set the framework, code directory, boot file, data location, and single-/multi-card mode, then launch the training job.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/aadd5809d98a4c189e8e6672daec41e7.png)

#### 3.5.4 Training monitoring

Open the job under Training Jobs and view its logs. If MindSpore is configured locally, the training process can also be visualized with MindInsight.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/24eb4e83a3c347a4aee3250a7054aed3.png)

ModelArts training view 1

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/27a592f937b94b88abe873b2dfe2588a.png)

ModelArts training view 2

#### 3.5.5 Results

On the same dataset, accuracy and training time compared against TensorFlow are shown below.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/23c70729a4ec4771a7f9e0dd0ab8e987.png)

Accuracy and training-time comparison with TensorFlow (TensorFlow on a TITAN X GPU, MindSpore on a V100)

After training, the network can be exported as an AIR (GEIR) or ONNX model for deployment and inference on Atlas or other platforms.

```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export

# net is the trained ResNet instance; the dummy input fixes the
# exported graph's input shape (NCHW, 3x112x112)
input = np.random.uniform(0.0, 1.0, size=[1, 3, 112, 112]).astype(np.float32)
export(net, Tensor(input), file_name='/cache/ckpt/resnet.air', file_format='AIR')
```

### 3.6 Model conversion

To deploy the trained MindSpore model on Atlas, it must first be converted into an offline model supported by the Ascend 310 AI processor. The ATC model conversion tool performs the conversion with the command shown below; see the ATC tool documentation for the parameters.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/67e76d9b88dc4ead8ccb8397444fab18.png)

Key options for model conversion

### 3.7 Data structures for inference

The deployment module's data structures follow the data types used for object detection, with the following types added for character recognition:

```cpp
// Bounding box of one character
struct CRect {
  hiai::Point2D lt;  // left-top corner
  hiai::Point2D rb;  // right-bottom corner
};

// Detection and recognition results for one frame
struct ImageResults {
  int num;                            // number of characters in the frame
  std::vector<OutputT> output_datas;  // model output vector per character
  std::vector<CRect> rects;           // bounding box per character
};

// Detection and recognition results for a batch of frames
struct CEngineTransT {
  bool status;
  std::string msg;                    // error message
  hiai::BatchInfo b_info;
  std::vector<hiai::ImageData<u_int8_t>> imgss;  // one image per frame
  std::vector<ImageResults> results;             // one result set per frame
};
```

### 3.8 Deployment flow

Character detection and recognition are served by three engine modules: a camera module, an inference module, and a post-processing module; the deployment flow is shown below.

The camera module talks to the Camera driver, sets parameters such as frame rate, resolution, and image format, obtains YUV420SP video from the camera, and passes each frame to the inference engine. In this project the frame rate is 5 fps, the resolution 1280×720, and the image format the default YUV420SP.

The inference module processes each YUV420SP frame in two ways. It converts the frame to RGB, detects the set of character rectangles with OpenCV, and runs each character sub-image through the model to obtain the set of output vectors; it also converts the frame to JPEG so the camera view can be inspected. The JPEG frames and the per-frame recognition results are then passed to the post-processing engine.

The post-processing module receives the inference results and the JPEG frames, adds the rectangles to DetectionResult, the Presenter Server structure that records detected-object positions, and sends them through the Presenter Agent API to the Presenter Server process deployed on the UI Host. For each received output vector, the Presenter Server takes the index of the highest predicted probability, looks that index up in an index table, draws the character's rectangle and recognition result on the JPEG image, and sends the image to the Web UI. The index table maps each character to its index; it is a txt file stored in UTF-8 under Ubuntu, one character per line.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/31159c30b89c4876b85720a1bc2ac736.jpg)

## 4. Results

Finally, we tested the system in a real photo-recognition setting; the hardware layout is shown below.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/9a11c8e74313403da8c39c8f5f89de39.jpg)

Atlas and camera layout

We also measured the main time costs: detecting the characters in a full frame takes about 60 ms, and recognition averages about 3 ms per character. Under stable lighting, single-character accuracy exceeds 90%.

![](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2020/10/16/888e33a223304135a194cfd3b093c91b.png)

## 5. Future extensions

This project focuses on recognizing a small number of handwritten characters. It can be extended to large-scale handwritten text with complex backgrounds, for example detecting and recognizing handwritten essays.
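The gnt record layout described in section 3.2.2 (4-byte record size, 2-byte label code, 4 bytes of width and height, then pixel bytes) can be sketched as a small reader. This is an illustrative sketch, not the project's converter: the function name, the little-endian byte order, and the 2-bytes-each split of width and height are my assumptions.

```python
import io
import struct

def read_gnt_records(stream):
    """Yield (label, width, height, pixels) tuples from a gnt byte stream.

    Layout per the article: 4-byte record size, 2-byte label code,
    2-byte width, 2-byte height, then width*height grayscale bytes.
    Little-endian is an assumption; adjust for your files.
    """
    while True:
        header = stream.read(4)
        if len(header) < 4:
            return  # end of file
        (size,) = struct.unpack('<I', header)
        label, width, height = struct.unpack('<3H', stream.read(6))
        pixels = stream.read(width * height)
        yield label, width, height, pixels

# Round-trip two fake samples to check the framing
buf = io.BytesIO()
for label, w, h in [(0xB0A1, 3, 2), (0xB0A2, 2, 2)]:
    buf.write(struct.pack('<I3H', 10 + w * h, label, w, h) + bytes(w * h))
buf.seek(0)
records = list(read_gnt_records(buf))
```

Each yielded tuple can then be shuffled and preprocessed in memory before being written into a MindRecord file.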
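The Otsu criterion used in section 3.3 (maximize the between-class variance over all candidate thresholds) can be written out directly. This numpy-only version is for illustrating the idea; the project itself calls OpenCV's threshold with THRESH_OTSU.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability at each cut
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean mass
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # empty classes contribute nothing
    return int(np.argmax(sigma_b))

# Synthetic bimodal image: dark strokes (~20) on a light page (~230)
img = np.full((64, 64), 230, dtype=np.uint8)
img[20:40, 20:40] = 20
t = otsu_threshold(img)
```

For this bimodal image the maximizing threshold separates the stroke and background modes, which is exactly what lets the background be forced to pure white while the stroke gray levels are kept.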
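The region-merging rule in section 3.4 (merge two boxes when their distance divided by the image diagonal is under a threshold, using union-find) can be sketched as follows. Box format (x, y, w, h), center-to-center distance, and the 0.05 threshold are illustrative choices, not the project's exact parameters.

```python
import math

def merge_boxes(boxes, diag, dist_thresh=0.05):
    """Union-find merge of (x, y, w, h) boxes by relative center distance."""
    parent = list(range(len(boxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def center(b):
        x, y, w, h = b
        return (x + w / 2.0, y + h / 2.0)

    # Union any pair whose relative distance is under the threshold
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            (xi, yi), (xj, yj) = center(boxes[i]), center(boxes[j])
            if math.hypot(xi - xj, yi - yj) / diag < dist_thresh:
                parent[find(i)] = find(j)

    # Replace each group by its enclosing rectangle
    groups = {}
    for i, b in enumerate(boxes):
        groups.setdefault(find(i), []).append(b)
    merged = []
    for group in groups.values():
        x0 = min(b[0] for b in group)
        y0 = min(b[1] for b in group)
        x1 = max(b[0] + b[2] for b in group)
        y1 = max(b[1] + b[3] for b in group)
        merged.append((x0, y0, x1 - x0, y1 - y0))
    return merged

# Two fragments of one character plus a distant second character
diag = math.hypot(1280, 720)
boxes = [(100, 100, 30, 30), (135, 100, 30, 30), (900, 400, 60, 60)]
out = merge_boxes(boxes, diag)
```

The area filter described in the same section would then run over the merged rectangles.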
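The resize-then-center-pad step of section 3.3 (aspect-preserving resize, then padding onto a white 112×112 square) can be sketched without OpenCV. The nearest-neighbor indexing below stands in for a real interpolated resize and the function name is mine; production code would use OpenCV's resize.

```python
import numpy as np

def pad_to_square_and_resize(img, target=112, bg=255):
    """Aspect-preserving resize of a grayscale glyph, centered on a
    white target x target canvas."""
    h, w = img.shape
    scale = target / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # Nearest-neighbor resize via index arithmetic (illustrative only)
    rows = (np.arange(nh) * h / nh).astype(int)
    cols = (np.arange(nw) * w / nw).astype(int)
    resized = img[rows[:, None], cols]
    # Center the glyph on a white background
    canvas = np.full((target, target), bg, dtype=img.dtype)
    top, left = (target - nh) // 2, (target - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

# A 72x68 glyph like the example character becomes a 112x112 standard image
glyph = np.zeros((72, 68), dtype=np.uint8)
out = pad_to_square_and_resize(glyph)
```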
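The Presenter Server lookup in section 3.8 (take the index of the highest predicted probability, then look the index up in a one-character-per-line table) amounts to an argmax plus a list index. The function name and toy 4-class table below are illustrative; the real table is the 3,755-line UTF-8 file.

```python
def lookup_characters(output_vectors, index_table):
    """Map each model output vector to the character with the top score."""
    results = []
    for scores in output_vectors:
        idx = max(range(len(scores)), key=scores.__getitem__)  # argmax
        results.append(index_table[idx])
    return results

# Toy 4-class table standing in for the 3755-line index file
table = ['一', '二', '三', '四']
preds = lookup_characters([[0.1, 0.7, 0.1, 0.1], [0.0, 0.2, 0.1, 0.7]], table)
```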