# Data Loading Made Easy, Errors Have Nowhere to Hide | MindSpore Data Processing FAQ

2023-06-09

Q1: How can data sinking be implemented without using the high-level API?

A1: See the manual data-sinking example [test_tdt_data_transfer.py](https://gitee.com/mindspore/mindspore/blob/master/tests/st/data_transfer/test_tdt_data_transfer.py), which does not rely on the model.train interface. It currently supports GPU and Ascend hardware.

Q2: Memory usage is high while processing data with Dataset. How can it be optimized?

A2: The following steps can lower memory usage, though they may also lower data-processing throughput.

1. Before defining the `**Dataset` object, set the prefetch size of the data-processing pipeline: `ds.config.set_prefetch_size(2)`.

2. When defining the `**Dataset` object, set its `num_parallel_workers` parameter to 1.

3. If `.map(...)` is further applied to the `**Dataset` object, set the `num_parallel_workers` parameter of `.map(...)` to 1.

4. If `.batch(...)` is further applied to the `**Dataset` object, set the `num_parallel_workers` parameter of `.batch(...)` to 1.

5. If `.shuffle(...)` is further applied to the `**Dataset` object, reduce its `buffer_size` parameter.

Q3: CPU usage is high during Dataset processing (high `sy`, low `us`). How can it be optimized?

A3: The following steps can reduce CPU usage and improve performance. The main cause is resource contention between the threads of third-party libraries and the data-processing threads.

1. If the data-processing stage uses OpenCV's cv2 operations, set the global cv2 thread count with `cv2.setNumThreads(2)`.

2. If the data-processing stage uses numpy operations, set the OpenBLAS thread count with `export OPENBLAS_NUM_THREADS=1`.

3. If the data-processing stage uses numba operations, reduce thread contention by setting the parallelism with `numba.set_num_threads(1)`.

Q4: GeneratorDataset has a `shuffle` parameter, but in my runs `shuffle=True` and `shuffle=False` make no difference. Why?

A4: Enabling shuffle requires a Dataset that supports random access (for example, a custom Dataset with a `__getitem__` method). Data returned from a custom Dataset via `yield` does not support random access. See the custom dataset section of the tutorials for details.

Q5: How can Dataset merge two columns into one column?

A5: Add an operation like the following to combine the two fields into one.

```python
def combine(x, y):
    x = x.flatten()
    y = y.flatten()
    return np.append(x, y)

dataset = dataset.map(operations=combine, input_columns=["data", "data2"], output_columns=["data"])
```

Note: because the two columns have different shapes, flatten them first, then concatenate.

Q6: Does GeneratorDataset support ds.PKSampler sampling?

A6: The custom dataset GeneratorDataset does not support the PKSampler sampling logic. The main reason is that custom data operations are too flexible for the built-in PKSampler to handle generically, so the interface explicitly reports it as unsupported. With GeneratorDataset, however, you can easily define the sampler logic you need: implement the concrete sampling rule in the `__getitem__` function of your dataset class (e.g. ImageDataset) and return the data you need.

Q7: How does MindSpore load existing pretrained word embeddings?

A7: When defining EmbeddingLookup or Embedding, pass in the pretrained word vectors: wrap them in a Tensor and use it as the initial value of the EmbeddingLookup.

Q8: What is the difference between c_transforms and py_transforms, and which is recommended?

A8: c_transforms is recommended, because it executes purely in the C layer and therefore performs better.

Principle: c_transforms uses the C implementations of OpenCV/libjpeg-turbo underneath, while py_transforms uses the Python Pillow library for data processing.

Starting with MindSpore 1.8, the data augmentation APIs were merged, so users no longer need to be explicitly aware of c_transforms versus py_transforms. MindSpore chooses the backend based on the data type passed to the augmentation API, defaulting to c_transforms for its better performance. See the latest API documentation and import notes for details.

Q9: One of my samples contains multiple images with differing widths and heights, and I need to run map operations on data converted to the MindRecord format. However, data read back from the record is in np.ndarray format, while my processing operations expect image format. How can I preprocess the generated MindRecord data?

A9: Proceed as follows:

```python
# 1 Define the schema as follows. The fields data1, data2, data3, ... store your images;
#   only the binary of each image is stored here.
cv_schema_json = {"label": {"type": "int32"}, "data1": {"type": "bytes"},
                  "data2": {"type": "bytes"}, "data3": {"type": "bytes"}}

# 2 Organize the data as follows; data_list can then be written with FileWriter.write_raw_data(...).
data_list = []
data = {}
data['label'] = 1

f = open("1.jpg", "rb")
image_bytes = f.read()
f.close()
data['data1'] = image_bytes

f2 = open("2.jpg", "rb")
image_bytes2 = f2.read()
f2.close()
data['data2'] = image_bytes2

f3 = open("3.jpg", "rb")
image_bytes3 = f3.read()
f3.close()
data['data3'] = image_bytes3

data_list.append(data)

# 3 Load with MindDataset, decode with the provided Decode operation, then do further processing.
data_set = ds.MindDataset("mindrecord_file_name")
data_set = data_set.map(input_columns=["data1"], operations=vision.Decode(), num_parallel_workers=2)
data_set = data_set.map(input_columns=["data2"], operations=vision.Decode(), num_parallel_workers=2)
data_set = data_set.map(input_columns=["data3"], operations=vision.Decode(), num_parallel_workers=2)
resize_op = vision.Resize((32, 32), interpolation=Inter.LINEAR)
data_set = data_set.map(operations=resize_op, input_columns=["data1"], num_parallel_workers=2)
for item in data_set.create_dict_iterator(output_numpy=True):
    print(item)
```

Q10: When converting my custom image dataset to MindRecord, my data is a numpy.ndarray of shape [4, 100, 132, 3], meaning four three-channel frames with values in 0~255. But when I inspect the converted MindRecord data, its shape is [19800], while fully flattening my original data gives [158400]. Why?

A10: The ndarray dtype in your data is probably int8, since [158400] and [19800] differ by exactly a factor of 8. We suggest specifying float64 as the ndarray dtype.

Q11: I want to save generated images, but after the code finishes I cannot find them in the target directory. Similarly, a dataset generated in JupyterLab for training can be read at the given path during training, but I cannot find the images or dataset at that path myself. Why?

A11: The images or datasets generated in JupyterLab are likely inside Docker. Data downloaded by moxing is only visible inside the training process's Docker container and is released along with the container when training finishes. Try uploading the data you need back to OBS via moxing within the training task, then download it from OBS to your local machine.

Q12: How should the dataset_sink_mode parameter of model.train be understood in MindSpore?

A12: When dataset_sink_mode=True, data processing and network computation form a pipeline: as data processing finishes each batch, it puts the batch into a queue that caches processed data, and network computation fetches batches from that queue for training. Data processing and network computation thus run as a pipeline, and the total training time is determined by whichever of the two takes longer.

When dataset_sink_mode=False, data processing and network computation run serially: after processing a batch, data processing hands it to the network for computation; once computation finishes, data processing handles the next batch and hands it over, and so on until training completes. The total training time is then the data-processing time plus the network-computation time.

Q13: Can MindSpore train on batches of images with different sizes?

A13: See how yolov3 handles this scenario with different image scalings; the script is [yolo_dataset](https://gitee.com/mindspore/models/blob/master/official/cv/YOLOv3/src/yolo_dataset.py).

Q14: For segmentation training with MindSpore, must the data be converted to MindRecord?

A14: [build_seg_data.py](https://gitee.com/mindspore/models/blob/master/research/cv/FCN8s/src/data/build_seg_data.py) is a script that generates MindRecord from a dataset; you can use it directly or adapt it to your dataset. Alternatively, if you want to implement dataset reading yourself, use GeneratorDataset to load a custom dataset.

GeneratorDataset example

GeneratorDataset API documentation

Q15: For multi-card training on the Ascend hardware platform, how does a custom dataset deliver different data to different cards?

A15: When using GeneratorDataset, pass `num_shards=num_shards, shard_id=device_id` to control which shard each card reads; `__getitem__` and `__len__` should be written against the full dataset.

For example:

```python
# card 0: ds.GeneratorDataset(..., num_shards=8, shard_id=0, ...)
# card 1: ds.GeneratorDataset(..., num_shards=8, shard_id=1, ...)
# card 2: ds.GeneratorDataset(..., num_shards=8, shard_id=2, ...)
# ...
# card 7: ds.GeneratorDataset(..., num_shards=8, shard_id=7, ...)
```

Q16: How is a multi-label MindRecord dataset for images constructed?

A16: The data schema can be defined as follows: `cv_schema_json = {"label": {"type": "int32", "shape": [-1]}, "data": {"type": "bytes"}}`

Note: label is an array of numpy type. It can hold label values such as 1, 1, 0, 1, 0, 1, all corresponding to the same data, i.e. the binary of the same image. See the tutorial on converting datasets to MindRecord.

Q17: I made my own 28*28 digit images (white digits on a black background) and used a model trained with MindSpore for prediction, but it reports "wrong shape of image". Why?

A17: MindSpore training used the grayscale MNIST dataset, so the model has requirements on its input data: it must be a 28*28 grayscale image, i.e. single channel.

Q18: MindSpore has a framework designed specifically for data processing; is there an introduction to its design and usage?

A18: The MindSpore Dataset module lets users define data preprocessing pipelines simply and process dataset samples efficiently (multiprocess/multithread). It also provides a rich set of APIs for loading and processing datasets; see the data processing pipeline introduction for details. For performance tuning of the pipeline, see [Data Processing Performance Optimization](https://www.mindspore.cn/tutorials/experts/zh-CN/master/dataset/optimize.html).

Q19: Training reports the data delivery failure "TDT Push data into device Failed". How is the cause located?

A19: This error means that sending data to the device over the train data transfer (TDT) channel failed. It can have several causes, so the log gives corresponding check suggestions. Specifically:

1. Usually, find the first error in the log (the first ERROR-level message) or the error traceback, and try to extract information that helps locate the cause.

2. If the error occurs during graph compilation, before training starts (e.g. no loss has been printed yet), first check the ERROR logs for failures of operators involved in the network, or for environment problems (e.g. a bad hccl.json causing multi-card communication initialization to fail).

3. If the error occurs mid-training, it is usually caused by a mismatch between the amount of data delivered (number of batches) and the amount the network training needs (number of steps). You can print the number of batches in an epoch with the get_dataset_size interface. Possible causes include:

1) If, judging for example by how many times loss was printed, the step count is exactly an integer multiple of the batches per epoch, the epoch handling in the data-processing part may be at fault, as in this scenario:

```python
...
# If an iterator is to be returned here, num_epochs should be 1;
# but returning the dataset directly is recommended.
dataset = dataset.create_tuple_iterator(num_epochs=-1)
return dataset
```

2) Data processing may be too slow to keep up with network training. For this scenario, use the profiler tool and MindSpore Insight to check for obvious iteration gaps, or iterate the dataset manually and compute the average per-batch time; if it exceeds the combined forward-and-backward time of the network, the data-processing part most likely needs performance optimization.

3) Abnormal data during training can raise an exception that makes data delivery fail. Usually other ERROR logs indicate which stage of data processing failed and give check suggestions. If they are not clear, iterate over each sample of the dataset to find the abnormal data (e.g. disable shuffle and bisect).

4. If this log is printed after training has finished (most likely due to forced resource release), the error can be ignored.

5. If the specific cause still cannot be located, file an issue or ask on the forum so module developers can help.

Q20: Can py_transforms and c_transforms augmentation operations be mixed, and if so, how?

A20: For performance reasons, mixing py_transforms and c_transforms augmentation operations is generally not recommended. However, if the priority is getting the flow working rather than peak performance, and c_transforms alone is insufficient (a needed augmentation has no c_transforms counterpart), operations from the py_transforms module can substitute, resulting in mixed use. Note that c_transforms operations usually output numpy arrays, while py_transforms operations output PIL Images (see the corresponding module documentation), so the usual mixing patterns are:

1. c_transforms ops + ToPIL + py_transforms ops + ToNumpy
2. py_transforms ops + ToNumpy + c_transforms ops

```python
# Example using c_transforms and py_transforms operations together.
# Here c_vision refers to c_transforms and py_vision refers to py_transforms.
import mindspore.dataset.vision.c_transforms as c_vision
import mindspore.dataset.vision.py_transforms as py_vision

decode_op = c_vision.Decode()

# If the input type is not PIL, add a ToPIL operation.
transforms = [
    py_vision.ToPIL(),
    py_vision.CenterCrop(375),
    py_vision.ToTensor()
]
transform = mindspore.dataset.transforms.Compose(transforms)
data1 = data1.map(operations=decode_op, input_columns=["image"])
data1 = data1.map(operations=transform, input_columns=["image"])
```

Since MindSpore 1.8, with the merged data augmentation APIs, this can be written more concisely:

```python
import mindspore.dataset.vision as vision

transforms = [
    vision.Decode(),         # c_transforms augmentation
    vision.ToPIL(),          # switch the next op's input to PIL
    vision.CenterCrop(375),  # py_transforms augmentation
]

data1 = data1.map(operations=transforms, input_columns=["image"])
```

Q21: How should the error "The data pipeline is not a tree (i.e., one node has 2 consumers)" be investigated?

A21: This error is usually caused by a scripting mistake. Normally the operations in a data-processing pipeline are chained one after another, as in:

```python
# pipeline structure:
# dataset1 -> map -> shuffle -> batch
dataset1 = XXDataset()
dataset1 = dataset1.map(...)
dataset1 = dataset1.shuffle(...)
dataset1 = dataset1.batch(...)
```

In the following abnormal scenario, however, dataset1 has two fork nodes, dataset2 and dataset3, which triggers the error above. Because the dataset1 node branches, the direction of its data flow is undefined, so this situation is not allowed.

```python
# pipeline structure:
# dataset1 -> dataset2 -> map
#          |
#          --> dataset3 -> map
dataset1 = XXDataset()
dataset2 = dataset1.map(***)
dataset3 = dataset1.map(***)
```

The correct form is shown below: dataset3 is obtained by applying augmentation to dataset2, not by augmenting dataset1 again.

```python
dataset2 = dataset1.map(***)
dataset3 = dataset2.map(***)
```

Q22: What is MindSpore's counterpart of DataLoader?

A22: If DataLoader is viewed as an API that accepts a custom Dataset, the closest MindSpore data-processing API is GeneratorDataset, which accepts a user-defined Dataset. See the GeneratorDataset documentation for usage, and the [API operator mapping table](https://www.mindspore.cn/docs/zh-CN/master/note/api_mapping/pytorch_api_mapping.html) for a comparison of the differences.

Q23: How should a custom Dataset be debugged when it raises errors?

A23: A custom Dataset is usually passed to GeneratorDataset. When an error points at the custom Dataset during use, it can be debugged in the usual ways (e.g. adding print statements; printing the shapes and dtypes of returned values). A custom Dataset should keep its intermediate results as numpy arrays, and mixing it with MindSpore network-computation operators is not recommended. In addition, a custom Dataset such as MyDataset below can, after initialization, be iterated directly as follows (mainly to simplify debugging and analyze problems in the raw Dataset; it need not be passed to GeneratorDataset), and debugging then follows ordinary Python syntax rules:

```python
dataset = MyDataset()
for item in dataset:
    print("item:", item)
```

Q24: Can data-processing operations and network-computation operators be mixed?

A24: Mixing data-processing operations with network-computation operators usually degrades performance; it can be tried when a needed data-processing operation is missing and a custom Python operation is unsuitable. Note that the two expect different inputs: data-processing operations usually take numpy arrays or PIL Images, while network-computation operators require mindspore.Tensor. Mixing them requires that the output format of one step match the input format the next step needs. Data-processing operations are the interfaces under the mindspore.dataset module in the official API documentation, e.g. mindspore.dataset.vision.CenterCrop; network-computation operators include those under modules such as mindspore.nn and mindspore.ops.

Q25: Why does MindRecord generate a .db file? What error occurs when loading a dataset whose .db file is missing?

A25: The .db file is the index file for a MindRecord file. A missing .db file usually causes an error when the total number of rows in the dataset is retrieved, such as: MindRecordOp Count total rows failed.

Q26: How does a custom Dataset read an image and perform the Decode operation?

A26: A custom Dataset passed to GeneratorDataset can, inside its interface (e.g. the `__getitem__` function), read an image and then directly return bytes data, a numpy array, or an already-decoded numpy array, as shown below:

1) Return bytes data directly after reading the image

```python
class ImageDataset:
    def __init__(self, data_path):
        self.data = data_path

    def __getitem__(self, index):
        # use file open and read method
        f = open(self.data[index], 'rb')
        img_bytes = f.read()
        f.close()

        # return bytes directly
        return (img_bytes, )

    def __len__(self):
        return len(self.data)

# data_path is a list of image file names
dataset1 = ds.GeneratorDataset(ImageDataset(data_path), ["data"])
decode_op = py_vision.Decode()
to_tensor = py_vision.ToTensor(output_type=np.int32)
dataset1 = dataset1.map(operations=[decode_op, to_tensor], input_columns=["data"])
```

2) Return a numpy array after reading the image

```python
# In the example above, __getitem__ can be modified as follows;
# the Decode operation stays the same as in that example.
def __getitem__(self, index):
    # use np.fromfile to read the image
    img_np = np.fromfile(self.data[index])

    # return the numpy array directly
    return (img_np, )
```

3) Perform the Decode operation directly after reading the image

```python
# Based on the example above, __getitem__ can be modified as follows to return
# decoded data directly; the map-based Decode operation is then no longer needed.
def __getitem__(self, index):
    # use Image.open to open the file, and convert to RGB
    img_rgb = Image.open(self.data[index]).convert("RGB")
    return (img_rgb, )
```

Q27: Dataset processing reports "RuntimeError: can't start new thread". How is it solved?

A27: The main cause is that the num_parallel_workers parameter of `**Dataset`, `.map(...)`, or `.batch(...)` is set too high and the user process count has hit its maximum. Increase the maximum user process count with `ulimit -u <max processes>`, or reduce num_parallel_workers.

Q28: GeneratorDataset reports "RuntimeError: Failed to copy data into tensor." when loading data. How is it solved?

A28: When GeneratorDataset loads a numpy array returned by a Pyfunc, the MindSpore framework converts the numpy array into a MindSpore Tensor. If the memory the numpy array points to has been released, a memory-copy error can occur. For example:

1) An in-place numpy array -> MindSpore Tensor -> numpy array conversion inside `__getitem__`. The Tensor `tensor` and the numpy array `ndarray_1` share the same memory; when `__getitem__` returns, `tensor` goes out of scope and the memory it points to is released.

```python
class RandomAccessDataset:
    def __init__(self):
        pass

    def __getitem__(self, item):
        ndarray = np.zeros((544, 1056, 3))
        tensor = Tensor.from_numpy(ndarray)
        ndarray_1 = tensor.asnumpy()
        return ndarray_1

    def __len__(self):
        return 8

data1 = ds.GeneratorDataset(RandomAccessDataset(), ["data"])
```

2) Setting aside the round-trip conversion in the example above: when `__getitem__` returns, the Tensor object `tensor` is released, and the numpy array `ndarray_1` that shares its memory enters an undefined state. To avoid this problem, use the deepcopy function to allocate independent memory for the returned numpy array `ndarray_2`:

```python
class RandomAccessDataset:
    def __init__(self):
        pass

    def __getitem__(self, item):
        ndarray = np.zeros((544, 1056, 3))
        tensor = Tensor.from_numpy(ndarray)
        ndarray_1 = tensor.asnumpy()
        ndarray_2 = copy.deepcopy(ndarray_1)
        return ndarray_2

    def __len__(self):
        return 8

data1 = ds.GeneratorDataset(RandomAccessDataset(), ["data"])
```

Q29: How can the data-preprocessing exit status be used to determine the cause of a GetNext timeout?

A29: When training in data-sink mode (where data preprocessing -> send queue -> network computation form a pipeline), the data-preprocessing module outputs status information when a GetNext timeout error occurs, to help analyze the cause. The log may show the following cases; the specific causes and remedies are:

1) Output like the following means data preprocessing produced no data usable for training.

```text
preprocess_batch: 0;
batch_queue: ;
push_start_time -> push_end_time
```

Remedy: first iterate over the dataset object to confirm that dataset preprocessing works normally.

2) Output like the following means data preprocessing produced one batch that has not yet been sent to the device side.

```text
preprocess_batch: 0;
batch_queue: 1;
push_start_time -> push_end_time
2022-05-09-11:36:00.521.386 ->
```

Remedy: check the device plog for error messages.

3) Output like the following means data preprocessing produced three batches, all already sent to the device side, and a fourth batch is being preprocessed.

```text
preprocess_batch: 3;
batch_queue: 1, 0, 1;
push_start_time -> push_end_time
2022-05-09-11:36:00.521.386 -> 2022-05-09-11:36:00.782.215
2022-05-09-11:36:01.212.621 -> 2022-05-09-11:36:01.490.139
2022-05-09-11:36:01.893.412 -> 2022-05-09-11:36:02.006.771
```

Remedy: compare the last push_end_time with the GetNext error time. If the gap exceeds the default GetNext timeout (1900 s by default, adjustable via mindspore.set_context(op_timeout=xx)), data-preprocessing performance is poor; see [Data Processing Performance Optimization](https://www.mindspore.cn/tutorials/experts/zh-CN/master/dataset/optimize.html) to optimize the data-preprocessing part.

4) Output like the following means data preprocessing produced 182 batches and is sending the 183rd batch to the device.

```text
preprocess_batch: 182;
batch_queue: 1, 0, 1, 1, 2, 1, 0, 1, 1, 0;
push_start_time -> push_end_time
                -> 2022-05-09-14:31:00.603.866
2022-05-09-14:31:00.621.146 -> 2022-05-09-14:31:01.018.964
2022-05-09-14:31:01.043.705 -> 2022-05-09-14:31:01.396.650
2022-05-09-14:31:01.421.501 -> 2022-05-09-14:31:01.807.671
2022-05-09-14:31:01.828.931 -> 2022-05-09-14:31:02.179.945
2022-05-09-14:31:02.201.960 -> 2022-05-09-14:31:02.555.941
2022-05-09-14:31:02.584.413 -> 2022-05-09-14:31:02.943.839
2022-05-09-14:31:02.969.583 -> 2022-05-09-14:31:03.309.299
2022-05-09-14:31:03.337.607 -> 2022-05-09-14:31:03.684.034
2022-05-09-14:31:03.717.230 -> 2022-05-09-14:31:04.038.521
2022-05-09-14:31:04.064.571 ->
```

Remedy: check the device plog for error messages.