mindspore.dataset.GeneratorDataset

class mindspore.dataset.GeneratorDataset(source, column_names=None, column_types=None, schema=None, num_samples=None, num_parallel_workers=1, shuffle=None, sampler=None, num_shards=None, shard_id=None, python_multiprocessing=True)[source]

A source dataset that generates data from Python by invoking a Python data source each epoch.

This dataset can take in a sampler. ‘sampler’ and ‘shuffle’ are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using ‘sampler’ and ‘shuffle’

Parameter ‘sampler’   Parameter ‘shuffle’   Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed
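
For instance, a minimal sketch of the allowed configurations (my_source stands in for a user-defined random accessible object and is an assumption, not part of the API):

>>> import mindspore.dataset as ds
>>>
>>> # Sequential order: no sampler, shuffle=False
>>> sequential_data = ds.GeneratorDataset(my_source, ["data"], shuffle=False)
>>> # Order defined by a sampler: pass a sampler and leave shuffle as None
>>> sampled_data = ds.GeneratorDataset(my_source, ["data"], sampler=ds.SequentialSampler())
>>> # Passing a sampler together with shuffle=True or shuffle=False is not allowed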

Parameters
  • source (Union[Callable, Iterable, Random Accessible]) – A generator callable object, an iterable Python object or a random accessible Python object. A callable source is required to return a tuple of NumPy arrays as a row of the dataset on next(source()). An iterable source is required to return a tuple of NumPy arrays as a row of the dataset on next(iter(source)). A random accessible source is required to return a tuple of NumPy arrays as a row of the dataset on source[idx].

  • column_names (Union[str, list[str]], optional) – List of column names of the dataset (default=None). Users are required to provide either column_names or schema.

  • column_types (list[mindspore.dtype], optional) – List of column data types of the dataset (default=None). If provided, sanity check will be performed on generator output.

  • schema (Union[Schema, str], optional) – Path to the JSON schema file or schema object (default=None). Users are required to provide either column_names or schema. If both are provided, schema will be used.

  • num_samples (int, optional) – The number of samples to be included in the dataset (default=None, all samples).

  • num_parallel_workers (int, optional) – Number of subprocesses used to fetch the dataset in parallel (default=1).

  • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset. Random accessible input is required. (default=None, expected order behavior shown in the table).

  • sampler (Union[Sampler, Iterable], optional) – Object used to choose samples from the dataset. Random accessible input is required (default=None, expected order behavior shown in the table).

  • num_shards (int, optional) – Number of shards that the dataset will be divided into (default=None). When this argument is specified, ‘num_samples’ will not be used. Random accessible input is required.

  • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument must be specified only when num_shards is also specified. Random accessible input is required.

  • python_multiprocessing (bool, optional) – Parallelize Python operations with multiple worker processes. This option could be beneficial if the Python operation is computationally heavy (default=True).

Examples

>>> import mindspore.dataset as ds
>>> import numpy as np
>>>
>>> # 1) Multidimensional generator function as callable input
>>> def GeneratorMD():
>>>     for i in range(64):
>>>         yield (np.array([[i, i + 1], [i + 2, i + 3]]),)
>>> # Create multi_dimension_generator_dataset with GeneratorMD and column name "multi_dimensional_data"
>>> multi_dimension_generator_dataset = ds.GeneratorDataset(GeneratorMD, ["multi_dimensional_data"])
>>>
>>> # 2) Multi-column generator function as callable input
>>> def GeneratorMC(maxid = 64):
>>>     for i in range(maxid):
>>>         yield (np.array([i]), np.array([[i, i + 1], [i + 2, i + 3]]))
>>> # Create multi_column_generator_dataset with GeneratorMC and column names "col1" and "col2"
>>> multi_column_generator_dataset = ds.GeneratorDataset(GeneratorMC, ["col1", "col2"])
>>>
>>> # 3) Iterable dataset as iterable input
>>> class MyIterable():
>>>     def __iter__(self):
>>>         return # User implementation
>>> # Create iterable_generator_dataset with MyIterable object
>>> iterable_generator_dataset = ds.GeneratorDataset(MyIterable(), ["col1"])
>>>
>>> # 4) Random accessible dataset as random accessible input
>>> class MyRA():
>>>     def __getitem__(self, index):
>>>         return # User implementation
>>> # Create ra_generator_dataset with MyRA object
>>> ra_generator_dataset = ds.GeneratorDataset(MyRA(), ["col1"])
>>> # List/Dict/Tuple is also random accessible
>>> list_generator = ds.GeneratorDataset([(np.array(0),), (np.array(1),), (np.array(2),)], ["col1"])
>>>
>>> # 5) Built-in Sampler (my_ds is a user-provided random accessible source)
>>> my_generator = ds.GeneratorDataset(my_ds, ["img", "label"], sampler=ds.RandomSampler())
apply(apply_func)

Apply a function in this dataset.

Parameters

apply_func (function) – A function that must take one ‘Dataset’ as an argument and return a preprocessed ‘Dataset’.

Returns

Dataset, dataset applied by the function.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object
>>>
>>> # Declare an apply_func function which returns a Dataset object
>>> def apply_func(ds):
>>>     ds = ds.batch(2)
>>>     return ds
>>>
>>> # Use apply to call apply_func
>>> data = data.apply(apply_func)
Raises
  • TypeError – If apply_func is not a function.

  • TypeError – If apply_func doesn’t return a Dataset.

batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None, output_columns=None, column_order=None, pad_info=None, python_multiprocessing=False)

Combine batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order in which repeat and batch are applied affects the number of batches and the behavior of per_batch_map. It is recommended that the repeat operation be used after the batch operation.

Parameters
  • batch_size (int or function) – The number of rows each batch is created with. An int or callable which takes exactly 1 parameter, BatchInfo.

  • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are less than batch_size rows available to make the last batch, then those rows will be dropped and not propagated to the child node.

  • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel (default=None).

  • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object. per_batch_map should return (list[Tensor], list[Tensor], …). The length of each output list should be the same as that of the input. output_columns is required if the number of output lists is different from the number of input lists.

  • input_columns (Union[str, list[str]], optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

  • output_columns (Union[str, list[str]], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation. (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

  • column_order (Union[str, list[str]], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follow the order they appear in this list. The parameter is mandatory if the len(input_columns) != len(output_columns). (default=None, all columns will be propagated to the child node, the order of the columns will remain the same).

  • pad_info (dict, optional) – Whether to perform padding on selected columns. pad_info={“col1”:([224,224],0)} would pad column with name “col1” to a tensor of size [224,224] and fill the missing with 0.

  • python_multiprocessing (bool, optional) – Parallelize Python function per_batch_map with multiple worker processes. This option could be beneficial if the function is computational heavy (default=False).

Returns

BatchDataset, dataset batched.

Examples

>>> import mindspore.dataset as ds
>>> import numpy as np
>>> from PIL import Image
>>>
>>> # data is an instance of Dataset object.
>>>
>>> # Create a dataset where every 100 rows is combined into a batch
>>> # and drops the last incomplete batch if there is one.
>>> data = data.batch(100, True)
>>>
>>> # resize image according to its batch number, if it's 5-th batch, resize to (5^2, 5^2) = (25, 25)
>>> def np_resize(col, batchInfo):
>>>     output = col.copy()
>>>     s = (batchInfo.get_batch_num() + 1) ** 2
>>>     index = 0
>>>     for c in col:
>>>         img = Image.fromarray(c.astype('uint8')).convert('RGB')
>>>         img = img.resize((s, s), Image.ANTIALIAS)
>>>         output[index] = np.array(img)
>>>         index += 1
>>>     return (output,)
>>> data = data.batch(batch_size=8, input_columns=["image"], per_batch_map=np_resize)
bucket_batch_by_length(column_names, bucket_boundaries, bucket_batch_sizes, element_length_function=None, pad_info=None, pad_to_bucket_boundary=False, drop_remainder=False)

Bucket elements according to their lengths. Each bucket will be padded and batched when they are full.

A length function is called on each row in the dataset. The row is then bucketed based on its length and bucket_boundaries. When a bucket reaches its corresponding size specified in bucket_batch_sizes, the entire bucket will be padded according to pad_info, and then batched. Each batch will be full, except for maybe the last batch for each bucket.

Parameters
  • column_names (list[str]) – Columns passed to element_length_function.

  • bucket_boundaries (list[int]) – A list consisting of the upper boundaries of the buckets. Must be strictly increasing. If there are n boundaries, n+1 buckets are created: one bucket for [0, bucket_boundaries[0]), one bucket for [bucket_boundaries[i], bucket_boundaries[i+1]) for each 0 <= i < n-1, and one bucket for [bucket_boundaries[n-1], inf).

  • bucket_batch_sizes (list[int]) – A list consisting of the batch sizes for each bucket. Must contain len(bucket_boundaries)+1 elements.

  • element_length_function (Callable, optional) – A function that takes in len(column_names) arguments and returns an int. If no value is provided, then len(column_names) must be 1, and the size of the first dimension of that column will be taken as the length (default=None).

  • pad_info (dict, optional) – Represents how to batch each column. The key corresponds to the column name, and the value must be a tuple of 2 elements. The first element corresponds to the shape to pad to, and the second element corresponds to the value to pad with. If a column is not specified, then that column will be padded to the longest in the current batch, and 0 will be used as the padding value. Any None dimensions will be padded to the longest in the current batch, unless if pad_to_bucket_boundary is True. If no padding is wanted, set pad_info to None (default=None).

  • pad_to_bucket_boundary (bool, optional) – If True, will pad each None dimension in pad_info to the bucket_boundary minus 1. If there are any elements that fall into the last bucket, an error will occur (default=False).

  • drop_remainder (bool, optional) – If True, will drop the last batch for each bucket if it is not a full batch (default=False).

Returns

BucketBatchByLengthDataset, dataset bucketed and batched by length.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>>
>>> # Create a dataset where rows are bucketed by length; each bucket is
>>> # padded and batched when it reaches its corresponding batch size.
>>> column_names = ["col1", "col2"]
>>> bucket_boundaries = [5, 10]
>>> bucket_batch_sizes = [5, 1, 1]
>>> element_length_function = (lambda col1, col2: max(len(col1), len(col2)))
>>>
>>> # Will pad col1 to shape [2, bucket_boundaries[i] - 1] where i is the
>>> # index of the bucket that is currently being batched.
>>> # Will pad col2 to a shape where each dimension is the longest in all
>>> # the elements currently being batched.
>>> pad_info = {"col1": ([2, None], -1)}
>>> pad_to_bucket_boundary = True
>>>
>>> data = data.bucket_batch_by_length(column_names, bucket_boundaries,
>>>                                    bucket_batch_sizes,
>>>                                    element_length_function, pad_info,
>>>                                    pad_to_bucket_boundary)
build_sentencepiece_vocab(columns, vocab_size, character_coverage, model_type, params)

Function to create a SentencePieceVocab from the source dataset.

Build a SentencePieceVocab from a dataset.

Parameters
  • columns (list[str]) – Column names to get words from.

  • vocab_size (int) – Vocabulary size.

  • character_coverage (float) – Percentage of characters covered by the model; must be between 0.98 and 1.0. Good defaults are 0.9995 for languages with rich character sets like Japanese or Chinese, and 1.0 for other languages with small character sets.

  • model_type (SentencePieceModel) – Model type. Choose from unigram (default), bpe, char, or word. The input sentence must be pretokenized when using word type.

  • params (dict) – Additional optional parameters for the SentencePiece library.

Returns

SentencePieceVocab, vocab built from the dataset.

Example

>>> import mindspore.dataset as ds
>>> from mindspore.dataset.text import SentencePieceModel
>>>
>>> # data is an instance of Dataset object
>>> data = data.build_sentencepiece_vocab(columns=["column3", "column1", "column2"], vocab_size=5000,
>>>                                       character_coverage=0.9995, model_type=SentencePieceModel.Unigram,
>>>                                       params={})
build_vocab(columns, freq_range, top_k, special_tokens, special_first)

Function to create a Vocab from the source dataset.

Build a vocab from a dataset. This collects all the unique words in the dataset and returns a vocab containing the top_k most frequent words (if top_k is specified).

Parameters
  • columns (Union[str, list[str]]) – Column names to get words from.

  • freq_range (tuple[int]) – A tuple of integers (min_frequency, max_frequency). Words within the frequency range will be kept. 0 <= min_frequency <= max_frequency <= total_words. min_frequency/max_frequency can be set to default, which corresponds to 0/total_words respectively.

  • top_k (int) – Number of words to be built into the vocab. The top_k most frequent words are taken. top_k is applied after freq_range. If there are fewer than top_k words remaining, all of them will be taken.

  • special_tokens (list[str]) – A list of strings, each of which is a special token.

  • special_first (bool) – Whether special_tokens will be prepended or appended to the vocab. If special_tokens is specified and special_first is set to default, special_tokens will be prepended.

Returns

Vocab, vocab built from dataset.

Example

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object
>>> data = data.build_vocab(columns=["column3", "column1", "column2"], freq_range=(1, 10), top_k=5,
>>>                         special_tokens=["<pad>", "<unk>"], special_first=True)
concat(datasets)

Concatenate the datasets in the input list of datasets. The “+” operator is also supported to concatenate.

Note

The column names, and the rank and type of the column data, must be the same in the input datasets.

Parameters

datasets (Union[list, class Dataset]) – A list of datasets or a single class Dataset to be concatenated together with this dataset.

Returns

ConcatDataset, dataset concatenated.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # ds1 and ds2 are instances of Dataset object
>>>
>>> # Create a dataset by concatenating ds1 and ds2 with "+" operator
>>> data1 = ds1 + ds2
>>> # Create a dataset by concatenating ds1 and ds2 with concat operation
>>> data1 = ds1.concat(ds2)
create_dict_iterator(num_epochs=-1, output_numpy=False)

Create an iterator over the dataset. The data retrieved will be a dictionary.

The order of the columns in the dictionary may not be the same as the original order.

Parameters
  • num_epochs (int, optional) – Maximum number of epochs that iterator can be iterated (default=-1, iterator can be iterated infinite number of epochs).

  • output_numpy (bool, optional) – Whether or not to output NumPy datatype, if output_numpy=False, iterator will output MSTensor (default=False).

Returns

DictIterator, dictionary iterator over the dataset.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object
>>>
>>> # create an iterator
>>> # The columns in the data obtained by the iterator might be changed.
>>> iterator = data.create_dict_iterator()
>>> for item in iterator:
>>>     # print the data in column1
>>>     print(item["column1"])
create_ir_tree()

Internal method to create an IR tree.

Returns

DatasetNode, the root node of the IR tree. Dataset, the root dataset of the IR tree.

create_tuple_iterator(columns=None, num_epochs=-1, output_numpy=False, do_copy=True)

Create an iterator over the dataset. The data retrieved will be a list of ndarrays of data.

To specify which columns to output and their order, use the columns parameter. If columns is not provided, the order of the columns will not be changed.

Parameters
  • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

  • num_epochs (int, optional) – Maximum number of epochs that iterator can be iterated. (default=-1, iterator can be iterated infinite number of epochs)

  • output_numpy (bool, optional) – Whether or not to output NumPy datatype. If output_numpy=False, iterator will output MSTensor (default=False).

  • do_copy (bool, optional) – When the output data type is mindspore.Tensor, this parameter selects the conversion method; setting it to False gives better performance (default=True).

Returns

TupleIterator, tuple iterator over the dataset.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object
>>>
>>> # Create an iterator
>>> # The columns in the data obtained by the iterator will not be changed.
>>> iterator = data.create_tuple_iterator()
>>> for item in iterator:
>>>     # convert the returned tuple to a list and print
>>>     print(list(item))
device_que(prefetch_size=None, send_epoch_end=True, create_data_info_queue=False)

Return a transferred Dataset that transfers data through a device.

Parameters
  • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

  • send_epoch_end (bool, optional) – Whether to send end of sequence to device or not (default=True).

  • create_data_info_queue (bool, optional) – Whether to create a queue which stores the types and shapes of data or not (default=False).

Note

If the device is Ascend, features of the data will be transferred one by one. Each transmission is limited to 256 MB of data.

Returns

TransferDataset, dataset for transferring.
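
A minimal usage sketch (assuming sink-mode training consumes the queue; calling send() on the returned TransferDataset to start the transfer is an assumption based on typical usage):

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> transfer = data.device_que()
>>> transfer.send()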

filter(predicate, input_columns=None, num_parallel_workers=1)

Filter dataset by predicate.

Note

If input_columns is not provided or is empty, all columns will be used.

Parameters
  • predicate (callable) – Python callable which returns a boolean value. Rows for which the predicate returns False are filtered out.

  • input_columns (Union[str, list[str]], optional) – List of names of the input columns. If not provided (default=None), the predicate will be applied to all columns in the dataset.

  • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel (default=1).

Returns

FilterDataset, dataset filtered.

Examples

>>> import mindspore.dataset as ds
>>> # dataset is an instance of Dataset object whose "data" column contains values 0 ~ 63
>>> # filter out the rows whose value is greater than or equal to 11
>>> dataset_f = dataset.filter(predicate=lambda data: data < 11, input_columns=["data"])
flat_map(func)

Map func to each row in dataset and flatten the result.

The specified func is a function that must take one ‘Ndarray’ as input and return a ‘Dataset’.

Parameters

func (function) – A function that must take one ‘Ndarray’ as an argument and return a ‘Dataset’.

Returns

Dataset, dataset applied by the function.

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>>
>>> # Declare a function which returns a Dataset object
>>> def flat_map_func(x):
>>>     data_dir = text.to_str(x[0])
>>>     d = ds.ImageFolderDataset(data_dir)
>>>     return d
>>> # data is an instance of a Dataset object.
>>> data = ds.TextFileDataset(DATA_FILE)
>>> data = data.flat_map(flat_map_func)
Raises
  • TypeError – If func is not a function.

  • TypeError – If func doesn’t return a Dataset.

get_args()

Return attributes (member variables) related to the current class.

Must include all arguments passed to the __init__() of the current class, excluding ‘input_dataset’.

Returns

dict, attributes related to the current class.

get_batch_size()

Get the size of a batch.

Returns

int, the number of data in a batch.

get_class_indexing()

Get the class index.

Returns

dict, a str-to-int mapping from label name to index. For Coco datasets only, a str-to-list[int] mapping from label name to index, where the second number in the list indicates the super-category.
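
A minimal usage sketch (assuming a dataset that carries class information, such as an ImageFolderDataset; the directory path is a placeholder):

>>> import mindspore.dataset as ds
>>>
>>> data = ds.ImageFolderDataset("/path/to/imagefolder_directory")
>>> # Map each label name (folder name) to its numeric index
>>> class_indexing = data.get_class_indexing()
>>> print(class_indexing)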

get_col_names()

Get names of the columns in the dataset

Returns

list, list of column names in the dataset.

get_dataset_size()

Get the number of batches in an epoch.

Returns

int, number of batches.

get_repeat_count()

Get the repeat count if the dataset has been repeated (RepeatDataset); otherwise, return 1.

Returns

int, the count of repeat.
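
A minimal sketch combining the size-query methods above (assuming data is any Dataset instance):

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> data = data.batch(32).repeat(2)
>>> print(data.get_batch_size())    # 32
>>> print(data.get_repeat_count())  # 2
>>> print(data.get_dataset_size())  # number of batches produced by the pipeline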

property input_indexs

Get the input index information.

Returns

tuple, tuple of the input index information.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object
>>> data = ds.NumpySlicesDataset([1, 2, 3], column_names=["col_1"])
>>> print(data.input_indexs)
map(operations, input_columns=None, output_columns=None, column_order=None, num_parallel_workers=None, python_multiprocessing=False, cache=None, callbacks=None)

Apply each operation in operations to this dataset.

The order of operations is determined by the position of each operation in the operations parameter. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned names specified by output_columns.

Only the columns specified in column_order will be propagated to the child node. These columns will be in the same order as specified in column_order.

Parameters
  • operations (Union[list[TensorOp], list[functions]]) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

  • input_columns (Union[str, list[str]], optional) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator. (default=None, the first operation will be passed however many columns are required, starting from the first column).

  • output_columns (Union[str, list[str]], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation. (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

  • column_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follow the order they appear in this list. The parameter is mandatory if the len(input_columns) != len(output_columns). (default=None, all columns will be propagated to the child node, the order of the columns will remain the same).

  • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the configuration will be used).

  • python_multiprocessing (bool, optional) – Parallelize Python operations with multiple worker processes. This option could be beneficial if the Python operation is computational heavy (default=False).

  • cache (DatasetCache, optional) – Use tensor caching service to speed up dataset processing. (default=None which means no cache is used).

  • callbacks (Union[DSCallback, list[DSCallback]], optional) – List of Dataset callbacks to be called (default=None).

Returns

MapDataset, dataset after mapping operation.

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.vision.c_transforms as c_transforms
>>>
>>> # data is an instance of Dataset which has 2 columns, "image" and "label".
>>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2".
>>> # Each column is a 2D array of integers.
>>>
>>> # Set the global configuration value for num_parallel_workers to be 2.
>>> # Operations which use this configuration value will use 2 worker threads,
>>> # unless otherwise specified in the operator's constructor.
>>> # set_num_parallel_workers can be called again later if a different
>>> # global configuration value for the number of worker threads is desired.
>>> ds.config.set_num_parallel_workers(2)
>>>
>>> # Define two operations, where each operation accepts 1 input column and outputs 1 column.
>>> decode_op = c_transforms.Decode(rgb_format=True)
>>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
>>>
>>> # 1) Simple map example
>>>
>>> operations = [decode_op]
>>> input_columns = ["image"]
>>>
>>> # Apply decode_op on column "image". This column will be replaced by the outputted
>>> # column of decode_op. Since column_order is not provided, both columns "image"
>>> # and "label" will be propagated to the child node in their original order.
>>> ds_decoded = data.map(operations, input_columns)
>>>
>>> # Rename column "image" to "decoded_image".
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(operations, input_columns, output_columns)
>>>
>>> # Specify the order of the columns.
>>> column_order = ["label", "image"]
>>> ds_decoded = data.map(operations, input_columns, None, column_order)
>>>
>>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
>>> column_order = ["label", "decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(operations, input_columns, output_columns, column_order)
>>>
>>> # Rename column "image" to "decoded_image" and keep only this column.
>>> column_order = ["decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(operations, input_columns, output_columns, column_order)
>>>
>>> # A simple example using pyfunc: Renaming columns and specifying column order
>>> # work in the same way as the previous examples.
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + 1)]
>>> ds_mapped = ds_pyfunc.map(operations, input_columns)
>>>
>>> # 2) Map example with more than one operation
>>>
>>> # If this list of operations is used with map, decode_op will be applied
>>> # first, then random_jitter_op will be applied.
>>> operations = [decode_op, random_jitter_op]
>>>
>>> input_columns = ["image"]
>>>
>>> # Create a dataset where the images are decoded, then randomly color jittered.
>>> # decode_op takes column "image" as input and outputs one column. The column
>>> # outputted by decode_op is passed as input to random_jitter_op.
>>> # random_jitter_op will output one column. Column "image" will be replaced by
>>> # the column outputted by random_jitter_op (the very last operation). All other
>>> # columns are unchanged. Since column_order is not specified, the order of the
>>> # columns will remain the same.
>>> ds_mapped = data.map(operations, input_columns)
>>>
>>> # Create a dataset that is identical to ds_mapped, except the column "image"
>>> # that is outputted by random_jitter_op is renamed to "image_transformed".
>>> # Specifying column order works in the same way as examples in 1).
>>> output_columns = ["image_transformed"]
>>> ds_mapped_and_renamed = data.map(operations, input_columns, output_columns)
>>>
>>> # Multiple operations using pyfunc: Renaming columns and specifying column order
>>> # work in the same way as examples in 1).
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + x), (lambda x: x - 1)]
>>> output_columns = ["col0_mapped"]
>>> ds_mapped = ds_pyfunc.map(operations, input_columns, output_columns)
>>>
>>> # 3) Example where number of input columns is not equal to number of output columns
>>>
>>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
>>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
>>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
>>> #
>>> # Note: The number of output columns of operation[i] must equal the number of
>>> # input columns of operation[i+1]. Otherwise, this map call will also result
>>> # in an error.
>>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
>>>               (lambda x, y, z: x * y * z),
>>>               (lambda x: (x % 2, x % 3, x % 5, x % 7))]
>>>
>>> # Note: Since the number of input columns is not the same as the number of
>>> # output columns, the output_columns and column_order parameters must be
>>> # specified. Otherwise, this map call will also result in an error.
>>> input_columns = ["col2", "col0"]
>>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
>>>
>>> # Propagate all columns to the child node in this order:
>>> column_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
>>> ds_mapped = ds_pyfunc.map(operations, input_columns, output_columns, column_order)
>>>
>>> # Propagate some columns to the child node in this order:
>>> column_order = ["mod7", "mod3", "col1"]
>>> ds_mapped = ds_pyfunc.map(operations, input_columns, output_columns, column_order)
num_classes()

Get the number of classes in a dataset.

Returns

int, number of classes.

output_shapes()

Get the shapes of output data.

Returns

list, list of shapes of each column.

output_types()

Get the types of output data.

Returns

list, list of data types.
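
A minimal sketch of inspecting a pipeline's columns, shapes, and types (using a small in-memory NumpySlicesDataset as an example source):

>>> import mindspore.dataset as ds
>>> import numpy as np
>>>
>>> data = ds.NumpySlicesDataset({"col_1": np.zeros((4, 2), dtype=np.float32)})
>>> print(data.get_col_names())  # e.g. ['col_1']
>>> print(data.output_shapes())  # e.g. [[2]]
>>> print(data.output_types())   # e.g. [dtype('float32')]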

parse_tree()

Internal method to parse the API tree into an IR tree.

Returns

DatasetNode, the root node of the IR tree.

project(columns)

Project certain columns in input dataset.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

Parameters

columns (Union[str, list[str]]) – List of names of the columns to project.

Returns

ProjectDataset, dataset projected.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object
>>> columns_to_project = ["column3", "column1", "column2"]
>>>
>>> # Create a dataset that consists of column3, column1, column2
>>> # in that order, regardless of the original order of columns.
>>> data = data.project(columns=columns_to_project)
rename(input_columns, output_columns)

Rename the columns in input datasets.

Parameters
  • input_columns (Union[str, list[str]]) – List of names of the input columns.

  • output_columns (Union[str, list[str]]) – List of names of the output columns.

Returns

RenameDataset, dataset renamed.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> input_columns = ["input_col1", "input_col2", "input_col3"]
>>> output_columns = ["output_col1", "output_col2", "output_col3"]
>>>
>>> # Create a dataset where input_col1 is renamed to output_col1, and
>>> # input_col2 is renamed to output_col2, and input_col3 is renamed
>>> # to output_col3.
>>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
repeat(count=None)

Repeat this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order in which repeat and batch are applied affects the number of batches. It is recommended that the repeat operation be used after the batch operation. If dataset_sink_mode is False, the repeat operation is invalid. If dataset_sink_mode is True, the repeat count must be equal to the number of training epochs. Otherwise, errors could occur since the amount of data is not the amount that training requires.

Parameters

count (int) – Number of times the dataset is repeated (default=None).

Returns

RepeatDataset, dataset repeated.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>>
>>> # Create a dataset where the dataset is repeated for 50 epochs
>>> repeated = data.repeat(50)
>>>
>>> # Create a dataset where each epoch is shuffled individually
>>> shuffled_and_repeated = data.shuffle(10)
>>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
>>>
>>> # Create a dataset where the dataset is first repeated for
>>> # 50 epochs before shuffling. The shuffle operator will treat
>>> # the entire 50 epochs as one big dataset.
>>> repeat_and_shuffle = data.repeat(50)
>>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
reset()

Reset the dataset for next epoch.

save(file_name, num_files=1, file_type='mindrecord')

Save the dynamic data processed by the dataset pipeline in a common dataset format. Currently only the ‘mindrecord’ format is supported.

Implicit type casting exists when saving data as ‘mindrecord’. The table below shows how to do type casting.

Implicit Type Casting when Saving as ‘mindrecord’

Type in ‘dataset’   Type in ‘mindrecord’   Details
bool                None                   Not supported
int8                int32
uint8               bytes (1D uint8)       Drop dimension
int16               int32
uint16              int32
int32               int32
uint32              int64
int64               int64
uint64              None                   Not supported
float16             float32
float32             float32
float64             float64
string              string                 Multi-dimensional string not supported

Note

  1. To save the samples in order, set dataset’s shuffle to False and num_files to 1.

  2. Before calling this function, do not use the batch operator, the repeat operator, or data augmentation operators with a random attribute in the map operator.

  3. Numeric tensors with a dynamic shape cannot be saved.

  4. MindRecord does not support DE_UINT64, multi-dimensional DE_UINT8 (the dimension is dropped), or multi-dimensional DE_STRING.

Parameters
  • file_name (str) – Path to dataset file.

  • num_files (int, optional) – Number of dataset files (default=1).

  • file_type (str, optional) – Dataset format (default=’mindrecord’).
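
A minimal sketch (the output path is a placeholder; the pipeline is assumed to follow the notes above, e.g. no batch or repeat before saving):

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> data.save("/path/to/output.mindrecord")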

shuffle(buffer_size)

Randomly shuffles the rows of this dataset using the following algorithm:

  1. Make a shuffle buffer that contains the first buffer_size rows.

  2. Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

  3. Get the next row (if any) from the parent node and put it in the shuffle buffer.

  4. Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

Parameters

buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

Returns

ShuffleDataset, dataset shuffled.

Raises

RuntimeError – If sync operations exist in the pipeline before shuffle.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> # Optionally set the seed for the first epoch
>>> ds.config.set_seed(58)
>>>
>>> # Create a shuffled dataset using a shuffle buffer of size 4
>>> data = data.shuffle(4)
skip(count)

Skip the first N elements of this dataset.

Parameters

count (int) – Number of elements in the dataset to be skipped.

Returns

SkipDataset, dataset skipped.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> # Create a dataset which skips first 3 elements from data
>>> data = data.skip(3)
split(sizes, randomize=True)

Split the dataset into smaller, non-overlapping datasets.

Parameters
  • sizes (Union[list[int], list[float]]) –

    If a list of integers [s1, s2, …, sn] is provided, the dataset will be split into n datasets of size s1, size s2, …, size sn respectively. If the sum of all sizes does not equal the original dataset size, an error will occur. If a list of floats [f1, f2, …, fn] is provided, all floats must be between 0 and 1 and must sum to 1, otherwise an error will occur. The dataset will be split into n Datasets of size round(f1*K), round(f2*K), …, round(fn*K) where K is the size of the original dataset. If after rounding:

    • Any size equals 0, an error will occur.

    • The sum of split sizes < K, the difference will be added to the first split.

    • The sum of split sizes > K, the difference will be removed from the first large enough split such that it will have at least 1 row after removing the difference.

  • randomize (bool, optional) – Determines whether or not to split the data randomly (default=True). If True, the data will be randomly split. Otherwise, each split will be created with consecutive rows from the dataset.

Note

  1. There is an optimized split function, which will be called automatically when the dataset that calls this function is a MappableDataset.

  2. Dataset should not be sharded if split is going to be called. Instead, create a DistributedSampler and specify a split to shard after splitting. If dataset is sharded after a split, it is strongly recommended to set the same seed in each instance of execution, otherwise each shard may not be part of the same split (see Examples).

  3. It is strongly recommended to not shuffle the dataset, but use randomize=True instead. Shuffling the dataset may not be deterministic, which means the data in each split will be different in each epoch. Furthermore, if sharding occurs after split, each shard may not be part of the same split.

Raises
  • RuntimeError – If get_dataset_size returns None or is not supported for this dataset.

  • RuntimeError – If sizes is list of integers and sum of all elements in sizes does not equal the dataset size.

  • RuntimeError – If sizes is list of float and there is a split with size 0 after calculations.

  • RuntimeError – If the dataset is sharded prior to calling split.

  • ValueError – If sizes is list of float and not all floats are between 0 and 1, or if the floats don’t sum to 1.

Returns

tuple(Dataset), a tuple of datasets that have been split.

Examples

>>> import mindspore.dataset as ds
>>>
>>> dataset_dir = "/path/to/imagefolder_directory"
>>>
>>> # Since many datasets have shuffle on by default, set shuffle to False if split will be called!
>>> data = ds.ImageFolderDataset(dataset_dir, shuffle=False)
>>>
>>> # Set the seed, and tell split to use this seed when randomizing.
>>> # This is needed because sharding will be done later
>>> ds.config.set_seed(58)
>>> train, test = data.split([0.9, 0.1])
>>>
>>> # To shard the train dataset, use a DistributedSampler
>>> train_sampler = ds.DistributedSampler(10, 2)
>>> train.use_sampler(train_sampler)
sync_update(condition_name, num_batch=None, data=None)

Release a blocking condition and trigger callback with given data.

Parameters
  • condition_name (str) – The condition name that is used to toggle sending next row.

  • num_batch (Union[int, None]) – The number of batches (rows) that are released. When num_batch is None, it will default to the number specified by the sync_wait operator (default=None).

  • data (Any) – The data passed to the callback, user defined (default=None).
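
A minimal sketch of the full sync pattern (the callback and the update payload are hypothetical; sync_wait blocks after num_batch batches until sync_update releases it and forwards data to the callback):

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> def my_callback(update_data):
>>>     print("update received:", update_data)
>>> data = data.sync_wait("update_cond", num_batch=1, callback=my_callback)
>>> data = data.batch(batch_size=2)
>>> for batch_data in data.create_dict_iterator():
>>>     data.sync_update("update_cond", data={"step": 1})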

sync_wait(condition_name, num_batch=1, callback=None)

Add a blocking condition to the input Dataset.

Parameters
  • condition_name (str) – The condition name that is used to toggle sending next row.

  • num_batch (int) – The number of batches without blocking at the start of each epoch (default=1).

  • callback (function) – The callback function that will be invoked when sync_update is called (default=None).

Returns

SyncWaitDataset, dataset added a blocking condition.

Raises

RuntimeError – If condition name already exists.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> data = data.sync_wait("callback1")
>>> data = data.batch(batch_size=2)
>>> for batch_data in data.create_dict_iterator():
>>>     data.sync_update("callback1")
take(count=-1)

Take at most the given number of elements from the dataset.

Note

  1. If count is greater than the number of elements in the dataset or equal to -1, all the elements in dataset will be taken.

  2. The order of using take and batch matters. If take comes before the batch operation, the given number of rows is taken; otherwise, the given number of batches is taken.

Parameters

count (int, optional) – Number of elements to be taken from the dataset (default=-1).

Returns

TakeDataset, dataset taken.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> # Create a dataset that contains at most 50 elements.
>>> data = data.take(50)
to_device(send_epoch_end=True, create_data_info_queue=False)

Transfer data through CPU, GPU or Ascend devices.

Parameters
  • send_epoch_end (bool, optional) – Whether to send end of sequence to device or not (default=True).

  • create_data_info_queue (bool, optional) – Whether to create a queue which stores the types and shapes of data or not (default=False).

Note

If the device is Ascend, features of the data will be transferred one by one. Each transmission is limited to 256 MB of data.

Returns

TransferDataset, dataset for transferring.

Raises

RuntimeError – If a distribution file path is given but it fails to be read.
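
A minimal sketch, analogous to device_que (calling send() on the returned TransferDataset to start the transfer in sink mode is an assumption based on typical usage):

>>> import mindspore.dataset as ds
>>>
>>> # data is an instance of Dataset object.
>>> transfer = data.to_device()
>>> transfer.send()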

use_sampler(new_sampler)

Make the current dataset use the provided new_sampler.

Parameters

new_sampler (Sampler) – The sampler to use for the current dataset.

Examples

>>> import mindspore.dataset as ds
>>>
>>> dataset_dir = "/path/to/imagefolder_directory"
>>> # Note: A SequentialSampler is created by default
>>> data = ds.ImageFolderDataset(dataset_dir)
>>>
>>> # Use a DistributedSampler instead of the SequentialSampler
>>> new_sampler = ds.DistributedSampler(10, 2)
>>> data.use_sampler(new_sampler)
zip(datasets)

Zip the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

Parameters

datasets (Union[tuple, class Dataset]) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

Returns

ZipDataset, dataset zipped.

Examples

>>> import mindspore.dataset as ds
>>>
>>> # ds1 and ds2 are instances of Dataset object
>>> # Create a dataset which is the combination of ds1 and ds2
>>> data = ds1.zip(ds2)