mindspore.dataset.audio.MelScale

class mindspore.dataset.audio.MelScale(n_mels=128, sample_rate=16000, f_min=0.0, f_max=None, n_stft=201, norm=NormType.NONE, mel_type=MelType.HTK)

Convert a normal (linear-frequency) STFT to an STFT at the Mel scale.

Parameters
  • n_mels (int, optional) – Number of mel filterbanks. Default: 128.

  • sample_rate (int, optional) – Sample rate of audio signal. Default: 16000.

  • f_min (float, optional) – Minimum frequency. Default: 0.0.

  • f_max (float, optional) – Maximum frequency. Default: None, which will be set to sample_rate // 2.

  • n_stft (int, optional) – Number of bins in STFT. Default: 201.

  • norm (NormType, optional) – Type of norm, value should be NormType.SLANEY or NormType.NONE. If norm is NormType.SLANEY, divide the triangular mel weights by the width of the mel band. Default: NormType.NONE, no normalization.

  • mel_type (MelType, optional) – Type of mel scale to use, value should be MelType.SLANEY or MelType.HTK. Default: MelType.HTK.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 201, 3])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.MelScale(200, 1500, 0.7)]  # n_mels=200, sample_rate=1500, f_min=0.7
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(200, 3) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([201, 3])  # 1 sample
>>> output = audio.MelScale(200, 1500, 0.7)(waveform)
>>> print(output.shape, output.dtype)
(200, 3) float64
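>>>
>>> # Additional sketch (not part of the original example): eager mode with all
>>> # parameters passed by keyword, using Slaney normalization and the Slaney mel
>>> # scale. The parameter values here are illustrative only; the output shape
>>> # follows (n_mels, time).
>>> waveform = np.random.random([201, 10])
>>> mel_scale = audio.MelScale(n_mels=64, sample_rate=16000, f_min=0.0, f_max=8000.0,
...                            n_stft=201, norm=audio.NormType.SLANEY,
...                            mel_type=audio.MelType.SLANEY)
>>> output = mel_scale(waveform)
>>> print(output.shape)
(64, 10)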