mindspore.dataset.audio.LFCC

class mindspore.dataset.audio.LFCC(sample_rate=16000, n_filter=128, n_lfcc=40, f_min=0.0, f_max=None, dct_type=2, norm=NormMode.ORTHO, log_lf=False, speckwargs=None)[source]

Create LFCC (Linear-Frequency Cepstral Coefficients) for a raw audio signal.

Note

The shape of the audio waveform to be processed needs to be <…, time>.

Parameters
  • sample_rate (int, optional) – Sample rate of audio signal. Default: 16000.

  • n_filter (int, optional) – Number of linear filters to apply. Default: 128.

  • n_lfcc (int, optional) – Number of LFCC coefficients to retain. Default: 40.

  • f_min (float, optional) – Minimum frequency. Default: 0.0.

  • f_max (float, optional) – Maximum frequency. Default: None, will be set to sample_rate // 2.

  • dct_type (int, optional) – Type of DCT to use. The value can only be 2. Default: 2.

  • norm (NormMode, optional) – Norm to use. Default: NormMode.ORTHO.

  • log_lf (bool, optional) – Whether to use log-LF spectrograms instead of dB-scaled ones. Default: False.

  • speckwargs (dict, optional) –

    Arguments for mindspore.dataset.audio.Spectrogram. Default: None, meaning a dict with the following default settings is used:

    • 'n_fft': 400

    • 'win_length': n_fft

    • 'hop_length': win_length // 2

    • 'pad': 0

    • 'window': WindowType.HANN

    • 'power': 2.0

    • 'normalized': False

    • 'center': True

    • 'pad_mode': BorderType.REFLECT

    • 'onesided': True

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> waveform = np.random.random([1, 1, 300])
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.LFCC()]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
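Conceptually, the default settings above describe a pipeline of power spectrogram, linear-frequency triangular filter bank, dB scaling, and an orthonormal DCT-II. The following NumPy sketch illustrates that pipeline; it is not the library implementation, and the function name lfcc_sketch and its simplified framing (no centering or padding) are assumptions for illustration only.

```python
import numpy as np

def lfcc_sketch(waveform, sample_rate=16000, n_fft=400, hop_length=200,
                n_filter=128, n_lfcc=40):
    # Power spectrogram from a Hann-windowed STFT (window=HANN, power=2.0).
    window = np.hanning(n_fft)
    frames = [waveform[i:i + n_fft] * window
              for i in range(0, len(waveform) - n_fft + 1, hop_length)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # (time, n_fft//2+1)

    # Triangular filters spaced linearly between f_min=0 and f_max=sr/2.
    fft_freqs = np.linspace(0, sample_rate / 2, n_fft // 2 + 1)
    pts = np.linspace(0, sample_rate / 2, n_filter + 2)
    fb = np.zeros((n_filter, len(fft_freqs)))
    for m in range(1, n_filter + 1):
        left, center, right = pts[m - 1], pts[m], pts[m + 1]
        up = (fft_freqs - left) / (center - left)
        down = (right - fft_freqs) / (right - center)
        fb[m - 1] = np.maximum(0.0, np.minimum(up, down))
    filtered = spec @ fb.T                                 # (time, n_filter)

    # dB scaling (the default when log_lf=False).
    log_spec = 10.0 * np.log10(np.maximum(filtered, 1e-10))

    # DCT-II with orthonormal scaling (dct_type=2, norm=NormMode.ORTHO).
    n = np.arange(n_filter)
    basis = np.cos(np.pi / n_filter * (n + 0.5)[:, None] * np.arange(n_lfcc))
    basis *= np.sqrt(2.0 / n_filter)
    basis[:, 0] = np.sqrt(1.0 / n_filter)
    return log_spec @ basis                                # (time, n_lfcc)

coeffs = lfcc_sketch(np.random.random(1600))
```

With a 1600-sample input, n_fft=400, and hop_length=200, the sketch produces 7 frames of 40 coefficients each.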