mindspore.dataset.audio.DetectPitchFrequency

class mindspore.dataset.audio.DetectPitchFrequency(sample_rate, frame_time=0.01, win_length=30, freq_low=85, freq_high=3400)

Detect pitch frequency.

It is implemented using the normalized cross-correlation function and median smoothing.
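
For intuition, the following is a rough, self-contained NumPy sketch of the general technique named above: for each frame, pick the lag that maximizes the normalized cross-correlation, convert that lag to a frequency, then median-smooth the per-frame estimates. The helper name nccf_pitch_sketch, the non-overlapping frame split, and the plain sliding-window median are illustrative assumptions, not the operator's actual implementation.

    import numpy as np

    def nccf_pitch_sketch(waveform, sample_rate, frame_time=0.01,
                          win_length=30, freq_low=85, freq_high=3400):
        # Conceptual sketch only, not the MindSpore implementation.
        # Split the 1-D signal into non-overlapping frames of frame_time seconds.
        frame_size = int(sample_rate * frame_time)
        lag_min = max(1, int(sample_rate / freq_high))  # shortest period tested
        lag_max = int(sample_rate / freq_low)           # longest period tested
        pitches = []
        for start in range(0, len(waveform) - frame_size + 1, frame_size):
            frame = waveform[start:start + frame_size]
            best_lag, best_score = lag_min, -np.inf
            for lag in range(lag_min, min(lag_max, frame_size - 1) + 1):
                a, b = frame[:frame_size - lag], frame[lag:]
                # Normalized cross-correlation of the frame with its lagged copy.
                score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-10)
                if score > best_score:
                    best_lag, best_score = lag, score
            pitches.append(sample_rate / best_lag)      # period in samples -> Hz
        # Median smoothing of the per-frame estimates over win_length frames.
        pitches = np.asarray(pitches)
        half = win_length // 2
        return np.array([np.median(pitches[max(0, i - half):i + half + 1])
                         for i in range(len(pitches))])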

Parameters
  • sample_rate (int) – Sampling rate of the waveform, e.g. 44100 (Hz). The value cannot be zero.

  • frame_time (float, optional) – Duration of a frame in seconds. The value must be greater than zero (default=0.01).

  • win_length (int, optional) – Window length for median smoothing, in number of frames. The value must be greater than zero (default=30).

  • freq_low (int, optional) – Lowest frequency that can be detected, in Hz. The value must be greater than zero (default=85).

  • freq_high (int, optional) – Highest frequency that can be detected, in Hz. The value must be greater than zero (default=3400).

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> waveform = np.array([[0.716064e-03, 5.347656e-03, 6.246826e-03, 2.089477e-02, 7.138305e-02],
...                      [4.156616e-02, 1.394653e-02, 3.550292e-02, 0.614379e-02, 3.840209e-02]])
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
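>>> # Positional arguments: sample_rate=30, frame_time=0.1, win_length=3, freq_low=5, freq_high=25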
>>> transforms = [audio.DetectPitchFrequency(30, 0.1, 3, 5, 25)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
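
The transform can also be tried outside a dataset pipeline. The snippet below is a hedged sketch: it assumes eager execution (calling the transform directly on a NumPy array) is supported by your MindSpore version, and the random waveform and 44100 Hz sample rate are illustrative values only.

>>> # Hedged eager-mode sketch: call the transform directly on a NumPy array.
>>> eager_waveform = np.random.random([1, 4410])
>>> pitch = audio.DetectPitchFrequency(44100)(eager_waveform)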