mindspore.dataset.audio.AmplitudeToDB

class mindspore.dataset.audio.AmplitudeToDB(stype=ScaleType.POWER, ref_value=1.0, amin=1e-10, top_db=80.0)

Convert the input audio waveform from the amplitude/power scale to the decibel scale.

Note

The shape of the audio waveform to be processed needs to be <…, freq, time>.

Parameters
  • stype (ScaleType, optional) – Scale of the input waveform, which can be ScaleType.POWER or ScaleType.MAGNITUDE. Default: ScaleType.POWER.

  • ref_value (float, optional) – Multiplier reference value used to generate db_multiplier, computed as \(\text{db\_multiplier} = \log_{10}(\max(\text{ref\_value}, \text{amin}))\). Default: 1.0.

  • amin (float, optional) – Lower bound to clamp the input waveform, which must be greater than zero. Default: 1e-10.

  • top_db (float, optional) – Minimum cut-off in decibels, which must be non-negative; output values that fall more than top_db below the peak are clipped to peak - top_db (see the sketch below). Default: 80.0.
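
For reference, below is a minimal NumPy sketch of the conversion described by these parameters, assuming the commonly used formulation: a multiplier of 10 for ScaleType.POWER and 20 for ScaleType.MAGNITUDE, db_multiplier as defined above, and a lower clip at peak - top_db. The helper name amplitude_to_db_sketch is illustrative only, is not part of the MindSpore API, and is not guaranteed to match the operator's exact implementation.

>>> import numpy as np
>>>
>>> def amplitude_to_db_sketch(waveform, stype="power", ref_value=1.0, amin=1e-10, top_db=80.0):
...     # Assumed multiplier: 10 for the power scale, 20 for the magnitude scale.
...     multiplier = 10.0 if stype == "power" else 20.0
...     db_multiplier = np.log10(max(ref_value, amin))
...     # Clamp at amin before the logarithm to avoid log(0).
...     x_db = multiplier * np.log10(np.clip(waveform, amin, None)) - multiplier * db_multiplier
...     # Apply the top_db cut-off: nothing falls more than top_db below the peak.
...     return np.maximum(x_db, x_db.max() - top_db)
...
>>> waveform = np.random.random([400 // 2 + 1, 30])
>>> print(amplitude_to_db_sketch(waveform).shape)
(201, 30)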

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 400 // 2 + 1, 30])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.AmplitudeToDB(stype=audio.ScaleType.POWER)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(201, 30) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([400 // 2 + 1, 30])  # 1 sample
>>> output = audio.AmplitudeToDB(stype=audio.ScaleType.POWER)(waveform)
>>> print(output.shape, output.dtype)
(201, 30) float64