MelSpectrogram

class torchaudio.transforms.MelSpectrogram(sample_rate: int = 16000, n_fft: int = 400, win_length: Optional[int] = None, hop_length: Optional[int] = None, f_min: float = 0.0, f_max: Optional[float] = None, pad: int = 0, n_mels: int = 128, window_fn: Callable[..., Tensor] = torch.hann_window, power: float = 2.0, normalized: bool = False, wkwargs: Optional[dict] = None, center: bool = True, pad_mode: str = 'reflect', onesided: Optional[bool] = None, norm: Optional[str] = None, mel_scale: str = 'htk')[source]

Create MelSpectrogram for a raw audio signal.

This feature supports the following devices: CPU, CUDA.
This API supports the following properties: Autograd, TorchScript.

This is a composition of torchaudio.transforms.Spectrogram() and torchaudio.transforms.MelScale().
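To make the composition concrete, here is a minimal sketch assuming the default parameters and a hypothetical one-second mono signal; the packaged transform wires the same two stages together for you:

>>> import torch
>>> import torchaudio
>>> waveform = torch.randn(1, 16000)  # hypothetical (channel, time) input
>>> spectrogram = torchaudio.transforms.Spectrogram(n_fft=400, power=2.0)
>>> mel_scale = torchaudio.transforms.MelScale(
...     n_mels=128, sample_rate=16000, n_stft=400 // 2 + 1)
>>> mel_specgram = mel_scale(spectrogram(waveform))  # (channel, n_mels, time)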

Parameters:
  • sample_rate (int, optional) – Sample rate of audio signal. (Default: 16000)

  • n_fft (int, optional) – Size of FFT, creates n_fft // 2 + 1 bins (see the shape sketch after this parameter list). (Default: 400)

  • win_length (int or None, optional) – Window size. (Default: n_fft)

  • hop_length (int or None, optional) – Length of hop between STFT windows. (Default: win_length // 2)

  • f_min (float, optional) – Minimum frequency. (Default: 0.)

  • f_max (float or None, optional) – Maximum frequency. (Default: None)

  • pad (int, optional) – Two sided padding of signal. (Default: 0)

  • n_mels (int, optional) – Number of mel filterbanks. (Default: 128)

  • window_fn (Callable[..., Tensor], optional) – A function to create a window tensor that is applied/multiplied to each frame/window. (Default: torch.hann_window)

  • power (float, optional) – Exponent for the magnitude spectrogram (must be > 0), e.g., 1 for magnitude, 2 for power. (Default: 2)

  • normalized (bool, optional) – Whether to normalize by magnitude after stft. (Default: False)

  • wkwargs (Dict[..., ...] or None, optional) – Arguments for window function. (Default: None)

  • center (bool, optional) – Whether to pad waveform on both sides so that the \(t\)-th frame is centered at time \(t \times \text{hop\_length}\). (Default: True)

  • pad_mode (str, optional) – Controls the padding method used when center is True. (Default: "reflect")

  • onesided – Deprecated and unused.

  • norm (str or None, optional) – If “slaney”, divide the triangular mel weights by the width of the mel band (area normalization). (Default: None)

  • mel_scale (str, optional) – Scale to use: htk or slaney. (Default: htk)
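The shape sketch referenced in the n_fft entry above, assuming a hypothetical one-second, 16 kHz mono waveform and the defaults listed (win_length = n_fft = 400, hop_length = win_length // 2 = 200); with center=True the number of frames is time // hop_length + 1:

>>> import torch
>>> from torchaudio import transforms
>>> waveform = torch.randn(1, 16000)  # hypothetical (channel, time) input
>>> transform = transforms.MelSpectrogram(sample_rate=16000, n_fft=400, hop_length=200, n_mels=128)
>>> transform(waveform).shape  # n_mels bins x (16000 // 200 + 1) frames
torch.Size([1, 128, 81])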

Example
>>> waveform, sample_rate = torchaudio.load("test.wav", normalize=True)
>>> transform = transforms.MelSpectrogram(sample_rate)
>>> mel_specgram = transform(waveform)  # (channel, n_mels, time)

See also

torchaudio.functional.melscale_fbanks() - The function used to generate the filter banks.
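As a hedged sketch, the same filter bank can be generated directly with that function; here f_max is assumed to resolve to the Nyquist frequency (8000 Hz for a 16 kHz signal), which is what f_max=None falls back to:

>>> import torchaudio
>>> fbanks = torchaudio.functional.melscale_fbanks(
...     n_freqs=400 // 2 + 1, f_min=0.0, f_max=8000.0,
...     n_mels=128, sample_rate=16000, norm=None, mel_scale="htk")
>>> fbanks.shape  # (n_freqs, n_mels)
torch.Size([201, 128])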

Tutorials using MelSpectrogram:
Audio Feature Extractions

forward(waveform: Tensor) → Tensor[source]
Parameters:
  waveform (Tensor) – Tensor of audio of dimension (…, time).

Returns:
  Mel frequency spectrogram of size (…, n_mels, time).

Return type:
  Tensor
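A brief sketch of this shape contract with a hypothetical batched input; leading dimensions are preserved:

>>> import torch
>>> from torchaudio import transforms
>>> batch = torch.randn(4, 2, 16000)  # hypothetical (batch, channel, time)
>>> transforms.MelSpectrogram(sample_rate=16000)(batch).shape
torch.Size([4, 2, 128, 81])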
