.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "tutorials/mvdr_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_tutorials_mvdr_tutorial.py>` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_tutorials_mvdr_tutorial.py:


Speech Enhancement with MVDR Beamforming
========================================

**Author**: `Zhaoheng Ni `__

.. GENERATED FROM PYTHON SOURCE LINES 11-31

1. Overview
-----------

This is a tutorial on applying Minimum Variance Distortionless Response (MVDR)
beamforming to estimate enhanced speech with TorchAudio.

Steps:

- Generate an ideal ratio mask (IRM) by dividing the clean/noise magnitude by
  the mixture magnitude.
- Estimate power spectral density (PSD) matrices using
  :py:func:`torchaudio.transforms.PSD`.
- Estimate enhanced speech using MVDR modules
  (:py:func:`torchaudio.transforms.SoudenMVDR` and
  :py:func:`torchaudio.transforms.RTFMVDR`).
- Benchmark the two methods (:py:func:`torchaudio.functional.rtf_evd` and
  :py:func:`torchaudio.functional.rtf_power`) for computing the relative
  transfer function (RTF) matrix of the reference microphone.

.. GENERATED FROM PYTHON SOURCE LINES 31-44

.. code-block:: default


    import torch
    import torchaudio
    import torchaudio.functional as F

    print(torch.__version__)
    print(torchaudio.__version__)

    import matplotlib.pyplot as plt
    import mir_eval
    from IPython.display import Audio

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    2.4.0.dev20240328
    2.2.0.dev20240329

.. GENERATED FROM PYTHON SOURCE LINES 45-48

2. Preparation
--------------

.. GENERATED FROM PYTHON SOURCE LINES 50-58

2.1. Import the packages
~~~~~~~~~~~~~~~~~~~~~~~~

First, we install and import the necessary packages.

The ``mir_eval``, ``pesq``, and ``pystoi`` packages are required for
evaluating the speech enhancement performance.

.. GENERATED FROM PYTHON SOURCE LINES 58-68

.. code-block:: default


    # When running this example in notebook, install the following packages.
    # !pip3 install mir_eval
    # !pip3 install pesq
    # !pip3 install pystoi

    from pesq import pesq
    from pystoi import stoi
    from torchaudio.utils import download_asset

.. GENERATED FROM PYTHON SOURCE LINES 69-89

2.2. Download audio data
~~~~~~~~~~~~~~~~~~~~~~~~

The multi-channel audio example is selected from the
`ConferencingSpeech `__ dataset.

The original filename is

   ``SSB07200001\#noise-sound-bible-0038\#7.86_6.16_3.00_3.14_4.84_134.5285_191.7899_0.4735\#15217\#25.16333303751458\#0.2101221178590021.wav``

which was generated with:

- ``SSB07200001.wav`` from `AISHELL-3 `__ (Apache License v.2.0)
- ``noise-sound-bible-0038.wav`` from `MUSAN `__ (Attribution 4.0 International — CC BY 4.0)

.. GENERATED FROM PYTHON SOURCE LINES 89-95

.. code-block:: default


    SAMPLE_RATE = 16000
    SAMPLE_CLEAN = download_asset("tutorial-assets/mvdr/clean_speech.wav")
    SAMPLE_NOISE = download_asset("tutorial-assets/mvdr/noise.wav")
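
The parts of the tutorial between downloading the data and the visualizations
below (loading the waveforms, building the noisy mixture, and computing the
multi-channel STFTs) are not included in this excerpt. A minimal sketch of how
the tensors used later (``waveform_mix``, ``stft_clean``, ``stft_noise``,
``stft_mix``) and the ``istft`` helper could be produced is shown here; the
FFT size and hop length are assumptions rather than values taken from the
original.

.. code-block:: default


    # Sketch only: these cells are not part of this excerpt, and N_FFT/N_HOP
    # are assumed values.
    waveform_clean, sr = torchaudio.load(SAMPLE_CLEAN)   # shape: (channel, time)
    waveform_noise, sr2 = torchaudio.load(SAMPLE_NOISE)
    assert sr == sr2 == SAMPLE_RATE

    # The noisy mixture is simply the sum of the clean speech and the noise.
    waveform_mix = waveform_clean + waveform_noise

    N_FFT = 1024
    N_HOP = 256
    stft = torchaudio.transforms.Spectrogram(
        n_fft=N_FFT,
        hop_length=N_HOP,
        power=None,  # return complex-valued STFT coefficients
    )
    istft = torchaudio.transforms.InverseSpectrogram(n_fft=N_FFT, hop_length=N_HOP)

    # Multi-channel complex-valued STFTs of shape (channel, freq, time)
    stft_mix = stft(waveform_mix)
    stft_clean = stft(waveform_clean)
    stft_noise = stft(waveform_noise)
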

.. GENERATED FROM PYTHON SOURCE LINES 230-233

3.2.2. Visualize clean speech
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 233-238

.. code-block:: default


    plot_spectrogram(stft_clean[0], "Spectrogram of Clean Speech (dB)")
    Audio(waveform_clean[0], rate=SAMPLE_RATE)

.. image-sg:: /tutorials/images/sphx_glr_mvdr_tutorial_002.png
   :alt: Spectrogram of Clean Speech (dB)
   :srcset: /tutorials/images/sphx_glr_mvdr_tutorial_002.png
   :class: sphx-glr-single-img


.. GENERATED FROM PYTHON SOURCE LINES 239-242

3.2.3. Visualize noise
^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 242-247

.. code-block:: default


    plot_spectrogram(stft_noise[0], "Spectrogram of Noise (dB)")
    Audio(waveform_noise[0], rate=SAMPLE_RATE)

.. image-sg:: /tutorials/images/sphx_glr_mvdr_tutorial_003.png
   :alt: Spectrogram of Noise (dB)
   :srcset: /tutorials/images/sphx_glr_mvdr_tutorial_003.png
   :class: sphx-glr-single-img
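
The ``plot_spectrogram`` and ``plot_mask`` helpers used for the visualizations
above and below are defined in an earlier part of the full tutorial that is
not included in this excerpt. A minimal sketch of comparable helpers is given
here; the color map, dB range, and axis handling are assumptions.

.. code-block:: default


    def plot_spectrogram(stft, title="Spectrogram"):
        # Convert complex STFT coefficients to a dB-scaled magnitude image.
        spectrogram = 20 * torch.log10(stft.abs() + 1e-8).numpy()
        figure, axis = plt.subplots(1, 1)
        img = axis.imshow(spectrogram, cmap="viridis", vmin=-100, vmax=0, origin="lower", aspect="auto")
        axis.set_title(title)
        plt.colorbar(img, ax=axis)
        plt.show()


    def plot_mask(mask, title="Mask"):
        # Time-frequency masks are real-valued in [0, 1], so no dB conversion.
        figure, axis = plt.subplots(1, 1)
        img = axis.imshow(mask.numpy(), cmap="viridis", origin="lower", aspect="auto")
        axis.set_title(title)
        plt.colorbar(img, ax=axis)
        plt.show()
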


.. GENERATED FROM PYTHON SOURCE LINES 248-256

3.3. Define the reference microphone
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We choose the first microphone in the array as the reference channel for
demonstration. The selection of the reference channel may depend on the design
of the microphone array.

You can also apply an end-to-end neural network that estimates both the
reference channel and the PSD matrices, then obtains the enhanced STFT
coefficients with the MVDR module.

.. GENERATED FROM PYTHON SOURCE LINES 256-260

.. code-block:: default


    REFERENCE_CHANNEL = 0

.. GENERATED FROM PYTHON SOURCE LINES 261-264

3.4. Compute IRMs
~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 264-277

.. code-block:: default


    def get_irms(stft_clean, stft_noise):
        mag_clean = stft_clean.abs() ** 2
        mag_noise = stft_noise.abs() ** 2
        irm_speech = mag_clean / (mag_clean + mag_noise)
        irm_noise = mag_noise / (mag_clean + mag_noise)
        return irm_speech[REFERENCE_CHANNEL], irm_noise[REFERENCE_CHANNEL]


    irm_speech, irm_noise = get_irms(stft_clean, stft_noise)

.. GENERATED FROM PYTHON SOURCE LINES 278-281

3.4.1. Visualize IRM of target speech
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 281-285

.. code-block:: default


    plot_mask(irm_speech, "IRM of the Target Speech")

.. image-sg:: /tutorials/images/sphx_glr_mvdr_tutorial_004.png
   :alt: IRM of the Target Speech
   :srcset: /tutorials/images/sphx_glr_mvdr_tutorial_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 286-289

3.4.2. Visualize IRM of noise
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 289-292

.. code-block:: default


    plot_mask(irm_noise, "IRM of the Noise")

.. image-sg:: /tutorials/images/sphx_glr_mvdr_tutorial_005.png
   :alt: IRM of the Noise
   :srcset: /tutorials/images/sphx_glr_mvdr_tutorial_005.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 293-301

4. Compute PSD matrices
-----------------------

:py:func:`torchaudio.transforms.PSD` computes the time-invariant PSD matrix
given the multi-channel complex-valued STFT coefficients of the mixture speech
and the time-frequency mask.

The shape of the PSD matrix is `(..., freq, channel, channel)`.

.. GENERATED FROM PYTHON SOURCE LINES 301-308

.. code-block:: default


    psd_transform = torchaudio.transforms.PSD()

    psd_speech = psd_transform(stft_mix, irm_speech)
    psd_noise = psd_transform(stft_mix, irm_noise)

.. GENERATED FROM PYTHON SOURCE LINES 309-312

5. Beamforming using SoudenMVDR
-------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 315-325

5.1. Apply beamforming
~~~~~~~~~~~~~~~~~~~~~~

:py:func:`torchaudio.transforms.SoudenMVDR` takes the multi-channel
complex-valued STFT coefficients of the mixture speech, the PSD matrices of
the target speech and noise, and the reference channel as inputs.

The output is the single-channel complex-valued STFT coefficients of the
enhanced speech. We can then obtain the enhanced waveform by passing this
output to the :py:func:`torchaudio.transforms.InverseSpectrogram` module.

.. GENERATED FROM PYTHON SOURCE LINES 325-331

.. code-block:: default


    mvdr_transform = torchaudio.transforms.SoudenMVDR()
    stft_souden = mvdr_transform(stft_mix, psd_speech, psd_noise, reference_channel=REFERENCE_CHANNEL)
    waveform_souden = istft(stft_souden, length=waveform_mix.shape[-1])
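
For reference, the Souden MVDR solution computes, for each frequency bin
:math:`f`, beamforming weights of roughly the following form (up to numerical
details such as diagonal loading), where :math:`\boldsymbol{\Phi}_{SS}` and
:math:`\boldsymbol{\Phi}_{NN}` are the speech and noise PSD matrices computed
above and :math:`\mathbf{u}` is a one-hot vector selecting the reference
channel:

.. math::

   \mathbf{w}(f) =
   \frac{\boldsymbol{\Phi}_{NN}^{-1}(f)\,\boldsymbol{\Phi}_{SS}(f)}
        {\operatorname{Trace}\left(\boldsymbol{\Phi}_{NN}^{-1}(f)\,\boldsymbol{\Phi}_{SS}(f)\right)}\,
   \mathbf{u}
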
.. GENERATED FROM PYTHON SOURCE LINES 332-335

5.2. Result for SoudenMVDR
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 335-342

.. code-block:: default


    plot_spectrogram(stft_souden, "Enhanced Spectrogram by SoudenMVDR (dB)")
    waveform_souden = waveform_souden.reshape(1, -1)
    evaluate(waveform_souden, waveform_clean[0:1])
    Audio(waveform_souden, rate=SAMPLE_RATE)

.. image-sg:: /tutorials/images/sphx_glr_mvdr_tutorial_006.png
   :alt: Enhanced Spectrogram by SoudenMVDR (dB)
   :srcset: /tutorials/images/sphx_glr_mvdr_tutorial_006.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    SDR score: 17.946234447508765
    Si-SNR score: 12.215202612266587
    PESQ score: 3.3447437286376953
    STOI score: 0.8712864479161743
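
The ``evaluate`` helper that prints the SDR, Si-SNR, PESQ, and STOI scores is
defined in an earlier part of the full tutorial that is not included in this
excerpt. A minimal sketch of such a helper, built from the packages imported
in section 2.1, could look like the following; the exact score definitions
used by the original may differ slightly.

.. code-block:: default


    def si_snr(estimate, reference, epsilon=1e-8):
        # Scale-invariant signal-to-noise ratio on zero-mean signals.
        estimate = estimate - estimate.mean()
        reference = reference - reference.mean()
        scale = (estimate * reference).mean() / (reference.pow(2).mean() + epsilon)
        reference = scale * reference
        error = estimate - reference
        ratio = (reference.pow(2).mean() + epsilon) / (error.pow(2).mean() + epsilon)
        return (10 * torch.log10(ratio)).item()


    def evaluate(estimate, reference):
        # ``estimate`` and ``reference`` are single-channel waveforms of shape (1, time).
        est_np = estimate.detach().numpy()
        ref_np = reference.detach().numpy()
        sdr, _, _, _ = mir_eval.separation.bss_eval_sources(ref_np, est_np, compute_permutation=False)
        print(f"SDR score: {sdr[0]}")
        print(f"Si-SNR score: {si_snr(estimate[0], reference[0])}")
        print(f"PESQ score: {pesq(SAMPLE_RATE, ref_np[0], est_np[0], mode='wb')}")
        print(f"STOI score: {stoi(ref_np[0], est_np[0], SAMPLE_RATE, extended=False)}")
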


.. GENERATED FROM PYTHON SOURCE LINES 343-346

6. Beamforming using RTFMVDR
----------------------------

.. GENERATED FROM PYTHON SOURCE LINES 349-361

6.1. Compute RTF
~~~~~~~~~~~~~~~~

TorchAudio offers two methods for computing the RTF matrix of the target
speech:

- :py:func:`torchaudio.functional.rtf_evd`, which applies eigenvalue
  decomposition to the PSD matrix of the target speech to get the RTF matrix.

- :py:func:`torchaudio.functional.rtf_power`, which applies the power
  iteration method. You can specify the number of iterations with argument
  ``n_iter``.

.. GENERATED FROM PYTHON SOURCE LINES 361-366

.. code-block:: default


    rtf_evd = F.rtf_evd(psd_speech)
    rtf_power = F.rtf_power(psd_speech, psd_noise, reference_channel=REFERENCE_CHANNEL)

.. GENERATED FROM PYTHON SOURCE LINES 367-377

6.2. Apply beamforming
~~~~~~~~~~~~~~~~~~~~~~

:py:func:`torchaudio.transforms.RTFMVDR` takes the multi-channel
complex-valued STFT coefficients of the mixture speech, the RTF matrix of the
target speech, the PSD matrix of noise, and the reference channel as inputs.

The output is the single-channel complex-valued STFT coefficients of the
enhanced speech. We can then obtain the enhanced waveform by passing this
output to the :py:func:`torchaudio.transforms.InverseSpectrogram` module.

.. GENERATED FROM PYTHON SOURCE LINES 377-389

.. code-block:: default


    mvdr_transform = torchaudio.transforms.RTFMVDR()

    # compute the enhanced speech based on F.rtf_evd
    stft_rtf_evd = mvdr_transform(stft_mix, rtf_evd, psd_noise, reference_channel=REFERENCE_CHANNEL)
    waveform_rtf_evd = istft(stft_rtf_evd, length=waveform_mix.shape[-1])

    # compute the enhanced speech based on F.rtf_power
    stft_rtf_power = mvdr_transform(stft_mix, rtf_power, psd_noise, reference_channel=REFERENCE_CHANNEL)
    waveform_rtf_power = istft(stft_rtf_power, length=waveform_mix.shape[-1])
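
For reference, given the RTF vector :math:`\mathbf{v}` of the target speech
and the noise PSD matrix :math:`\boldsymbol{\Phi}_{NN}`, the RTF-based MVDR
beamformer computes weights of roughly the following classical form for each
frequency bin (up to numerical details such as diagonal loading and the
normalization of the RTF vector with respect to the reference channel):

.. math::

   \mathbf{w}(f) =
   \frac{\boldsymbol{\Phi}_{NN}^{-1}(f)\,\mathbf{v}(f)}
        {\mathbf{v}^{\mathsf{H}}(f)\,\boldsymbol{\Phi}_{NN}^{-1}(f)\,\mathbf{v}(f)}
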


.. GENERATED FROM PYTHON SOURCE LINES 390-393

6.3. Result for RTFMVDR with `rtf_evd`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 393-400

.. code-block:: default


    plot_spectrogram(stft_rtf_evd, "Enhanced Spectrogram by RTFMVDR and F.rtf_evd (dB)")
    waveform_rtf_evd = waveform_rtf_evd.reshape(1, -1)
    evaluate(waveform_rtf_evd, waveform_clean[0:1])
    Audio(waveform_rtf_evd, rate=SAMPLE_RATE)

.. image-sg:: /tutorials/images/sphx_glr_mvdr_tutorial_007.png
   :alt: Enhanced Spectrogram by RTFMVDR and F.rtf_evd (dB)
   :srcset: /tutorials/images/sphx_glr_mvdr_tutorial_007.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    SDR score: 11.880210635280273
    Si-SNR score: 10.714419996128061
    PESQ score: 3.083890914916992
    STOI score: 0.8261544910053075

.. GENERATED FROM PYTHON SOURCE LINES 401-404

6.4. Result for RTFMVDR with `rtf_power`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. GENERATED FROM PYTHON SOURCE LINES 404-409

.. code-block:: default


    plot_spectrogram(stft_rtf_power, "Enhanced Spectrogram by RTFMVDR and F.rtf_power (dB)")
    waveform_rtf_power = waveform_rtf_power.reshape(1, -1)
    evaluate(waveform_rtf_power, waveform_clean[0:1])
    Audio(waveform_rtf_power, rate=SAMPLE_RATE)

.. image-sg:: /tutorials/images/sphx_glr_mvdr_tutorial_008.png
   :alt: Enhanced Spectrogram by RTFMVDR and F.rtf_power (dB)
   :srcset: /tutorials/images/sphx_glr_mvdr_tutorial_008.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    SDR score: 15.424590276934103
    Si-SNR score: 13.035440892133451
    PESQ score: 3.487997531890869
    STOI score: 0.8798278461896808
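
In this example, ``rtf_power`` yields better scores than ``rtf_evd``. One
optional experiment, not part of the original script, is to vary the
``n_iter`` argument mentioned in section 6.1 and check how the scores change,
for example:

.. code-block:: default


    # Hypothetical follow-up: re-estimate the RTF with more power iterations.
    # torchaudio.functional.rtf_power defaults to a small number of iterations.
    rtf_power_10 = F.rtf_power(
        psd_speech, psd_noise, reference_channel=REFERENCE_CHANNEL, n_iter=10
    )
    stft_rtf_power_10 = mvdr_transform(
        stft_mix, rtf_power_10, psd_noise, reference_channel=REFERENCE_CHANNEL
    )
    waveform_rtf_power_10 = istft(stft_rtf_power_10, length=waveform_mix.shape[-1])
    evaluate(waveform_rtf_power_10.reshape(1, -1), waveform_clean[0:1])
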


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 1.856 seconds)


.. _sphx_glr_download_tutorials_mvdr_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: mvdr_tutorial.py <mvdr_tutorial.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: mvdr_tutorial.ipynb <mvdr_tutorial.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_