.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "tutorials/speech_recognition_pipeline_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_tutorials_speech_recognition_pipeline_tutorial.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_tutorials_speech_recognition_pipeline_tutorial.py:


Speech Recognition with Wav2Vec2
================================

**Author**: Moto Hira

This tutorial shows how to perform speech recognition using
pre-trained models from wav2vec 2.0
[`paper <https://arxiv.org/abs/2006.11477>`__].

.. GENERATED FROM PYTHON SOURCE LINES 15-31

Overview
--------

The process of speech recognition looks like the following.

1. Extract the acoustic features from the audio waveform
2. Estimate the class of the acoustic features frame-by-frame
3. Generate hypotheses from the sequence of class probabilities

Torchaudio provides easy access to the pre-trained weights and
associated information, such as the expected sample rate and class
labels. They are bundled together and available in the
:py:mod:`torchaudio.pipelines` module.

.. GENERATED FROM PYTHON SOURCE LINES 34-37

Preparation
-----------

.. GENERATED FROM PYTHON SOURCE LINES 37-49

.. code-block:: default

    import torch
    import torchaudio

    print(torch.__version__)
    print(torchaudio.__version__)

    torch.random.manual_seed(0)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(device)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    2.4.0.dev20240416
    2.2.0.dev20240418
    cuda

.. GENERATED FROM PYTHON SOURCE LINES 51-59

.. code-block:: default

    import IPython
    import matplotlib.pyplot as plt

    from torchaudio.utils import download_asset

    SPEECH_FILE = download_asset("tutorial-assets/Lab41-SRI-VOiCES-src-sp0307-ch127535-sg0042.wav")

.. GENERATED FROM PYTHON SOURCE LINES 105-112

Loading data
------------

We will use the speech data from the
`VOiCES dataset <https://iqtlabs.github.io/voices/>`__,
which is licensed under Creative Commons BY 4.0.

.. GENERATED FROM PYTHON SOURCE LINES 112-116

.. code-block:: default

    IPython.display.Audio(SPEECH_FILE)
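To run the model, we also need the pipeline bundle that the rest of this
tutorial relies on.
:py:data:`torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H` packages the
pre-trained weights together with the expected sample rate and the class
labels. A minimal sketch of that setup (the ``print`` calls are only for
inspection):

.. code-block:: default

    # Bundle of pre-trained weights plus associated metadata.
    bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H

    print("Sample Rate:", bundle.sample_rate)
    print("Labels:", bundle.get_labels())

    # Instantiate the model and move it to the target device.
    model = bundle.get_model().to(device)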
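As an optional sanity check before loading the waveform,
:py:func:`torchaudio.info` reports the metadata of the file. A minimal
sketch:

.. code-block:: default

    # Inspect the file header: sample rate, number of frames and channels, etc.
    metadata = torchaudio.info(SPEECH_FILE)
    print(metadata)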
.. GENERATED FROM PYTHON SOURCE LINES 117-128

To load data, we use :py:func:`torchaudio.load`.

If the sampling rate is different from what the pipeline expects, then we
can use :py:func:`torchaudio.functional.resample` for resampling.

.. note::

   - :py:func:`torchaudio.functional.resample` works on CUDA tensors as well.
   - When performing resampling multiple times on the same set of sample
     rates, using :py:class:`torchaudio.transforms.Resample` might improve
     the performance, as shown in the sketch after the next cell.

.. GENERATED FROM PYTHON SOURCE LINES 128-136

.. code-block:: default

    waveform, sample_rate = torchaudio.load(SPEECH_FILE)
    waveform = waveform.to(device)

    if sample_rate != bundle.sample_rate:
        waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)
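As a minimal sketch of the transform-based variant mentioned in the note
above: when many clips share the same pair of sample rates, the resampling
kernel can be built once and reused.

.. code-block:: default

    # Illustrative only: build the resampling kernel once, then apply it
    # to every clip recorded at the same original rate.
    if sample_rate != bundle.sample_rate:
        resampler = torchaudio.transforms.Resample(
            orig_freq=sample_rate, new_freq=bundle.sample_rate
        ).to(device)
        waveform = resampler(waveform)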
.. GENERATED FROM PYTHON SOURCE LINES 137-147

Extracting acoustic features
----------------------------

The next step is to extract acoustic features from the audio.

.. note::
   Wav2Vec2 models fine-tuned for the ASR task can perform feature
   extraction and classification in one step, but for the sake of the
   tutorial, we also show how to perform feature extraction here.

.. GENERATED FROM PYTHON SOURCE LINES 147-152

.. code-block:: default

    with torch.inference_mode():
        features, _ = model.extract_features(waveform)

.. GENERATED FROM PYTHON SOURCE LINES 153-156

The returned ``features`` is a list of tensors. Each tensor is the output
of a transformer layer (twelve layers for this model, as visualized below).

.. GENERATED FROM PYTHON SOURCE LINES 156-166

.. code-block:: default

    fig, ax = plt.subplots(len(features), 1, figsize=(16, 4.3 * len(features)))
    for i, feats in enumerate(features):
        ax[i].imshow(feats[0].cpu(), interpolation="nearest")
        ax[i].set_title(f"Feature from transformer layer {i+1}")
        ax[i].set_xlabel("Feature dimension")
        ax[i].set_ylabel("Frame (time-axis)")
    fig.tight_layout()

.. image-sg:: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_001.png
   :alt: Feature from transformer layer 1, Feature from transformer layer 2, Feature from transformer layer 3, Feature from transformer layer 4, Feature from transformer layer 5, Feature from transformer layer 6, Feature from transformer layer 7, Feature from transformer layer 8, Feature from transformer layer 9, Feature from transformer layer 10, Feature from transformer layer 11, Feature from transformer layer 12
   :srcset: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 167-176

Feature classification
----------------------

Once the acoustic features are extracted, the next step is to classify
them into a set of categories.

The Wav2Vec2 model provides a method to perform feature extraction and
classification in one step.

.. GENERATED FROM PYTHON SOURCE LINES 176-181

.. code-block:: default

    with torch.inference_mode():
        emission, _ = model(waveform)

.. GENERATED FROM PYTHON SOURCE LINES 182-187

The output is in the form of logits, not probabilities.

Let’s visualize this.

.. GENERATED FROM PYTHON SOURCE LINES 187-196

.. code-block:: default

    plt.imshow(emission[0].cpu().T, interpolation="nearest")
    plt.title("Classification result")
    plt.xlabel("Frame (time-axis)")
    plt.ylabel("Class")
    plt.tight_layout()
    print("Class labels:", bundle.get_labels())

.. image-sg:: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_002.png
   :alt: Classification result
   :srcset: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_002.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Class labels: ('-', '|', 'E', 'T', 'A', 'O', 'N', 'I', 'H', 'S', 'R', 'D', 'L', 'U', 'M', 'W', 'C', 'F', 'G', 'Y', 'P', 'B', 'V', 'K', "'", 'X', 'J', 'Q', 'Z')

.. GENERATED FROM PYTHON SOURCE LINES 197-200

We can see that there are strong indications for certain labels across
the timeline.

.. GENERATED FROM PYTHON SOURCE LINES 203-231

Generating transcripts
----------------------

From the sequence of label probabilities, now we want to generate
transcripts. The process of generating hypotheses is often called
“decoding”.

Decoding is more elaborate than simple classification because decoding
at a certain time step can be affected by surrounding observations.

For example, take words like ``night`` and ``knight``. Even if their
prior probability distributions are different (in typical conversations,
``night`` occurs far more often than ``knight``), to accurately generate
transcripts with ``knight``, such as ``a knight with a sword``, the
decoding process has to postpone the final decision until it sees enough
context.

Many decoding techniques have been proposed, and they may require
external resources, such as a word dictionary and a language model.

In this tutorial, for the sake of simplicity, we will perform greedy
decoding, which does not depend on such external components, and simply
picks the best hypothesis at each time step. Therefore, the context
information is not used, and only one transcript can be generated.

We start by defining the greedy decoding algorithm.

.. GENERATED FROM PYTHON SOURCE LINES 231-253

.. code-block:: default

    class GreedyCTCDecoder(torch.nn.Module):
        def __init__(self, labels, blank=0):
            super().__init__()
            self.labels = labels
            self.blank = blank

        def forward(self, emission: torch.Tensor) -> str:
            """Given a sequence emission over labels, get the best path string

            Args:
                emission (Tensor): Logit tensor. Shape `[num_seq, num_label]`.

            Returns:
                str: The resulting transcript
            """
            # Pick the most likely label at every frame.
            indices = torch.argmax(emission, dim=-1)  # [num_seq,]
            # Collapse consecutive repeats, then drop the blank tokens.
            indices = torch.unique_consecutive(indices, dim=-1)
            indices = [i for i in indices if i != self.blank]
            return "".join([self.labels[i] for i in indices])

.. GENERATED FROM PYTHON SOURCE LINES 254-256

Now create the decoder object and decode the transcript.

.. GENERATED FROM PYTHON SOURCE LINES 256-261

.. code-block:: default

    decoder = GreedyCTCDecoder(labels=bundle.get_labels())
    transcript = decoder(emission[0])

.. GENERATED FROM PYTHON SOURCE LINES 262-264

Let’s check the result and listen again to the audio.

.. GENERATED FROM PYTHON SOURCE LINES 264-269

.. code-block:: default

    print(transcript)
    IPython.display.Audio(SPEECH_FILE)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    I|HAD|THAT|CURIOSITY|BESIDE|ME|AT|THIS|MOMENT|
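The label set uses ``|`` as the word-boundary token, so a conventional
sentence can be recovered by splitting on it, for example:

.. code-block:: default

    # Replace the word-boundary token with spaces for readability.
    print(" ".join(transcript.strip("|").split("|")))
    # -> I HAD THAT CURIOSITY BESIDE ME AT THIS MOMENT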
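To see exactly what the two collapsing steps in the decoder do, here is a
toy walkthrough on a hand-made best path (the label set and indices below
are made up for illustration; they are not from the pipeline):

.. code-block:: default

    # Hypothetical label set: index 0 is the CTC blank token.
    toy_labels = ("-", "H", "E", "L", "O")
    # Best-path indices for nine frames: H H E - L L - L O
    toy_indices = torch.tensor([1, 1, 2, 0, 3, 3, 0, 3, 4])

    collapsed = torch.unique_consecutive(toy_indices)  # H E - L - L O
    decoded = "".join(toy_labels[i] for i in collapsed if i != 0)
    print(decoded)  # HELLO

Note how the blank between the two runs of ``L`` is what keeps the doubled
letter from being merged away.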
.. GENERATED FROM PYTHON SOURCE LINES 270-276

The ASR model is fine-tuned using a loss function called Connectionist
Temporal Classification (CTC). The detail of CTC loss is explained
`here <https://distill.pub/2017/ctc/>`__. In CTC, a blank token (ϵ) is a
special token which represents no emission. During decoding, consecutive
repetitions of a label are first collapsed into one, and the blank tokens
are then simply dropped; a blank standing between two identical labels is
what allows genuinely repeated characters to survive the collapsing.

.. GENERATED FROM PYTHON SOURCE LINES 279-291

Conclusion
----------

In this tutorial, we looked at how to use
:py:class:`~torchaudio.pipelines.Wav2Vec2ASRBundle` to
perform acoustic feature extraction and speech recognition.
Constructing a model and getting the emission is as short as two lines.

::

    model = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H.get_model()
    emission = model(waveforms, ...)

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 7.455 seconds)

.. _sphx_glr_download_tutorials_speech_recognition_pipeline_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: speech_recognition_pipeline_tutorial.py <speech_recognition_pipeline_tutorial.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: speech_recognition_pipeline_tutorial.ipynb <speech_recognition_pipeline_tutorial.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_