.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "tutorials/speech_recognition_pipeline_tutorial.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_tutorials_speech_recognition_pipeline_tutorial.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_tutorials_speech_recognition_pipeline_tutorial.py:


Speech Recognition with Wav2Vec2
================================

**Author**: Moto Hira

This tutorial shows how to perform speech recognition using
pre-trained models from wav2vec 2.0
[`paper <https://arxiv.org/abs/2006.11477>`__].

.. GENERATED FROM PYTHON SOURCE LINES 15-31

Overview
--------

The process of speech recognition looks like the following.

1. Extract the acoustic features from the audio waveform
2. Estimate the class of the acoustic features frame-by-frame
3. Generate a hypothesis from the sequence of class probabilities

Torchaudio provides easy access to pre-trained weights and
associated information, such as the expected sample rate and class
labels. They are bundled together and available in the
:py:mod:`torchaudio.pipelines` module.

.. GENERATED FROM PYTHON SOURCE LINES 34-39

Preparation
-----------

First we import the necessary packages, and fetch the data that we work on.

.. GENERATED FROM PYTHON SOURCE LINES 39-69

.. code-block:: default

    # %matplotlib inline

    import os

    import IPython
    import matplotlib
    import matplotlib.pyplot as plt
    import requests
    import torch
    import torchaudio

    matplotlib.rcParams["figure.figsize"] = [16.0, 4.8]

    torch.random.manual_seed(0)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    print(torch.__version__)
    print(torchaudio.__version__)
    print(device)

    SPEECH_URL = "https://pytorch-tutorial-assets.s3.amazonaws.com/VOiCES_devkit/source-16k/train/sp0307/Lab41-SRI-VOiCES-src-sp0307-ch127535-sg0042.wav"  # noqa: E501
    SPEECH_FILE = "_assets/speech.wav"

    if not os.path.exists(SPEECH_FILE):
        os.makedirs("_assets", exist_ok=True)
        with open(SPEECH_FILE, "wb") as file:
            file.write(requests.get(SPEECH_URL).content)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    1.12.0
    0.12.0
    cpu

.. GENERATED FROM PYTHON SOURCE LINES 70-97

Creating a pipeline
-------------------

First, we will create a Wav2Vec2 model that performs the feature
extraction and the classification.

There are two types of Wav2Vec2 pre-trained weights available in
torchaudio: the ones fine-tuned for the ASR task, and the ones not
fine-tuned.

Wav2Vec2 (and HuBERT) models are trained in a self-supervised manner.
They are first trained with audio only for representation learning,
then fine-tuned for a specific task with additional labels.

The pre-trained weights without fine-tuning can be fine-tuned for
other downstream tasks as well, but this tutorial does not cover that.

We will use :py:data:`torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H` here.

There are multiple pre-trained models available in
:py:mod:`torchaudio.pipelines`. Please check the documentation for
the details of how they are trained.

The bundle object provides the interface to instantiate the model and
other information. The sampling rate and the class labels are found
as follows.

.. GENERATED FROM PYTHON SOURCE LINES 97-105

.. code-block:: default

    bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H

    print("Sample Rate:", bundle.sample_rate)

    print("Labels:", bundle.get_labels())

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Sample Rate: 16000
    Labels: ('-', '|', 'E', 'T', 'A', 'O', 'N', 'I', 'H', 'S', 'R', 'D', 'L', 'U', 'M', 'W', 'C', 'F', 'G', 'Y', 'P', 'B', 'V', 'K', "'", 'X', 'J', 'Q', 'Z')
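As a side note: in this label set, ``'-'`` (index 0) is the CTC blank token and
``'|'`` marks word boundaries, which will matter when we decode later. Below is
a minimal sketch of a label-to-index lookup; the ``label_to_index`` name is our
own, purely for illustration.

.. code-block:: default

    labels = bundle.get_labels()
    # map each character label to its index in the model's output
    label_to_index = {label: i for i, label in enumerate(labels)}
    print(label_to_index["-"])  # 0 -- the CTC blank token
    print(label_to_index["|"])  # 1 -- the word-boundary token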
.. GENERATED FROM PYTHON SOURCE LINES 106-109

The model can be constructed as follows. This process will
automatically fetch the pre-trained weights and load them into the
model.

.. GENERATED FROM PYTHON SOURCE LINES 109-115

.. code-block:: default

    model = bundle.get_model().to(device)

    print(model.__class__)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Downloading: "https://download.pytorch.org/torchaudio/models/wav2vec2_fairseq_base_ls960_asr_ls960.pth" to /root/.cache/torch/hub/checkpoints/wav2vec2_fairseq_base_ls960_asr_ls960.pth
    <class 'torchaudio.models.wav2vec2.model.Wav2Vec2Model'>

.. GENERATED FROM PYTHON SOURCE LINES 116-123

Loading data
------------

We will use the speech data from the
`VOiCES dataset <https://iqtlabs.github.io/voices/>`__,
which is licensed under Creative Commons BY 4.0.

.. GENERATED FROM PYTHON SOURCE LINES 123-127

.. code-block:: default

    IPython.display.Audio(SPEECH_FILE)
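Before loading the full waveform, it can be handy to check the file's
metadata. This is a small side sketch, not part of the original pipeline;
:py:func:`torchaudio.info` returns an object with fields such as
``sample_rate``, ``num_channels``, and ``num_frames``.

.. code-block:: default

    metadata = torchaudio.info(SPEECH_FILE)
    # inspect sample rate / channels / length without loading the audio
    print(metadata.sample_rate, metadata.num_channels, metadata.num_frames)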
.. GENERATED FROM PYTHON SOURCE LINES 128-139

To load data, we use :py:func:`torchaudio.load`.

If the sampling rate is different from what the pipeline expects, then
we can use :py:func:`torchaudio.functional.resample` for resampling.

.. note::

   - :py:func:`torchaudio.functional.resample` works on CUDA tensors as well.
   - When performing resampling multiple times on the same set of sample
     rates, using :py:class:`torchaudio.transforms.Resample` might improve
     the performance.

.. GENERATED FROM PYTHON SOURCE LINES 139-147

.. code-block:: default

    waveform, sample_rate = torchaudio.load(SPEECH_FILE)
    waveform = waveform.to(device)

    if sample_rate != bundle.sample_rate:
        waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

.. GENERATED FROM PYTHON SOURCE LINES 148-158

Extracting acoustic features
----------------------------

The next step is to extract acoustic features from the audio.

.. note::
   Wav2Vec2 models fine-tuned for the ASR task can perform feature
   extraction and classification in one step, but for the sake of the
   tutorial, we also show how to perform feature extraction here.

.. GENERATED FROM PYTHON SOURCE LINES 158-163

.. code-block:: default

    with torch.inference_mode():
        features, _ = model.extract_features(waveform)

.. GENERATED FROM PYTHON SOURCE LINES 164-167

The returned features are a list of tensors. Each tensor is the output
of a transformer layer.

.. GENERATED FROM PYTHON SOURCE LINES 167-178

.. code-block:: default

    fig, ax = plt.subplots(len(features), 1, figsize=(16, 4.3 * len(features)))
    for i, feats in enumerate(features):
        ax[i].imshow(feats[0].cpu())
        ax[i].set_title(f"Feature from transformer layer {i+1}")
        ax[i].set_xlabel("Feature dimension")
        ax[i].set_ylabel("Frame (time-axis)")
    plt.tight_layout()
    plt.show()

.. image-sg:: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_001.png
   :alt: Feature from transformer layer 1, Feature from transformer layer 2, Feature from transformer layer 3, Feature from transformer layer 4, Feature from transformer layer 5, Feature from transformer layer 6, Feature from transformer layer 7, Feature from transformer layer 8, Feature from transformer layer 9, Feature from transformer layer 10, Feature from transformer layer 11, Feature from transformer layer 12
   :srcset: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 179-188

Feature classification
----------------------

Once the acoustic features are extracted, the next step is to classify
them into a set of categories.

The Wav2Vec2 model provides a method to perform the feature extraction
and classification in one step.

.. GENERATED FROM PYTHON SOURCE LINES 188-193

.. code-block:: default

    with torch.inference_mode():
        emission, _ = model(waveform)

.. GENERATED FROM PYTHON SOURCE LINES 194-199

The output is in the form of logits, not probabilities.

Let’s visualize this.

.. GENERATED FROM PYTHON SOURCE LINES 199-208

.. code-block:: default

    plt.imshow(emission[0].cpu().T)
    plt.title("Classification result")
    plt.xlabel("Frame (time-axis)")
    plt.ylabel("Class")
    plt.show()
    print("Class labels:", bundle.get_labels())

.. image-sg:: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_002.png
   :alt: Classification result
   :srcset: /tutorials/images/sphx_glr_speech_recognition_pipeline_tutorial_002.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Class labels: ('-', '|', 'E', 'T', 'A', 'O', 'N', 'I', 'H', 'S', 'R', 'D', 'L', 'U', 'M', 'W', 'C', 'F', 'G', 'Y', 'P', 'B', 'V', 'K', "'", 'X', 'J', 'Q', 'Z')
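Since the emission is raw logits, you can convert it to per-frame
probabilities with a softmax when you need normalized scores. A minimal
sketch (the ``probs`` name is ours):

.. code-block:: default

    # convert logits to per-frame label probabilities
    probs = torch.softmax(emission, dim=-1)
    # sanity check: every frame's probabilities sum to (approximately) 1
    print(probs[0].sum(dim=-1)[:5])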
.. GENERATED FROM PYTHON SOURCE LINES 209-212

We can see that there are strong indications for certain labels across
the timeline.

.. GENERATED FROM PYTHON SOURCE LINES 215-243

Generating transcripts
----------------------

From the sequence of label probabilities, we now want to generate
transcripts. The process of generating hypotheses is often called
“decoding”.

Decoding is more elaborate than simple classification because decoding
at a certain time step can be affected by surrounding observations.

For example, take the words ``night`` and ``knight``. Even if their
prior probability distributions are different (in typical
conversations, ``night`` occurs far more often than ``knight``), to
accurately generate transcripts with ``knight``, such as ``a knight
with a sword``, the decoding process has to postpone the final
decision until it sees enough context.

Many decoding techniques have been proposed, and they require external
resources, such as a word dictionary and a language model.

In this tutorial, for the sake of simplicity, we will perform greedy
decoding, which does not depend on such external components, and
simply picks the best hypothesis at each time step. Therefore, the
context information is not used, and only one transcript can be
generated.

We start by defining the greedy decoding algorithm.

.. GENERATED FROM PYTHON SOURCE LINES 243-265

.. code-block:: default

    class GreedyCTCDecoder(torch.nn.Module):
        def __init__(self, labels, blank=0):
            super().__init__()
            self.labels = labels
            self.blank = blank

        def forward(self, emission: torch.Tensor) -> str:
            """Given a sequence emission over labels, get the best path string

            Args:
                emission (Tensor): Logit tensor. Shape `[num_seq, num_label]`.

            Returns:
                str: The resulting transcript
            """
            indices = torch.argmax(emission, dim=-1)  # [num_seq,]
            indices = torch.unique_consecutive(indices, dim=-1)
            indices = [i for i in indices if i != self.blank]
            return "".join([self.labels[i] for i in indices])

.. GENERATED FROM PYTHON SOURCE LINES 266-268

Now create the decoder object and decode the transcript.

.. GENERATED FROM PYTHON SOURCE LINES 268-273

.. code-block:: default

    decoder = GreedyCTCDecoder(labels=bundle.get_labels())
    transcript = decoder(emission[0])

.. GENERATED FROM PYTHON SOURCE LINES 274-276

Let’s check the result and listen to the audio again.

.. GENERATED FROM PYTHON SOURCE LINES 276-281

.. code-block:: default

    print(transcript)
    IPython.display.Audio(SPEECH_FILE)

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    I|HAD|THAT|CURIOSITY|BESIDE|ME|AT|THIS|MOMENT|
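As seen above, ``|`` serves as the word separator in this label set. A short
post-processing sketch of our own (not part of the pipeline) turns the raw
transcript into plain words:

.. code-block:: default

    # split on the word-boundary token and drop empty pieces
    words = [w for w in transcript.split("|") if w]
    print(" ".join(words))  # I HAD THAT CURIOSITY BESIDE ME AT THIS MOMENT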
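To see the decoder’s collapse rule in isolation before we discuss CTC, here
is a toy sketch; the tiny label set and all ``toy_*`` names are ours, for
illustration only. Note how the blank between the two ``L`` frames keeps the
double letter from being merged away:

.. code-block:: default

    toy_labels = ("-", "H", "E", "L", "O")
    # frame-wise best labels: -, H, H, -, E, L, L, -, L, O
    toy_indices = torch.tensor([0, 1, 1, 0, 2, 3, 3, 0, 3, 4])
    # one-hot "logits" so that argmax recovers toy_indices
    toy_emission = torch.nn.functional.one_hot(toy_indices, num_classes=5).float()
    toy_decoder = GreedyCTCDecoder(labels=toy_labels)
    print(toy_decoder(toy_emission))  # HELLO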
.. GENERATED FROM PYTHON SOURCE LINES 282-288

The ASR model is fine-tuned using a loss function called Connectionist
Temporal Classification (CTC). The detail of CTC loss is explained
`here <https://distill.pub/2017/ctc>`__. In CTC, a blank token (ϵ) is a
special token which represents the absence of a new symbol; it separates
repetitions of the same symbol. In decoding, consecutive duplicate tokens
are collapsed first, and the blanks are then simply removed.

.. GENERATED FROM PYTHON SOURCE LINES 291-303

Conclusion
----------

In this tutorial, we looked at how to use :py:mod:`torchaudio.pipelines` to
perform acoustic feature extraction and speech recognition. Constructing a
model and getting the emission is as short as two lines.

::

    model = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H.get_model()
    emission = model(waveforms, ...)

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 10.757 seconds)


.. _sphx_glr_download_tutorials_speech_recognition_pipeline_tutorial.py:

.. only:: html

  .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: speech_recognition_pipeline_tutorial.py <speech_recognition_pipeline_tutorial.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: speech_recognition_pipeline_tutorial.ipynb <speech_recognition_pipeline_tutorial.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_