
Online ASR with Emformer RNN-T

Author: Jeff Hwang, Moto Hira

This tutorial shows how to use Emformer RNN-T and streaming API to perform online speech recognition.


This tutorial requires Streaming API, FFmpeg libraries (>=4.1, <5), and SentencePiece.

The Streaming API is available in nightly builds. Please refer to https://pytorch.org/get-started/locally/ for instructions.

There are multiple ways to install FFmpeg libraries. If you are using the Anaconda Python distribution, conda install 'ffmpeg<5' will install the required FFmpeg libraries.

You can install SentencePiece by running pip install sentencepiece.

1. Overview

Performing online speech recognition consists of the following steps:

  1. Build the inference pipeline. Emformer RNN-T is composed of three components: feature extractor, decoder, and token processor.

  2. Format the waveform into chunks of expected sizes.

  3. Pass data through the pipeline.

2. Preparation

import IPython
import torch
import torchaudio

try:
    from torchaudio.io import StreamReader
except ModuleNotFoundError:
    try:
        import google.colab

        print(
            """
            To enable running this notebook in Google Colab, install nightly
            torch and torchaudio builds and the requisite third party libraries by
            adding the following code block to the top of the notebook before running it:

            !pip3 uninstall -y torch torchvision torchaudio
            !pip3 install --pre torch torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
            !pip3 install sentencepiece
            !add-apt-repository -y ppa:savoury1/ffmpeg4
            !apt-get -qq install -y ffmpeg
            """
        )
    except ModuleNotFoundError:
        pass
    raise




3. Construct the pipeline

Pre-trained model weights and related pipeline components are bundled as torchaudio.pipelines.RNNTBundle.

We use torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH, which is an Emformer RNN-T model trained on the LibriSpeech dataset.

bundle = torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH

feature_extractor = bundle.get_streaming_feature_extractor()
decoder = bundle.get_decoder()
token_processor = bundle.get_token_processor()


(On first use, the bundle downloads its pretrained components; the model weights alone are about 293 MB.)

Streaming inference works on input data with overlap. Emformer RNN-T model treats the newest portion of the input data as the “right context” — a preview of future context. In each inference call, the model expects the main segment to start from this right context from the previous inference call. The following figure illustrates this.


The size of the main segment and the right context, along with the expected sample rate, can be retrieved from the bundle.

sample_rate = bundle.sample_rate
segment_length = bundle.segment_length * bundle.hop_length
context_length = bundle.right_context_length * bundle.hop_length

print(f"Sample rate: {sample_rate}")
print(f"Main segment: {segment_length} frames ({segment_length / sample_rate} seconds)")
print(f"Right context: {context_length} frames ({context_length / sample_rate} seconds)")


Sample rate: 16000
Main segment: 2560 frames (0.16 seconds)
Right context: 640 frames (0.04 seconds)
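
The figure referenced above is not reproduced here, so the following short sketch (purely illustrative, not part of the original tutorial) spells out which input samples each inference call sees, given the sizes printed above and the caching strategy implemented later in this tutorial:

# Illustrative only: each inference call receives `context_length` samples
# carried over from the end of the previous chunk, followed by a fresh main
# segment of `segment_length` samples. The first call has no previous chunk,
# so its context is zero padding.
for i in range(3):
    start = i * segment_length
    context = "zero padding" if i == 0 else f"samples [{start - context_length}, {start})"
    print(f"call {i}: context = {context}, main segment = samples [{start}, {start + segment_length})")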

4. Configure the audio stream

Next, we configure the input audio stream using StreamReader().

For details on this API, please refer to the Media Stream API tutorial.

The following audio file was originally published by the LibriVox project, and it is in the public domain.


It was re-uploaded for the sake of the tutorial.

src = "https://download.pytorch.org/torchaudio/tutorial-assets/greatpiratestories_00_various.mp3"

streamer = StreamReader(src)
streamer.add_basic_audio_stream(frames_per_chunk=segment_length, sample_rate=bundle.sample_rate)

print(streamer.get_src_stream_info(0))
print(streamer.get_out_stream_info(0))

StreamReaderSourceAudioStream(media_type='audio', codec='mp3', codec_long_name='MP3 (MPEG audio layer 3)', format='fltp', bit_rate=128000, num_frames=0, bits_per_sample=0, metadata={}, sample_rate=44100.0, num_channels=2)
StreamReaderOutputStream(source_index=0, filter_description='aresample=16000,aformat=sample_fmts=fltp')

As previously explained, Emformer RNN-T model expects input data with overlaps; however, Streamer iterates the source media without overlap, so we make a helper structure that caches a part of input data from Streamer as right context and then appends it to the next input data from Streamer.

The following figure illustrates this.

class ContextCacher:
    """Cache the end of input data and prepend the next input data with it.

    Args:
        segment_length (int): The size of main segment.
            If the incoming segment is shorter, then the segment is padded.
        context_length (int): The size of the context, cached and appended.
    """

    def __init__(self, segment_length: int, context_length: int):
        self.segment_length = segment_length
        self.context_length = context_length
        self.context = torch.zeros([context_length])

    def __call__(self, chunk: torch.Tensor):
        if chunk.size(0) < self.segment_length:
            chunk = torch.nn.functional.pad(chunk, (0, self.segment_length - chunk.size(0)))
        chunk_with_context = torch.cat((self.context, chunk))
        self.context = chunk[-self.context_length :]
        return chunk_with_context
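
As a quick illustration (not part of the original tutorial), the snippet below exercises ContextCacher on small dummy tensors, using a segment length of 8 and a context length of 2; the second chunk is deliberately short to show the padding behavior:

# Each call returns context_length + segment_length samples: the cached tail
# of the previous chunk followed by the new (possibly padded) chunk.
demo_cacher = ContextCacher(segment_length=8, context_length=2)

first = torch.arange(1.0, 9.0)    # 8 samples
second = torch.arange(9.0, 13.0)  # 4 samples; padded with zeros to length 8

print(demo_cacher(first))   # starts with [0., 0.] because there is no previous chunk yet
print(demo_cacher(second))  # starts with [7., 8.], the cached tail of the first chunk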

5. Run stream inference

Finally, we run the recognition.

First, we initialize the stream iterator, the context cacher, and the state and hypothesis that the decoder uses to carry over the decoding state between inference calls.

cacher = ContextCacher(segment_length, context_length)

state, hypothesis = None, None

Next, we run the inference.

For the sake of better display, we create a helper function that processes the source stream for a given number of iterations, and we call it repeatedly.

stream_iterator = streamer.stream()

def run_inference(num_iter=200):
    global state, hypothesis
    chunks = []
    for i, (chunk,) in enumerate(stream_iterator, start=1):
        segment = cacher(chunk[:, 0])
        features, length = feature_extractor(segment)
        hypos, state = decoder.infer(features, length, 10, state=state, hypothesis=hypothesis)
        hypothesis = hypos[0]
        transcript = token_processor(hypothesis[0], lstrip=False)
        print(transcript, end="", flush=True)

        chunks.append(chunk)
        if i == num_iter:
            break

    return IPython.display.Audio(torch.cat(chunks).T.numpy(), rate=bundle.sample_rate)
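
The transcript shown below is the result of invoking the helper; in a notebook, each call processes the next num_iter chunks of the stream and returns an audio widget for the portion just transcribed.

run_inference()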


forward great pirate's this is aver's recording all thects recordings are in the public dum for more information or please visit liberg recording by james christopher great pirite stories by various edited by josey embodies the romance of theed expression it is a sad but inevable comment on our civilization that so far as the sea is concerned