
RNNTBundle

class torchaudio.pipelines.RNNTBundle[source]

Dataclass that bundles components for performing automatic speech recognition (ASR, speech-to-text) inference with an RNN-T model.

More specifically, the class provides methods that produce three components: a featurization pipeline, a decoder that wraps the specified RNN-T model, and an output token post-processor. Together, these constitute a complete end-to-end ASR inference pipeline that produces a text sequence from a raw waveform.

It supports both non-streaming (full-context) and streaming inference.

Users should not instantiate objects of this class directly; rather, they should use the pre-trained instances that the module provides, e.g. torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH.

Example
>>> import torchaudio
>>> from torchaudio.pipelines import EMFORMER_RNNT_BASE_LIBRISPEECH
>>> import torch
>>>
>>> # Non-streaming inference.
>>> # Build feature extractor, decoder with RNN-T model, and token processor.
>>> feature_extractor = EMFORMER_RNNT_BASE_LIBRISPEECH.get_feature_extractor()
100%|███████████████████████████████| 3.81k/3.81k [00:00<00:00, 4.22MB/s]
>>> decoder = EMFORMER_RNNT_BASE_LIBRISPEECH.get_decoder()
Downloading: "https://download.pytorch.org/torchaudio/models/emformer_rnnt_base_librispeech.pt"
100%|███████████████████████████████| 293M/293M [00:07<00:00, 42.1MB/s]
>>> token_processor = EMFORMER_RNNT_BASE_LIBRISPEECH.get_token_processor()
100%|███████████████████████████████| 295k/295k [00:00<00:00, 25.4MB/s]
>>>
>>> # Instantiate LibriSpeech dataset; retrieve waveform for first sample.
>>> dataset = torchaudio.datasets.LIBRISPEECH("/home/librispeech", url="test-clean")
>>> waveform = next(iter(dataset))[0].squeeze()
>>>
>>> with torch.no_grad():
>>>     # Produce mel-scale spectrogram features.
>>>     features, length = feature_extractor(waveform)
>>>
>>>     # Generate top-10 hypotheses.
>>>     hypotheses = decoder(features, length, 10)
>>>
>>> # For top hypothesis, convert predicted tokens to text.
>>> text = token_processor(hypotheses[0][0])
>>> print(text)
he hoped there would be stew for dinner turnips and carrots and bruised potatoes and fat mutton pieces to [...]
>>>
>>>
>>> # Streaming inference.
>>> hop_length = EMFORMER_RNNT_BASE_LIBRISPEECH.hop_length
>>> num_samples_segment = EMFORMER_RNNT_BASE_LIBRISPEECH.segment_length * hop_length
>>> num_samples_segment_right_context = (
>>>     num_samples_segment + EMFORMER_RNNT_BASE_LIBRISPEECH.right_context_length * hop_length
>>> )
>>>
>>> # Build streaming inference feature extractor.
>>> streaming_feature_extractor = EMFORMER_RNNT_BASE_LIBRISPEECH.get_streaming_feature_extractor()
>>>
>>> # Process same waveform as before, this time sequentially across overlapping segments
>>> # to simulate streaming inference. Note the usage of ``streaming_feature_extractor`` and ``decoder.infer``.
>>> state, hypothesis = None, None
>>> for idx in range(0, len(waveform), num_samples_segment):
>>>     segment = waveform[idx: idx + num_samples_segment_right_context]
>>>     segment = torch.nn.functional.pad(segment, (0, num_samples_segment_right_context - len(segment)))
>>>     with torch.no_grad():
>>>         features, length = streaming_feature_extractor(segment)
>>>         hypotheses, state = decoder.infer(features, length, 10, state=state, hypothesis=hypothesis)
>>>     hypothesis = hypotheses[0]
>>>     transcript = token_processor(hypothesis[0])
>>>     if transcript:
>>>         print(transcript, end=" ", flush=True)
he hoped there would be stew for dinner turn ips and car rots and bru 'd oes and fat mut ton pieces to [...]
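The streaming loop above can be packaged for reuse. The following is a minimal sketch, assuming the same bundle; ``transcribe_streaming`` is a hypothetical helper name, not part of the torchaudio API.

import torch
from torchaudio.pipelines import EMFORMER_RNNT_BASE_LIBRISPEECH as bundle

def transcribe_streaming(waveform, beam_width=10):
    """Hypothetical helper: yield the incremental transcript per segment."""
    streaming_feature_extractor = bundle.get_streaming_feature_extractor()
    decoder = bundle.get_decoder()
    token_processor = bundle.get_token_processor()

    # Samples advanced per step, plus the right-context lookahead fed per step.
    num_samples_segment = bundle.segment_length * bundle.hop_length
    num_samples_step = num_samples_segment + bundle.right_context_length * bundle.hop_length

    state, hypothesis = None, None
    for idx in range(0, len(waveform), num_samples_segment):
        segment = waveform[idx : idx + num_samples_step]
        # Zero-pad the final (short) segment to the expected length.
        segment = torch.nn.functional.pad(segment, (0, num_samples_step - len(segment)))
        with torch.no_grad():
            features, length = streaming_feature_extractor(segment)
            hypotheses, state = decoder.infer(
                features, length, beam_width, state=state, hypothesis=hypothesis
            )
        hypothesis = hypotheses[0]
        yield token_processor(hypothesis[0])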
Tutorials using RNNTBundle:

Online ASR with Emformer RNN-T

Device ASR with Emformer RNN-T

Properties

hop_length

property RNNTBundle.hop_length: int

Number of samples between successive frames in the input expected by the model.

Type: int

n_fft

property RNNTBundle.n_fft: int

Size of the FFT window to use.

Type: int

n_mels

property RNNTBundle.n_mels: int

Number of mel spectrogram features to extract from input waveforms.

Type: int
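For intuition, ``n_fft``, ``n_mels``, ``hop_length``, and ``sample_rate`` roughly describe a mel-scale spectrogram transform. The sketch below builds a torchaudio.transforms.MelSpectrogram from these properties purely for illustration; the bundle's actual feature extractor (from get_feature_extractor()) may add further steps such as log scaling and normalization, so use the provided extractor for real inference.

import torchaudio
from torchaudio.pipelines import EMFORMER_RNNT_BASE_LIBRISPEECH as bundle

# Illustration only: an approximation of the featurization implied by the
# bundle's properties, not the bundle's actual feature extractor.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=bundle.sample_rate,
    n_fft=bundle.n_fft,
    hop_length=bundle.hop_length,
    n_mels=bundle.n_mels,
)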

right_context_length

property RNNTBundle.right_context_length: int

Number of frames in the right-context block of the input expected by the model.

Type: int

sample_rate

property RNNTBundle.sample_rate: int

Sample rate (in cycles per second) of input waveforms.

Type: int
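Input waveforms must match this rate. A minimal sketch of resampling beforehand, assuming a hypothetical input file:

import torchaudio
from torchaudio.pipelines import EMFORMER_RNNT_BASE_LIBRISPEECH as bundle

waveform, sample_rate = torchaudio.load("speech.wav")  # hypothetical path
if sample_rate != bundle.sample_rate:
    # Resample to the rate the bundle's model expects.
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)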

segment_length

property RNNTBundle.segment_length: int

Number of frames per segment in the input expected by the model.

Type: int
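Together with ``hop_length`` and ``right_context_length``, this property determines the chunking used in the streaming example above. A short sketch of the arithmetic:

from torchaudio.pipelines import EMFORMER_RNNT_BASE_LIBRISPEECH as bundle

# Samples advanced per streaming step, and samples fed per step once the
# right-context lookahead is included (as in the streaming example above).
num_samples_segment = bundle.segment_length * bundle.hop_length
num_samples_step = num_samples_segment + bundle.right_context_length * bundle.hop_length

print(f"segment: {num_samples_segment} samples "
      f"({num_samples_segment / bundle.sample_rate:.3f} s)")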

Methods

get_decoder

RNNTBundle.get_decoder() → RNNTBeamSearch[source]

Constructs an RNN-T decoder.

Returns: RNNTBeamSearch

get_feature_extractor

RNNTBundle.get_feature_extractor() → FeatureExtractor[source]

Constructs a feature extractor for non-streaming (full-context) ASR.

Returns: FeatureExtractor

get_streaming_feature_extractor

RNNTBundle.get_streaming_feature_extractor() → FeatureExtractor[source]

Constructs a feature extractor for streaming (simultaneous) ASR.

Returns: FeatureExtractor

get_token_processor

RNNTBundle.get_token_processor() → TokenProcessor[source]

Constructs a token processor.

Returns: TokenProcessor
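Taken together, the four methods compose into a simple full-context transcription routine. A minimal sketch, assuming the EMFORMER_RNNT_BASE_LIBRISPEECH bundle; ``transcribe`` and the file path are hypothetical, not part of torchaudio:

import torch
import torchaudio
from torchaudio.pipelines import EMFORMER_RNNT_BASE_LIBRISPEECH as bundle

def transcribe(path: str, beam_width: int = 10) -> str:
    """Hypothetical helper: transcribe one audio file with full context."""
    feature_extractor = bundle.get_feature_extractor()
    decoder = bundle.get_decoder()
    token_processor = bundle.get_token_processor()

    waveform, sample_rate = torchaudio.load(path)
    waveform = waveform.squeeze()  # the pipeline expects a 1-D waveform
    if sample_rate != bundle.sample_rate:
        waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

    with torch.no_grad():
        features, length = feature_extractor(waveform)
        hypotheses = decoder(features, length, beam_width)
    return token_processor(hypotheses[0][0])  # best hypothesis -> text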
