WAV2VEC2_ASR_LARGE_LV60K_100H

torchaudio.pipelines.WAV2VEC2_ASR_LARGE_LV60K_100H

Wav2vec 2.0 model (“large-lv60k” architecture with an extra linear module), pre-trained on 60,000 hours of unlabeled audio from the Libri-Light dataset [Kahn et al., 2020], and fine-tuned for ASR on 100 hours of transcribed audio from the LibriSpeech dataset [Panayotov et al., 2015] (“train-clean-100” subset).

Originally published by the authors of wav2vec 2.0 [Baevski et al., 2020] under the MIT License and redistributed with the same license. [License, Source]

Please refer to torchaudio.pipelines.Wav2Vec2ASRBundle for usage; a sketch of the typical workflow is shown below.
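
The sketch below illustrates the usual Wav2Vec2ASRBundle workflow with this bundle: instantiate the model, resample the input to the bundle's sample rate, run inference, and greedily decode the CTC emission. The file path "speech.wav" is a placeholder, and the inline greedy decoder is a simplified stand-in for torchaudio's decoding utilities, not part of the bundle's API.

```python
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_ASR_LARGE_LV60K_100H

model = bundle.get_model()      # Wav2Vec2Model with the ASR (character) output head
labels = bundle.get_labels()    # character vocabulary; index 0 is the CTC blank "-"

# "speech.wav" is a placeholder path for a local audio file.
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != bundle.sample_rate:
    # The model expects audio at the bundle's sample rate (16 kHz).
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

with torch.inference_mode():
    # emission: frame-level logits over the character labels, shape (batch, time, num_labels)
    emission, _ = model(waveform)

# Greedy (argmax) CTC decoding: collapse repeated frames, drop the blank token,
# and map the word delimiter "|" to a space.
indices = torch.unique_consecutive(emission[0].argmax(dim=-1)).tolist()
blank = 0
transcript = "".join(labels[i] for i in indices if i != blank).replace("|", " ")
print(transcript)
```

For higher-quality transcripts, the greedy decoder can be replaced with a beam-search decoder (for example, torchaudio's CTC decoder support) without changing how the bundle itself is loaded.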
