WAV2VEC2_ASR_LARGE_10M
- torchaudio.pipelines.WAV2VEC2_ASR_LARGE_10M
Wav2vec 2.0 model (“large” architecture with an extra linear module), pre-trained on 960 hours of unlabeled audio from the LibriSpeech dataset [Panayotov et al., 2015] (the combination of “train-clean-100”, “train-clean-360”, and “train-other-500”), and fine-tuned for ASR on 10 minutes of transcribed audio from the Libri-Light dataset [Kahn et al., 2020] (“train-10min” subset).
Originally published by the authors of wav2vec 2.0 [Baevski et al., 2020] under the MIT License and redistributed with the same license. [License, Source]
Please refer to torchaudio.pipelines.Wav2Vec2ASRBundle for usage.
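
As a minimal sketch of how this bundle can be used, the example below loads the model, resamples an input file to the bundle's sample rate, and applies a simple greedy (argmax) CTC decoder. The audio path is a placeholder, and the inline greedy decoder is an illustrative assumption, not the library's only decoding option.

```python
import torch
import torchaudio

# Load the bundle; pretrained weights are downloaded on first use.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_LARGE_10M
model = bundle.get_model()
model.eval()

# Load audio and resample to the rate the model expects.
# "speech.wav" is a placeholder path.
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

# Run inference: the model emits per-frame logits over the label set.
with torch.inference_mode():
    emission, _ = model(waveform)

# Greedy CTC decoding: take the best label per frame, collapse repeats,
# and drop the blank token ("-" in this bundle's label set).
labels = bundle.get_labels()
indices = torch.unique_consecutive(emission[0].argmax(dim=-1))
transcript = "".join(labels[i] for i in indices if labels[i] != "-")
print(transcript.replace("|", " "))  # "|" is the word delimiter
```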