WAVLM_LARGE
torchaudio.pipelines.WAVLM_LARGE
WavLM Large model ("large" architecture), pre-trained on 60,000 hours of the Libri-Light dataset [Kahn et al., 2020], 10,000 hours of GigaSpeech [Chen et al., 2021], and 24,000 hours of VoxPopuli [Wang et al., 2021]. Not fine-tuned.
Originally published by the authors of WavLM [Chen et al., 2022] under MIT License and redistributed with the same license. [License, Source]
Please refer to torchaudio.pipelines.Wav2Vec2Bundle for usage instructions.
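As a minimal sketch of typical usage via the Wav2Vec2Bundle interface (the input file speech.wav is a hypothetical example):

```python
import torch
import torchaudio

# Load the pre-trained WavLM Large bundle and instantiate the model.
bundle = torchaudio.pipelines.WAVLM_LARGE
model = bundle.get_model()

# "speech.wav" is a placeholder path used for illustration.
waveform, sample_rate = torchaudio.load("speech.wav")

# Resample if the audio does not match the bundle's expected rate (16 kHz).
if sample_rate != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

# Extract intermediate features from each transformer layer.
with torch.inference_mode():
    features, _ = model.extract_features(waveform)
```

Since this bundle is not fine-tuned, it exposes feature extraction rather than ASR decoding; the returned features (one tensor per transformer layer) are suitable for downstream tasks such as speaker verification or speech recognition fine-tuning.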