HUBERT_XLARGE
torchaudio.pipelines.HUBERT_XLARGE
HuBERT model (“extra large” architecture), pre-trained on 60,000 hours of unlabeled audio from Libri-Light dataset [Kahn et al., 2020], not fine-tuned.
Originally published by the authors of HuBERT [Hsu et al., 2021] under MIT License and redistributed with the same license. [License, Source]
Please refer to torchaudio.pipelines.Wav2Vec2Bundle for the usage.
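A minimal sketch of using this bundle for feature extraction, following the Wav2Vec2Bundle usage pattern; the file name "speech.wav" is a placeholder for any speech recording:

```python
import torch
import torchaudio

# Load the pre-trained HuBERT "extra large" bundle and instantiate the model.
bundle = torchaudio.pipelines.HUBERT_XLARGE
model = bundle.get_model()

# "speech.wav" is a placeholder path.
waveform, sample_rate = torchaudio.load("speech.wav")

# Resample to the rate the bundle expects, if necessary.
if sample_rate != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

# The model is not fine-tuned, so it is suited to feature extraction
# rather than direct transcription.
with torch.inference_mode():
    features, _ = model.extract_features(waveform)
```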