
torchaudio.pipelines.HUBERT_XLARGE

HuBERT model (“extra large” architecture), pre-trained on 60,000 hours of unlabeled audio from the Libri-Light dataset [Kahn et al., 2020]; not fine-tuned.

Originally published by the authors of HuBERT [Hsu et al., 2021] under the MIT License and redistributed under the same license. [License, Source]

Please refer to torchaudio.pipelines.Wav2Vec2Bundle for usage.
