HUBERT_BASE
torchaudio.pipelines.HUBERT_BASE
HuBERT model (“base” architecture), pre-trained on 960 hours of unlabeled audio from the LibriSpeech dataset [Panayotov et al., 2015] (the combination of “train-clean-100”, “train-clean-360”, and “train-other-500”), not fine-tuned.
Originally published by the authors of HuBERT [Hsu et al., 2021] under the MIT License and redistributed under the same license. [License, Source]
Please refer to torchaudio.pipelines.Wav2Vec2Bundle for usage instructions.