PyTorch Hub For Researchers

Explore and extend models from the latest cutting edge research.

Discover and publish models to a pre-trained model repository designed for research exploration. Check out the models for Researchers, learn How It Works, or Contribute Models.

This is a beta release – we will be collecting feedback and improving the PyTorch Hub over the coming months.
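Any model published to the Hub can be pulled into your own code with a single call to `torch.hub.load`, which fetches the model definition (and, optionally, pre-trained weights) from the publisher's GitHub repository. A minimal sketch using the `pytorch/vision` repo's ResNet-18 entrypoint (repo and entrypoint names as they appear on PyTorch Hub; the first call downloads the repo and weights, so it needs network access):

```python
import torch

# Load ResNet-18 from the pytorch/vision Hub repo.
# pretrained=True also downloads the ImageNet weights (~45 MB).
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()

# Run a forward pass on a dummy image-sized tensor.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)

print(logits.shape)  # one score per ImageNet class: torch.Size([1, 1000])
```

You can also discover what a repo publishes without reading its source: `torch.hub.list('pytorch/vision')` returns the available entrypoint names, and `torch.hub.help('pytorch/vision', 'resnet18')` prints the entrypoint's docstring.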

YOLOv5

Ultralytics YOLOv5 🚀 for object detection, instance segmentation and image classification.

56.0k

MobileNet v2

Efficient networks optimized for speed and memory, with residual blocks

17.3k

ResNet

Deep residual networks pre-trained on ImageNet

17.3k

ResNext

Next-generation ResNets, more efficient and accurate

17.3k

ShuffleNet v2

An efficient ConvNet optimized for speed and memory, pre-trained on ImageNet

17.3k

SqueezeNet

AlexNet-level accuracy with 50x fewer parameters.

17.3k

vgg-nets

Award winning ConvNets from 2014 ImageNet ILSVRC challenge

17.3k

Wide ResNet

Wide Residual Networks

17.3k

Deeplabv3

DeepLabV3 models with ResNet-50, ResNet-101 and MobileNet-V3 backbones

17.3k

AlexNet

The 2012 ImageNet winner achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner-up.

17.3k

Densenet

Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion.

17.3k

FCN

Fully-Convolutional Network model with ResNet-50 and ResNet-101 backbones

17.3k

Inception_v3

Also called GoogLeNet v3, a famous ConvNet from 2015, trained on ImageNet

17.3k

Silero Voice Activity Detector

Pre-trained Voice Activity Detector

7.3k

Silero Speech-To-Text Models

A set of compact enterprise-grade pre-trained STT Models for multiple languages.

5.6k

Silero Text-To-Speech Models

A set of compact enterprise-grade pre-trained TTS Models for multiple languages.

5.6k

SNNMLP

Brain-inspired Multilayer Perceptron with Spiking Neurons

4.3k

GhostNet

Efficient networks by generating more features from cheap operations

4.3k

Once-for-All

Once-for-all (OFA) decouples training and search, and achieves efficient inference across various edge devices and resource constraints.

1.9k

Open-Unmix

Reference implementation for music source separation

1.4k

SimpleNet

Let's keep it simple: using simple architectures to outperform deeper and more complex architectures

53