PyTorch Hub for Researchers

Explore and extend models from the latest cutting edge research.

Discover and publish models to a pre-trained model repository designed for research exploration. Check out the models for researchers, learn How It Works, or Contribute Models.

This is a beta release; we will be collecting feedback and improving PyTorch Hub over the coming months.
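Every model listed below is published through the same two-call `torch.hub` API: `torch.hub.list` discovers a repo's entrypoints and `torch.hub.load` instantiates one. A minimal sketch (repo and entrypoint names as published on the Hub; downloading the repo's `hubconf.py` requires network access):

```python
import torch

# Discover the entrypoints a Hub repo publishes (fetches its hubconf.py).
entrypoints = torch.hub.list('pytorch/vision')
print(entrypoints[:5])

# Load a model by repo and entrypoint name, with pre-trained weights.
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()  # switch to inference mode
```

`torch.hub.help('pytorch/vision', 'resnet18')` prints the entrypoint's docstring if you want to see its arguments before loading.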


YOLOv5

Ultralytics YOLOv5 🚀 for object detection, instance segmentation and image classification.

56.8k

Deeplabv3

DeepLabV3 models with ResNet-50, ResNet-101 and MobileNet-V3 backbones

17.5k

AlexNet

The 2012 ImageNet winner achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner up.

17.5k

Densenet

Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion.

17.5k

FCN

Fully-Convolutional Network model with ResNet-50 and ResNet-101 backbones

17.5k

Inception_v3

Also called GoogLeNet v3, a well-known ConvNet trained on ImageNet, introduced in 2015

17.5k

MobileNet v2

Efficient networks optimized for speed and memory, with residual blocks

17.5k

ResNet

Deep residual networks pre-trained on ImageNet

17.5k

ResNext

Next-generation ResNets, more efficient and accurate

17.5k

ShuffleNet v2

An efficient ConvNet optimized for speed and memory, pre-trained on ImageNet

17.5k

SqueezeNet

AlexNet-level accuracy with 50x fewer parameters.

17.5k

vgg-nets

Award-winning ConvNets from the 2014 ImageNet ILSVRC challenge

17.5k

Wide ResNet

Wide Residual Networks

17.5k

Silero Voice Activity Detector

Pre-trained Voice Activity Detector

8.2k

Silero Speech-To-Text Models

A set of compact enterprise-grade pre-trained STT Models for multiple languages.

5.8k

Silero Text-To-Speech Models

A set of compact enterprise-grade pre-trained TTS Models for multiple languages

5.8k

GhostNet

Efficient networks that generate more features from cheap operations

4.4k

SNNMLP

Brain-inspired Multilayer Perceptron with Spiking Neurons

4.4k

Once-for-All

Once-for-all (OFA) decouples training and search, and achieves efficient inference across various edge devices and resource constraints.

1.9k

Open-Unmix

Reference implementation for music source separation

1.5k

SimpleNet

Let's keep it simple: using simple architectures to outperform deeper, more complex ones

53
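Once loaded, the image classifiers above share the same interface: a module mapping a batch of normalized RGB tensors to ImageNet logits. A minimal end-to-end sketch using ResNet-18 as a stand-in (weights left uninitialized with `pretrained=False` to skip the weight download; fetching the repo itself still needs network access):

```python
import torch

# Load the architecture only; pass pretrained=True for real predictions.
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=False)
model.eval()

# One dummy RGB image at the standard ImageNet resolution.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)

print(logits.shape)  # one logit per ImageNet class: (1, 1000)
```

For real images, resize to 224x224 and normalize with the ImageNet mean and standard deviation before the forward pass; each model's Hub page documents its exact preprocessing.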