PyTorch Recipes

Recipes are bite-sized, actionable examples of how to use specific PyTorch features, different from our full-length tutorials.

Loading data in PyTorch

Learn how to use PyTorch packages to prepare and load common datasets for your model.


Defining a Neural Network

Learn how to use PyTorch's torch.nn package to create and define a neural network for the MNIST dataset.
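As a sketch of what such a definition looks like (the layer sizes here are illustrative, not the exact architecture from the recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A minimal convolutional network for 28x28 grayscale MNIST digits.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1)
        self.fc1 = nn.Linear(32 * 26 * 26, 10)  # 28 - 3 + 1 = 26 after the conv

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = torch.flatten(x, 1)      # flatten everything except the batch dim
        return self.fc1(x)           # one logit per digit class

net = Net()
out = net(torch.randn(1, 1, 28, 28))  # a single dummy image
```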


What is a state_dict in PyTorch

Learn how state_dict objects and Python dictionaries are used in saving or loading models from PyTorch.
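A minimal illustration of what a state_dict holds (the tiny Linear model is just for demonstration):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# A model's state_dict maps each parameter/buffer name to its tensor.
sd = model.state_dict()
print(list(sd.keys()))  # a Linear layer stores 'weight' and 'bias'

# Optimizers have a state_dict too, holding hyperparameters
# and per-parameter state.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
print(list(optimizer.state_dict().keys()))  # 'state' and 'param_groups'
```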


Saving and loading models for inference in PyTorch

Learn about the two approaches for saving and loading models for inference in PyTorch - via the state_dict and via the entire model.
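The state_dict approach, which the PyTorch docs recommend, looks roughly like this (file path is arbitrary):

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(3, 1)
path = os.path.join(tempfile.gettempdir(), "model_weights.pt")

# Recommended approach: save only the learned parameters.
torch.save(model.state_dict(), path)

# To load, instantiate the same architecture, then restore the weights.
model2 = nn.Linear(3, 1)
model2.load_state_dict(torch.load(path))
model2.eval()  # put dropout/batch-norm layers in eval mode before inference
```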


Saving and loading a general checkpoint in PyTorch

Saving and loading a general checkpoint model for inference or resuming training can be helpful for picking up where you last left off. In this recipe, explore how to save and load multiple checkpoints.
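The core idea is to bundle everything needed to resume training into one dictionary; a minimal sketch (the epoch and loss values are placeholders):

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
path = os.path.join(tempfile.gettempdir(), "ckpt.pt")

# Save model and optimizer state together, plus bookkeeping values.
torch.save({
    "epoch": 5,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "loss": 0.42,
}, path)

# Restore everything and pick up where training left off.
ckpt = torch.load(path)
model.load_state_dict(ckpt["model_state_dict"])
optimizer.load_state_dict(ckpt["optimizer_state_dict"])
start_epoch = ckpt["epoch"] + 1
```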


Saving and loading multiple models in one file using PyTorch

In this recipe, learn how saving and loading multiple models can be helpful for reusing models that you have previously trained.


Warmstarting model using parameters from a different model in PyTorch

Learn how warmstarting the training process by partially loading a model or loading a partial model can help your model converge much faster than training from scratch.
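One common way to warmstart is to copy over only the parameters whose names and shapes match the target model; a sketch with two toy models whose last layers differ:

```python
import torch
import torch.nn as nn

# Pretrained source model and a target model whose output layer differs.
src = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
dst = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))

src_sd = src.state_dict()
dst_sd = dst.state_dict()

# Keep only entries that exist in the target with a matching shape;
# the mismatched output layer keeps its fresh initialization.
filtered = {k: v for k, v in src_sd.items()
            if k in dst_sd and v.shape == dst_sd[k].shape}
dst_sd.update(filtered)
dst.load_state_dict(dst_sd)
```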


Saving and loading models across devices in PyTorch

Learn how saving and loading models across devices (CPUs and GPUs) is relatively straightforward using PyTorch.


Zeroing out gradients in PyTorch

Learn when you should zero out gradients and how doing so keeps gradients from unintentionally accumulating across training iterations.
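In a typical training loop, gradients are cleared at the start of each iteration; a minimal sketch with dummy data:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 2), torch.randn(8, 1)

for _ in range(3):
    optimizer.zero_grad()   # clear gradients left over from the previous step
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()         # without zero_grad, these would add to old gradients
    optimizer.step()
```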


PyTorch Benchmark

Learn how to use PyTorch's benchmark module to measure and compare the performance of your code.
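A minimal example of `torch.utils.benchmark.Timer`, which handles warmup and synchronization details that a plain `timeit` would miss (the matmul being timed is arbitrary):

```python
import torch
from torch.utils import benchmark

# Time a small matrix multiply; setup code runs once, stmt runs repeatedly.
t = benchmark.Timer(
    stmt="x @ x",
    setup="x = torch.randn(64, 64)",
)
m = t.timeit(100)   # returns a Measurement over 100 runs
print(m.mean)       # mean seconds per run
```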


PyTorch Benchmark (quick start)

Learn how to measure snippet run times and collect instructions.


PyTorch Profiler

Learn how to use PyTorch's profiler to measure operator execution time and memory consumption.
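A minimal sketch of profiling a couple of CPU ops with `torch.profiler` (the workload is arbitrary):

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Record operator timings and memory usage for everything inside the block.
with profile(activities=[ProfilerActivity.CPU], profile_memory=True) as prof:
    x = torch.randn(128, 128)
    y = x @ x

# Print an aggregated per-operator summary.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```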


Model Interpretability using Captum

Learn how to use Captum to attribute the predictions of an image classifier to their corresponding image features, and how to visualize the attribution results.


How to use TensorBoard with PyTorch

Learn basic usage of TensorBoard with PyTorch, and how to visualize data in the TensorBoard UI.


Dynamic Quantization

Apply dynamic quantization to a simple LSTM model.
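The same `quantize_dynamic` call works on any module with supported layer types; a sketch on Linear layers rather than the recipe's LSTM:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

# Store Linear weights as int8 and quantize activations on the fly;
# no calibration data or retraining is needed.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
out = qmodel(torch.randn(1, 16))
```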


TorchScript for Deployment

Learn how to export your trained model in TorchScript format and how to load your TorchScript model in C++ and do inference.
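The Python side of the workflow, sketched with a toy model (the saved file is what a C++ program would load via `torch::jit::load`):

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))
path = os.path.join(tempfile.gettempdir(), "model_scripted.pt")

scripted = torch.jit.script(model)  # compile the module to TorchScript
scripted.save(path)                 # serialized archive, loadable from C++

loaded = torch.jit.load(path)       # round-trip check in Python
out = loaded(torch.randn(1, 4))
```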


Deploying with Flask

Learn how to use Flask, a lightweight web server, to quickly set up a web API from your trained PyTorch model.


PyTorch Mobile Performance Recipes

List of recipes for performance optimizations for using PyTorch on Mobile (Android and iOS).


Making Android Native Application That Uses PyTorch Android Prebuilt Libraries

Learn how to build an Android application from scratch that uses the LibTorch C++ API and a TorchScript model with a custom C++ operator.


Fuse Modules recipe

Learn how to fuse a list of PyTorch modules into a single module to reduce the model size before quantization.


Quantization for Mobile Recipe

Learn how to reduce the model size and make it run faster without losing much on accuracy.


Script and Optimize for Mobile

Learn how to convert a model to TorchScript and, optionally, optimize it for mobile apps.


Model Preparation for iOS Recipe

Learn how to add a model to an iOS project and use the PyTorch pod for iOS.


Model Preparation for Android Recipe

Learn how to add a model to an Android project and use the PyTorch library for Android.


Profiling PyTorch RPC-Based Workloads

How to use the PyTorch profiler to profile RPC-based workloads.


Automatic Mixed Precision

Use torch.cuda.amp to reduce runtime and save memory on NVIDIA GPUs.
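A sketch of the standard autocast + GradScaler training step; the CUDA check makes it a no-op passthrough on CPU-only machines (loss scaling only matters when float16 is actually in use):

```python
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

model = nn.Linear(8, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op when disabled

x = torch.randn(4, 8, device=device)
y = torch.randn(4, 1, device=device)

for _ in range(2):
    optimizer.zero_grad()
    # Run the forward pass in mixed precision where it is safe to do so.
    with torch.autocast(device_type=device, enabled=use_cuda):
        loss = nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()  # scale loss to avoid float16 underflow
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()
```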


Performance Tuning Guide

Tips for achieving optimal performance.


Shard Optimizer States with ZeroRedundancyOptimizer

How to use ZeroRedundancyOptimizer to reduce memory consumption.


Direct Device-to-Device Communication with TensorPipe RPC

How to use RPC with direct GPU-to-GPU communication.

