
What is Distributed Data Parallel (DDP)

Created On: Sep 27, 2022 | Last Updated: Nov 14, 2024 | Last Verified: Nov 05, 2024

Author: Suraj Subramanian

What you will learn
  • How DDP works under the hood

  • What is DistributedSampler

  • How gradients are synchronized across GPUs

Prerequisites

Follow along with the video below or on YouTube.

This tutorial is a gentle introduction to PyTorch DistributedDataParallel (DDP), which enables data parallel training in PyTorch. Data parallelism is a way to process multiple data batches across multiple devices simultaneously to achieve better performance. In PyTorch, the DistributedSampler ensures each device gets a non-overlapping input batch. The model is replicated on all the devices; each replica calculates gradients and simultaneously synchronizes them with the others using the ring all-reduce algorithm.
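To make those moving parts concrete, here is a minimal sketch of a DDP training script. It assumes a single node with NVIDIA GPUs and a launch via torchrun (which sets the RANK, LOCAL_RANK, and WORLD_SIZE environment variables); the model, dataset, and hyperparameters are placeholders, not part of this tutorial's code.

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process it spawns.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A toy model and dataset, purely for illustration.
    model = nn.Linear(20, 1).to(local_rank)
    # Wrapping with DDP replicates the model once per process and hooks
    # gradient synchronization into the backward pass.
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 20), torch.randn(1024, 1))
    # DistributedSampler hands each rank a non-overlapping shard of the data.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for inputs, targets in loader:
            inputs, targets = inputs.to(local_rank), targets.to(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()  # gradients are all-reduced across ranks here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Saved as, say, ddp_sketch.py (a hypothetical filename), this would be launched with one process per GPU: `torchrun --nproc_per_node=4 ddp_sketch.py`.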

This illustrative tutorial provides a more in-depth Python view of the mechanics of DDP.

Why you should prefer DDP over DataParallel (DP)

DataParallel is an older approach to data parallelism. DP is trivially simple (requiring just one extra line of code) but it is much less performant. DDP improves on DP's design in a few ways:

DataParallel | DistributedDataParallel
--- | ---
More overhead; the model is replicated and destroyed at each forward pass | The model is replicated only once
Only supports single-node parallelism | Supports scaling to multiple machines
Slower; uses multithreading on a single process and runs into Global Interpreter Lock (GIL) contention | Faster (no GIL contention) because it uses multiprocessing
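For contrast, this is the single extra line that DP requires; the toy model is a placeholder, used only to show the API shape. One process drives all visible GPUs via multithreading, which is exactly the per-forward replication and GIL contention the table describes.

```python
import torch
import torch.nn as nn

# DataParallel: a single process replicates the model across visible GPUs
# on every forward pass and splits the input batch among them.
model = nn.Linear(20, 1).cuda()   # toy model, for illustration only
dp_model = nn.DataParallel(model)  # the "one extra line"

out = dp_model(torch.randn(64, 20).cuda())  # batch is scattered across GPUs
```

DDP trades this one-liner for the process-group setup shown earlier, and in return replicates the model only once per process and synchronizes gradients without GIL contention.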
