
Multi-node finetuning

Congratulations! You’ve finally escaped the struggles of being “GPU poor” and now have access to a multi-node setup. You can bid farewell to the days of sweating over memory-efficient optimizations, but get ready for new challenges as you navigate the complexities of distributed computing.

You will learn:
  • Why multi-node training is useful

  • How to set up the torchtune package on a SLURM cluster

  • How to fine-tune a Llama3.3 70B model w/ full parameter updates (not LoRA)

Prerequisites
  • Be familiar with distributed training in torchtune

  • Already know basic SLURM commands

Advantages of multi-node training

More machines means more memory! This is cool for several reasons:

  1. Bigger models: With more memory, you can train larger models such as Llama3.1 405B, Deepseek-V3, and more.

  2. Longer data: For many fine-tuning tasks like writing code, it’s helpful to have long context lengths; however, longer context lengths mean more memory is needed for activations.

  3. Higher quality: With more memory, you can do full parameter updates (not LoRA) and use optimizers like AdamW (not low-precision optimizers), both of which can potentially improve the quality of your training.

  4. Faster training: With the ability to fit more data in memory, you can use higher batch sizes and turn off memory optimizations like activation checkpointing, thereby decreasing the time it takes for training to complete (see the config-override sketch right after this list).
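As a sketch only: torchtune recipes accept key=value config overrides on the command line (the same syntax used for checkpoint_dir and output_dir later in this tutorial). The batch_size and enable_activation_checkpointing keys below are common torchtune config names but may differ in your config, and in practice you would append these overrides to the full launch command shown later rather than running this bare command.

$ tune run full_finetune_distributed --config llama3_3/70B_full_multinode \
    batch_size=4 enable_activation_checkpointing=False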

Note

Low inter-node bandwidth & FSDP: We utilize PyTorch’s Fully Sharded Data Parallel (FSDP) to distribute models over multiple devices. In order to distribute training, FSDP runs an all-gather operation for each forward pass and an all-gather (usually) plus a reduce-scatter operation for each backward pass. These operations (usually) block training from continuing until they complete, so with a slow inter-node connection, training speed may be reduced. For more on this, please refer to this Github Issue.

Training Llama3.3 70B on 2 nodes

Let’s get training! We’ll be utilizing SLURM, a common cluster workflow manager, and we assume a decent working knowledge of SLURM for this tutorial. First, we need to install torchtune. The installation is largely the same as the normal install instructions; the main difference is that it’s recommended you install the package into a virtual environment that is accessible from all nodes in your cluster, e.g. on a shared filesystem.
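For instance, a minimal sketch of setting up such an environment on a shared mount might look like the following (the /mnt/slurm path and environment name are placeholders for your own setup; see the install instructions for the exact package versions to use):

# Create a virtual environment on the shared filesystem so every node can see it
$ python -m venv /mnt/slurm/torchtune-venv
$ source /mnt/slurm/torchtune-venv/bin/activate
# Install PyTorch and friends first, then torchtune
$ pip install torch torchvision torchao
$ pip install torchtune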

Next, we need to download the Llama3.3 70B model to your shared filesystem. You’ll need to make sure you have the correct credentials by following the steps outlined here.

$ tune download meta-llama/Llama-3.3-70B-Instruct --ignore-patterns "consolidated/*.pth" --output-dir SHARED_FS/Llama-3.3-70B-Instruct

Now that we have a downloaded model, let’s check out our example SLURM bash script.

#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

# ---------- SBATCH commands ---------- #
#SBATCH --job-name=torchtune-multi-node
#SBATCH --ntasks=2
#SBATCH --nodes=2
#SBATCH --gpus-per-task=8
#SBATCH --cpus-per-task=96
#SBATCH --partition=train

# ---------- Set env variables ---------- #
# Grab the IP for head node:
# You may need to set this to the fully qualified domain name of your head node
nodes=( $( scontrol show hostnames $SLURM_JOB_NODELIST ) )
head_node=${nodes[0]}
head_node_ip=$(srun --nodes=1 --ntasks=1 -w "$head_node" hostname --ip-address)
echo Node IP: $head_node_ip

# You might need to explicitly set the network interface for distributed backends:
# export NCCL_SOCKET_IFNAME=...
# export GLOO_SOCKET_IFNAME=...

export TORCH_DIST_INIT_BARRIER=1
export LOGLEVEL=INFO

# ---------- Launch training ---------- #
# You probably want to load in a virtual env w/ conda...
# module load conda
# conda activate torchtune
# ...or venv
# source torchtune/bin/activate

SHARED_FS=/mnt/slurm # <-- Replace w/ your filesystem
CHECKPOINT_DIR="$SHARED_FS/Llama-3.3-70B-Instruct"
OUTPUT_DIR="$SHARED_FS/Llama3.3-70B-fft-output"

# Adjust sbatch --ntasks and sbatch --nodes above and --nnodes below to your specific node count
srun tune run --nnodes 2 --nproc_per_node 8 --rdzv_id 101 --rdzv_backend c10d --rdzv_endpoint "$head_node_ip:29500" \
    full_finetune_distributed --config llama3_3/70B_full_multinode checkpoint_dir=$CHECKPOINT_DIR output_dir=$OUTPUT_DIR

There’s a lot of information in this script but here are the high-level parts:

  • We utilize SLURM specific commands like number of nodes, tasks, CPUs available, etc.

  • We launch with tune run, which utilizes torchrun under the hood, and the full_finetune_distributed recipe to train just like on a single node

  • You can consider several cluster-specific environment variables (NCCL_BUFFSIZE, NCCL_DEBUG, FI_PROVIDER, etc.) in order to maximize GPU utilization, aid debugging, and more (a few examples are sketched below).
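As a sketch, such variables could be exported in the SLURM script before the srun line. The values below are placeholders for illustration, not recommendations:

# Print NCCL logs to help debug hangs or slow collectives
export NCCL_DEBUG=INFO
# Example of tuning the NCCL buffer size (value in bytes; placeholder only)
export NCCL_BUFFSIZE=2097152
# On clusters with AWS EFA you might select the libfabric provider explicitly
export FI_PROVIDER=efa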

Note

We may need to explicitly set the network interface for distributed backends. You can read more about PyTorch distributed backends here, but it’s also helpful to know that you can find your network interface by running ip addr (or ifconfig) on a specific node.
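For example, assuming the relevant interface on your nodes turns out to be named eth0 (the name on your cluster will likely differ), you could inspect and set it like this:

# List the network interfaces on a node to find the one carrying your inter-node IP
$ ip addr
# Then, in the SLURM script, point the distributed backends at that interface
export NCCL_SOCKET_IFNAME=eth0
export GLOO_SOCKET_IFNAME=eth0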

After updating the shared filesystem path in the bash script, we can launch the job using sbatch.

sbatch full_finetune_multinode.slurm

And the output of squeue should show our job running:

$ squeue
JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
1     train         torchtun slurm R       0:03      2 slurm-worker-[1-2]
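While the job is running, you can also follow the training logs. By default, SLURM writes stdout to a slurm-<jobid>.out file in the submission directory (your cluster or script may override this), so for the job above:

$ tail -f slurm-1.out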

Once training has completed, which should take roughly seven minutes in total (880 tok/s) with the default config, we can follow the instructions here in order to upload our beautiful new model to the Hugging Face Hub!
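As one example (assuming you’ve already authenticated with huggingface-cli login, and replacing the repo name with your own), the upload could look like:

# Push the fine-tuned checkpoint directory to your own Hub repo
$ huggingface-cli upload YOUR_USERNAME/Llama3.3-70B-fft /mnt/slurm/Llama3.3-70B-fft-output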

Future development

We’ve covered the basics of how to launch a fine-tuning job with SLURM on two nodes with FSDP. There are still more things we’re cooking up, including…

2D parallelism: Utilizing both FSDP and tensor parallelism in what is commonly referred to as 2D parallelism will decrease memory requirements even further, allowing us to lean even harder into the advantages listed above.

Longer context (ring attention, etc.): More memory and more machines mean we can train on longer sequences and take advantage of neat tricks like ring attention, where tokens are split across GPUs. You can read more about our plans for torchtune in this Github RFC.

Want other optimizations? Feel free to let us know by opening up a Github Issue on our repo or dropping us a line in Discord!
