Quickstart
Created On: Feb 09, 2021 | Last Updated: Aug 27, 2024 | Last Verified: Not Verified
This section runs through the API for common tasks in machine learning. Refer to the links in each section to dive deeper.
Working with data
PyTorch has two primitives to work with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
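Before we load a built-in dataset, note that a Dataset only needs to implement __len__ and __getitem__. As a minimal sketch of a custom dataset (the MyDataset name and the in-memory tensors are hypothetical, for illustration only):

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, samples, labels):
        # samples: a tensor of features; labels: a tensor of integer labels
        self.samples = samples
        self.labels = labels

    def __len__(self):
        # Number of samples in the dataset
        return len(self.samples)

    def __getitem__(self, idx):
        # Return one (sample, label) pair
        return self.samples[idx], self.labels[idx]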
PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio, all of which include datasets. For this tutorial, we will be using a TorchVision dataset.
The torchvision.datasets module contains Dataset objects for many real-world vision datasets such as CIFAR and COCO (full list here). In this tutorial, we use the FashionMNIST dataset. Every TorchVision Dataset includes two arguments: transform and target_transform, which modify the samples and labels respectively.
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
100%|##########| 26.4M/26.4M [00:01<00:00, 19.3MB/s]
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
100%|##########| 29.5k/29.5k [00:00<00:00, 326kB/s]
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
100%|##########| 4.42M/4.42M [00:00<00:00, 6.10MB/s]
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
100%|##########| 5.15k/5.15k [00:00<00:00, 32.6MB/s]
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
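The code above used only transform; target_transform modifies the labels in the same way. As a hedged illustration (not needed for the rest of this tutorial), here is a variant that one-hot encodes the integer label into a 10-element float tensor:

from torchvision.transforms import Lambda

one_hot_training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
    # Scatter a 1 into the position given by the integer label y.
    target_transform=Lambda(
        lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
    ),
)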
We pass the Dataset as an argument to DataLoader. This wraps an iterable over our dataset, and supports automatic batching, sampling, shuffling, and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.
batch_size = 64
# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
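DataLoader accepts more arguments than batch_size. For example, shuffle=True reshuffles the training data every epoch, and num_workers > 0 loads batches in background worker processes. A variant we could have used for training (not used in the rest of this tutorial; the variable name is illustrative):

shuffled_train_dataloader = DataLoader(
    training_data,
    batch_size=batch_size,
    shuffle=True,    # reshuffle samples every epoch
    num_workers=2,   # load batches in two background processes
)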
Read more about loading data in PyTorch.
Creating Models
To define a neural network in PyTorch, we create a class that inherits from nn.Module. We define the layers of the network in the __init__ function and specify how data will pass through the network in the forward function. To accelerate operations in the neural network, we move it to the GPU or MPS if available.
# Get cpu, gpu or mps device for training.
device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
print(f"Using {device} device")


# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)
Using cuda device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)
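As an optional sanity check, the trainable parameter count can be computed directly from the model; it follows from the layer sizes above (784*512+512 for the first Linear, 512*512+512 for the second, 512*10+10 for the third):

# Count trainable parameters: 401,920 + 262,656 + 5,130 = 669,706
total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {total_params}")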
Read more about building neural networks in PyTorch.
Optimizing the Model Parameters
To train a model, we need a loss function and an optimizer.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
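SGD is used here for simplicity; any optimizer from torch.optim plugs into the same training loop. For instance, Adam is a common drop-in alternative (the learning rate below is Adam's usual default, not tuned for this tutorial):

# Drop-in alternative (illustration only; the rest of this tutorial keeps SGD):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)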
In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and backpropagates the prediction error to adjust the model’s parameters.
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
We also check the model’s performance against the test dataset to ensure it is learning.
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    # Evaluate without tracking gradients; this reduces memory use and
    # speeds up the forward pass.
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
The training process is conducted over several iterations (epochs). During each epoch, the model learns parameters to make better predictions. We print the model’s accuracy and loss at each epoch; we’d like to see the accuracy increase and the loss decrease with every epoch.
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataloader, model, loss_fn, optimizer)
test(test_dataloader, model, loss_fn)
print("Done!")
Epoch 1
-------------------------------
loss: 2.303494 [ 64/60000]
loss: 2.294637 [ 6464/60000]
loss: 2.277102 [12864/60000]
loss: 2.269977 [19264/60000]
loss: 2.254235 [25664/60000]
loss: 2.237146 [32064/60000]
loss: 2.231055 [38464/60000]
loss: 2.205037 [44864/60000]
loss: 2.203240 [51264/60000]
loss: 2.170889 [57664/60000]
Test Error:
Accuracy: 53.9%, Avg loss: 2.168588
Epoch 2
-------------------------------
loss: 2.177787 [ 64/60000]
loss: 2.168083 [ 6464/60000]
loss: 2.114910 [12864/60000]
loss: 2.130412 [19264/60000]
loss: 2.087473 [25664/60000]
loss: 2.039670 [32064/60000]
loss: 2.054274 [38464/60000]
loss: 1.985457 [44864/60000]
loss: 1.996023 [51264/60000]
loss: 1.917241 [57664/60000]
Test Error:
Accuracy: 60.2%, Avg loss: 1.920374
Epoch 3
-------------------------------
loss: 1.951705 [ 64/60000]
loss: 1.919516 [ 6464/60000]
loss: 1.808730 [12864/60000]
loss: 1.846550 [19264/60000]
loss: 1.740618 [25664/60000]
loss: 1.698733 [32064/60000]
loss: 1.708889 [38464/60000]
loss: 1.614436 [44864/60000]
loss: 1.646475 [51264/60000]
loss: 1.524308 [57664/60000]
Test Error:
Accuracy: 61.4%, Avg loss: 1.547092
Epoch 4
-------------------------------
loss: 1.612695 [ 64/60000]
loss: 1.570870 [ 6464/60000]
loss: 1.424730 [12864/60000]
loss: 1.489542 [19264/60000]
loss: 1.367256 [25664/60000]
loss: 1.373464 [32064/60000]
loss: 1.376744 [38464/60000]
loss: 1.304962 [44864/60000]
loss: 1.347154 [51264/60000]
loss: 1.230661 [57664/60000]
Test Error:
Accuracy: 62.7%, Avg loss: 1.260891
Epoch 5
-------------------------------
loss: 1.337803 [ 64/60000]
loss: 1.313278 [ 6464/60000]
loss: 1.151837 [12864/60000]
loss: 1.252142 [19264/60000]
loss: 1.123048 [25664/60000]
loss: 1.159531 [32064/60000]
loss: 1.175011 [38464/60000]
loss: 1.115554 [44864/60000]
loss: 1.160974 [51264/60000]
loss: 1.062730 [57664/60000]
Test Error:
Accuracy: 64.6%, Avg loss: 1.087374
Done!
Read more about Training your model.
Saving Models
A common way to save a model is to serialize the internal state dictionary (containing the model parameters).
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
Saved PyTorch Model State to model.pth
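If you intend to resume training later, a common pattern is to save a fuller checkpoint that also captures the optimizer state. A hedged sketch (the checkpoint.pth file name and the dictionary keys below are illustrative, not a fixed PyTorch convention):

torch.save(
    {
        "epochs": epochs,
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    "checkpoint.pth",
)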
Loading Models
The process for loading a model includes re-creating the model structure and loading the state dictionary into it.
model = NeuralNetwork().to(device)
model.load_state_dict(torch.load("model.pth", weights_only=True))
<All keys matched successfully>
This model can now be used to make predictions.
classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]
model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    x = x.to(device)
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')
Predicted: "Ankle boot", Actual: "Ankle boot"
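The model outputs raw logits; to read off a confidence score, you can normalize them with Softmax (an optional step, not part of the original script):

# Convert logits to probabilities to inspect the model's confidence.
probabilities = nn.Softmax(dim=1)(pred)
print(f"Confidence: {probabilities[0].max().item():.2f}")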
Read more about Saving & Loading your model.
Total running time of the script: (1 minute 5.795 seconds)