
Optimizing Vision Transformer Model for Deployment

Jeff Tang, Geeta Chauhan

Vision Transformer models apply the cutting-edge attention-based transformer architecture, introduced in Natural Language Processing where it achieved all kinds of state-of-the-art (SOTA) results, to Computer Vision tasks. Facebook's Data-efficient Image Transformers (DeiT) is a Vision Transformer model trained on ImageNet for image classification.

In this tutorial, we will first cover what DeiT is and how to use it, then go through the complete steps of scripting, quantizing, optimizing, and using the model in iOS and Android apps. We will also compare the performance of the quantized, optimized model with the non-quantized, non-optimized model, showing the benefit of applying quantization and optimization at each step.

What is DeiT

Convolutional Neural Networks (CNNs) have been the main models for image classification since deep learning took off in 2012, but CNNs typically require hundreds of millions of images for training to achieve SOTA results. DeiT is a vision transformer model that requires far less data and computing resources for training to compete with the leading CNNs in image classification, which is made possible by two key components of DeiT:

  • Data augmentation that simulates training on a much larger dataset;

  • Native distillation that allows the transformer network to learn from a CNN’s output.

DeiT shows that Transformers can be successfully applied to computer vision tasks, with limited access to data and resources. For more details on DeiT, see the repo and paper.
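The distillation component can be sketched as a loss that mixes the ground-truth labels with the CNN teacher's hard predictions. The following is a minimal illustration, not DeiT's actual training code; the function name and the equal weighting via `alpha` are assumptions for demonstration:

```python
import torch
import torch.nn.functional as F

def hard_distillation_loss(student_cls_logits, student_distill_logits,
                           labels, teacher_logits, alpha=0.5):
    # The class token's head learns from the ground-truth labels
    ce = F.cross_entropy(student_cls_logits, labels)
    # The distillation token's head learns from the CNN teacher's
    # hard (argmax) predictions
    teacher_labels = teacher_logits.argmax(dim=1)
    distill = F.cross_entropy(student_distill_logits, teacher_labels)
    return (1 - alpha) * ce + alpha * distill
```

At inference time, DeiT averages the predictions of the two heads.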

Classifying Images with DeiT

Follow the README.md at the DeiT repository for detailed information on how to classify images using DeiT, or for a quick test, first install the required packages:

pip install torch torchvision timm pandas requests

To run in Google Colab, install dependencies by running the following command:

!pip install timm pandas requests

then run the script below:

from PIL import Image
import torch
import timm
import requests
import torchvision.transforms as transforms
from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD

print(torch.__version__) # should be 1.8.0 or above

model = torch.hub.load('facebookresearch/deit:main', 'deit_base_patch16_224', pretrained=True)

transform = transforms.Compose([
    transforms.Resize(256, interpolation=3),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD),
])

img = Image.open(requests.get("https://raw.githubusercontent.com/pytorch/ios-demo-app/master/HelloWorld/HelloWorld/HelloWorld/image.png", stream=True).raw)
img = transform(img)[None,]
out = model(img)
clsidx = torch.argmax(out)
print(clsidx.item())
Downloading: "https://github.com/facebookresearch/deit/zipball/main" to /var/lib/ci-user/.cache/torch/hub/main.zip
Downloading: "https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/deit_base_patch16_224-b5f2ef4d.pth

(A series of UserWarnings of the form "Overwriting deit_tiny_patch16_224 in registry with models.deit_tiny_patch16_224..." may also be printed while torch.hub registers the DeiT model variants, followed by a download progress bar; both can be ignored.)

The output should be 269, which, according to the ImageNet list of class index to labels file, maps to timber wolf, grey wolf, gray wolf, Canis lupus.
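To look beyond the top-1 prediction, you can softmax the logits and take the top five entries. The sketch below uses random logits standing in for the model output; the real `out` tensor from the code above has the same (1, 1000) shape:

```python
import torch

# Stand-in logits with the same (1, 1000) shape as the DeiT output `out`;
# real values would come from model(img)
out = torch.randn(1, 1000)

probs = torch.nn.functional.softmax(out, dim=1)  # convert logits to probabilities
top5_prob, top5_idx = probs.topk(5)              # five most likely class indices
print(top5_idx.tolist(), top5_prob.tolist())
```

The indices can then be looked up in the same ImageNet class-index-to-labels file mentioned above.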

Now that we have verified that we can use the DeiT model to classify images, let’s see how to modify the model so it can run on iOS and Android apps.

Scripting DeiT

To use the model on mobile, we first need to script the model. See the Script and Optimize recipe for a quick overview. Run the code below to convert the DeiT model used in the previous step to the TorchScript format that can run on mobile.

model = torch.hub.load('facebookresearch/deit:main', 'deit_base_patch16_224', pretrained=True)
scripted_model = torch.jit.script(model)
scripted_model.save("fbdeit_scripted.pt")
Using cache found in /var/lib/ci-user/.cache/torch/hub/facebookresearch_deit_main

This generates the scripted model file fbdeit_scripted.pt, about 346MB in size.
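The same script/save/load round trip can be checked end to end on a tiny stand-in module, confirming that a scripted model reproduces the eager model's outputs (`TinyNet` and the file name here are hypothetical, not part of the tutorial):

```python
import torch

class TinyNet(torch.nn.Module):
    """Hypothetical stand-in for DeiT, small enough to script instantly."""
    def forward(self, x):
        return torch.relu(x).mean(dim=1)

net = TinyNet()
scripted = torch.jit.script(net)          # same call used for the DeiT model
scripted.save("tiny_scripted.pt")
loaded = torch.jit.load("tiny_scripted.pt")

x = torch.randn(2, 4)
assert torch.allclose(net(x), loaded(x))  # scripted model matches eager outputs
```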

Quantizing DeiT

Quantization can significantly reduce the trained model size while keeping the inference accuracy about the same. Because DeiT is a transformer model, we can easily apply dynamic quantization to it; dynamic quantization works best for LSTM and transformer models (see here for more details).

Now run the code below:

# Use 'x86' for server inference (the old 'fbgemm' is still available but 'x86' is the recommended default) and 'qnnpack' for mobile inference.
backend = "x86" # note: replacing 'x86' with 'qnnpack' makes quantized inference much slower on this notebook
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend

quantized_model = torch.quantization.quantize_dynamic(model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
scripted_quantized_model = torch.jit.script(quantized_model)
scripted_quantized_model.save("fbdeit_quantized_scripted.pt")
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/ao/quantization/observer.py:220: UserWarning: Please use quant_min and quant_max to specify the range for observers. reduce_range will be deprecated in a future release of PyTorch.

This generates the scripted and quantized version of the model, fbdeit_quantized_scripted.pt, at about 89MB, a 74% reduction from the 346MB non-quantized model size!

You can use the scripted_quantized_model to generate the same inference result:

out = scripted_quantized_model(img)
clsidx = torch.argmax(out)
print(clsidx.item()) # The same output 269 should be printed
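The size reduction comes from storing the `torch.nn.Linear` weights as 8-bit integers instead of 32-bit floats. The effect is easy to reproduce on a small stand-in model; the layer sizes and file names below are arbitrary choices for illustration:

```python
import os
import torch

# A small stand-in model; layer sizes are arbitrary
fp32_model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

# Dynamic quantization converts the Linear weights to int8
int8_model = torch.quantization.quantize_dynamic(
    fp32_model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)

torch.jit.script(fp32_model).save("fp32_demo.pt")
torch.jit.script(int8_model).save("int8_demo.pt")
print(os.path.getsize("fp32_demo.pt"), os.path.getsize("int8_demo.pt"))
```

The int8 file should be roughly a quarter of the fp32 file, matching the roughly 4x reduction seen with DeiT.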

Optimizing DeiT

The final step before using the quantized and scripted model on mobile is to optimize it:

from torch.utils.mobile_optimizer import optimize_for_mobile
optimized_scripted_quantized_model = optimize_for_mobile(scripted_quantized_model)
optimized_scripted_quantized_model.save("fbdeit_optimized_scripted_quantized.pt")

The generated fbdeit_optimized_scripted_quantized.pt file has about the same size as the quantized, scripted, but non-optimized model. The inference result remains the same.

out = optimized_scripted_quantized_model(img)
clsidx = torch.argmax(out)
print(clsidx.item()) # Again, the same output 269 should be printed

Using Lite Interpreter

To see how much the Lite Interpreter can reduce the model size and speed up inference, let's create the lite version of the model:

optimized_scripted_quantized_model._save_for_lite_interpreter("fbdeit_optimized_scripted_quantized_lite.ptl")
ptl = torch.jit.load("fbdeit_optimized_scripted_quantized_lite.ptl")

Although the lite model size is comparable to the non-lite version, running the lite version on mobile is expected to speed up inference.
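The same `.ptl` save/load round trip can be exercised on a tiny scripted module (`Doubler` and the file name are hypothetical stand-ins for illustration):

```python
import torch

class Doubler(torch.nn.Module):
    """Hypothetical stand-in module for the lite-interpreter round trip."""
    def forward(self, x):
        return x * 2

scripted = torch.jit.script(Doubler())
# Save in the lite-interpreter (.ptl) format, as done for the DeiT model above
scripted._save_for_lite_interpreter("doubler_lite.ptl")
ptl = torch.jit.load("doubler_lite.ptl")
print(ptl(torch.ones(3)))
```

On mobile, the `.ptl` file would instead be loaded by the lite interpreter runtime bundled in the iOS and Android demo apps.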

Comparing Inference Speed

To see how the inference speed differs for the five models: the original model, the scripted model, the quantized-and-scripted model, the optimized-quantized-and-scripted model, and the lite model, run the code below:

with torch.autograd.profiler.profile(use_cuda=False) as prof1:
    out = model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof2:
    out = scripted_model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof3:
    out = scripted_quantized_model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof4:
    out = optimized_scripted_quantized_model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof5:
    out = ptl(img)

print("original model: {:.2f}ms".format(prof1.self_cpu_time_total/1000))
print("scripted model: {:.2f}ms".format(prof2.self_cpu_time_total/1000))
print("scripted & quantized model: {:.2f}ms".format(prof3.self_cpu_time_total/1000))
print("scripted & quantized & optimized model: {:.2f}ms".format(prof4.self_cpu_time_total/1000))
print("lite model: {:.2f}ms".format(prof5.self_cpu_time_total/1000))
original model: 109.41ms
scripted model: 108.43ms
scripted & quantized model: 99.80ms
scripted & quantized & optimized model: 116.21ms
lite model: 107.82ms

The results running on a Google Colab are:

original model: 1236.69ms
scripted model: 1226.72ms
scripted & quantized model: 593.19ms
scripted & quantized & optimized model: 598.01ms
lite model: 600.72ms

The following code summarizes the inference time taken by each model and the percentage reduction of each model relative to the original model.

import pandas as pd

profs = [prof1, prof2, prof3, prof4, prof5]
times = [p.self_cpu_time_total / 1000 for p in profs]

df = pd.DataFrame({'Model': ['original model', 'scripted model', 'scripted & quantized model', 'scripted & quantized & optimized model', 'lite model']})
df = pd.concat([df, pd.DataFrame(
    [["{:.2f}ms".format(t), "{:.2f}%".format((times[0] - t) / times[0] * 100)] for t in times],
    columns=['Inference Time', 'Reduction'])], axis=1)
print(df)


        Model                             Inference Time    Reduction
0   original model                             1236.69ms           0%
1   scripted model                             1226.72ms        0.81%
2   scripted & quantized model                  593.19ms       52.03%
3   scripted & quantized & optimized model      598.01ms       51.64%
4   lite model                                  600.72ms       51.43%

