
prepare_qat_fx

torch.quantization.quantize_fx.prepare_qat_fx(model, qconfig_dict, prepare_custom_config_dict=None, backend_config_dict=None)

Prepare a model for quantization aware training

Parameters
  • model (torch.nn.Module) – model to prepare; must be in train mode

  • qconfig_dict – quantization configuration dictionary; see prepare_fx() (a sketch of the format appears after the Returns section below)

  • prepare_custom_config_dict – see prepare_fx()

  • backend_config_dict – see prepare_fx()

Returns

A GraphModule with fake quant modules (configured by qconfig_dict), ready for quantization aware training
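
The qconfig_dict passed to prepare_qat_fx() uses the same dictionary format as prepare_fx(). As a minimal sketch (see prepare_fx() for the full set of supported keys), a configuration that sets a global QAT qconfig and then overrides or disables it for particular op types or named submodules could look like the following; the submodule name "sub.fc" is a hypothetical placeholder:

import torch
from torch.ao.quantization import get_default_qat_qconfig

qat_qconfig = get_default_qat_qconfig('fbgemm')

# Sketch of a qconfig_dict: "" sets the global default, the other keys
# narrow it to specific op types or named submodules.
qconfig_dict = {
    "": qat_qconfig,                                  # global default
    "object_type": [(torch.nn.Conv2d, qat_qconfig)],  # per-op-type override
    "module_name": [("sub.fc", None)],                # None skips quantization for this submodule
}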

Example:

import torch
from torch.ao.quantization import get_default_qat_qconfig
from torch.ao.quantization.quantize_fx import prepare_qat_fx

qconfig = get_default_qat_qconfig('fbgemm')

def train_loop(model, train_data):
    model.train()
    for image, target in train_data:
        ...

# float_model is an ordinary torch.nn.Module; it must be in train mode
float_model.train()
qconfig_dict = {"": qconfig}
prepared_model = prepare_qat_fx(float_model, qconfig_dict)
# Run quantization aware training
train_loop(prepared_model, train_data)
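
Once training has finished, the prepared model is typically lowered to an actual quantized model with convert_fx(). The following is a minimal sketch of that follow-up step, continuing from the example above:

from torch.ao.quantization.quantize_fx import convert_fx

# Switch to eval mode before conversion, then convert the
# fake-quantized graph to a quantized model.
prepared_model.eval()
quantized_model = convert_fx(prepared_model)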
