
quantize_qat

torch.ao.quantization.quantize_qat(model, run_fn, run_args, inplace=False)[source]

Perform quantization-aware training and output a quantized model.

Parameters:
  • model – input model

  • run_fn – a function that runs the prepared model; this can be a training loop or a function that simply evaluates the model. It is called as run_fn(model, *run_args).

  • run_args – positional arguments passed to run_fn (after the model itself)

  • inplace – carry out the model transformations in place; the original module is mutated (default: False)

Returns:

Quantized model.
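
Example:

A minimal sketch of typical usage with eager-mode quantization. The model class TinyNet, the train_loop function, and the choice of the "fbgemm" backend are illustrative assumptions, not part of this API's documentation.

    import torch
    import torch.nn as nn
    from torch.ao.quantization import (
        QuantStub, DeQuantStub, get_default_qat_qconfig, quantize_qat
    )

    # Toy model; QuantStub/DeQuantStub mark the quantization boundaries
    # used by eager-mode quantization.
    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.quant = QuantStub()
            self.fc = nn.Linear(16, 4)
            self.relu = nn.ReLU()
            self.dequant = DeQuantStub()

        def forward(self, x):
            x = self.quant(x)
            x = self.relu(self.fc(x))
            return self.dequant(x)

    def train_loop(model, num_batches):
        # Hypothetical training loop used as run_fn; real code would iterate
        # over a DataLoader and compute a task-specific loss.
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()
        for _ in range(num_batches):
            inputs = torch.randn(8, 16)
            targets = torch.randn(8, 4)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()

    model = TinyNet()
    # The model must carry a QAT qconfig before quantize_qat is called.
    model.qconfig = get_default_qat_qconfig("fbgemm")

    # quantize_qat prepares the model for QAT, runs run_fn(model, *run_args)
    # to train with fake quantization, then converts it to a quantized model.
    quantized_model = quantize_qat(model, train_loop, [10])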
