prepare_qat

torch.ao.quantization.prepare_qat(model, mapping=None, inplace=False)[source]

Prepares the model for quantization-aware training by swapping eligible float modules for their quantization-aware (QAT) counterparts, which insert fake-quantization during training.

Quantization configuration must be assigned beforehand to individual submodules via their `.qconfig` attribute.

Parameters:
  • model – input model to prepare (modified in place only when inplace=True)

  • mapping – dictionary mapping float module classes to the QAT module classes that replace them; if None, a default mapping is used.

  • inplace – if True, carry out the model transformation in place, mutating the original module; otherwise operate on a copy (default: False).
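A minimal eager-mode sketch of how prepare_qat is typically used: define a model, set its `.qconfig`, put it in training mode, and let prepare_qat swap supported modules for their QAT versions. The backend string "fbgemm" and the tiny model here are illustrative choices, not part of the API contract.

```python
import torch
import torch.ao.quantization as tq

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 1, kernel_size=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = TinyModel()
model.train()  # prepare_qat expects a model in training mode

# Assign a QAT qconfig to the whole model; submodules inherit it.
model.qconfig = tq.get_default_qat_qconfig("fbgemm")

# Swap supported float modules for their QAT counterparts.
# With inplace=False (the default), the original model is left untouched.
prepared = tq.prepare_qat(model)

# The QAT conv now carries a fake-quantization module for its weights,
# so a normal training forward pass simulates quantization effects.
out = prepared(torch.randn(1, 1, 4, 4))
```

After (fake-quantized) training, the prepared model is normally finalized with torch.ao.quantization.convert to obtain an actual quantized model.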
