# Quantization Operation Coverage

Quantized tensors support a limited subset of the data manipulation methods of regular full-precision tensors. For NN operators included in PyTorch, support is restricted to:

- 8-bit weights (dtype = `torch.qint8`)
- 8-bit activations (dtype = `torch.quint8`)

Note that operator implementations currently support per-channel quantization only for the weights of the **conv** and **linear** operators. Furthermore, the minimum and maximum of the input data are mapped linearly to the minimum and maximum of the quantized data type such that zero is represented with no quantization error.
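For illustration, the sketch below quantizes a float tensor with both supported 8-bit dtypes; the scale and zero-point values are arbitrary placeholders, not recommended settings.

```python
import torch

x = torch.randn(2, 3)

# Activations: 8-bit unsigned, affine (asymmetric) quantization.
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)

# Weights: 8-bit signed. Symmetric schemes keep zero_point at 0,
# so float 0.0 maps to integer 0 with no quantization error.
qw = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.qint8)

print(qx.int_repr())    # underlying uint8 values
print(qw.dequantize())  # back to float32, with rounding error
```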

Additional data types and quantization schemes can be implemented through the custom operator mechanism.

Many operations on quantized tensors are available under the same API as their full-precision counterparts in `torch` or `torch.nn`. Quantized versions of NN modules that perform re-quantization are available in `torch.nn.quantized`; these operations take the output quantization parameters (scale and zero_point) explicitly in their signatures, as the sketch below shows.
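As a minimal sketch of this convention, `QFunctional` (covered below under `torch.nn.quantized`) carries its output scale and zero_point as attributes and re-quantizes its result with them; the values here are placeholders.

```python
import torch

qa = torch.quantize_per_tensor(torch.randn(4), 0.05, 64, torch.quint8)
qb = torch.quantize_per_tensor(torch.randn(4), 0.05, 64, torch.quint8)

qf = torch.nn.quantized.QFunctional()
qf.scale, qf.zero_point = 0.1, 64  # output quantization parameters
qc = qf.add(qa, qb)                # result is re-quantized with (0.1, 64)
```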

In addition, fused versions corresponding to common fusion patterns that impact quantization are available in `torch.nn.intrinsic.quantized`.

For quantization-aware training, modules prepared for QAT are available in `torch.nn.qat` and `torch.nn.intrinsic.qat`.

The following operation list is sufficient to cover typical CNN and RNN models.

## Quantized `torch.Tensor` operations

Operations that are available from the `torch` namespace or as methods on `Tensor` for quantized tensors (a short per-channel sketch follows the list):

- `quantize_per_tensor()` — Convert a float tensor to a quantized tensor with per-tensor scale and zero point
- `quantize_per_channel()` — Convert a float tensor to a quantized tensor with per-channel scales and zero points
- View-based operations like `view()`, `as_strided()`, `expand()`, `flatten()`, `select()`, python-style indexing, etc. — work as on a regular tensor (if quantization is not per-channel)
- `copy_()` — Copies src to self in place
- `clone()` — Returns a deep copy of the passed-in tensor
- `dequantize()` — Convert a quantized tensor to a float tensor
- `equal()` — Compares two tensors; returns true if quantization parameters and all integer elements are the same
- `int_repr()` — Returns the underlying integer representation of the quantized tensor
- `max()` — Returns the maximum value of the tensor (reduction only)
- `mean()` — Mean function. Supported variants: reduction, dim, out
- `min()` — Returns the minimum value of the tensor (reduction only)
- `q_scale()` — Returns the scale of a per-tensor quantized tensor
- `q_zero_point()` — Returns the zero point of a per-tensor quantized tensor
- `q_per_channel_scales()` — Returns the scales of a per-channel quantized tensor
- `q_per_channel_zero_points()` — Returns the zero points of a per-channel quantized tensor
- `q_per_channel_axis()` — Returns the channel axis of a per-channel quantized tensor
- `resize_()` — In-place resize
- `sort()` — Sorts the tensor
- `topk()` — Returns the k largest values of a tensor
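A short sketch of the per-channel variants; the scales and zero points below are arbitrary placeholders.

```python
import torch

w = torch.randn(4, 3)  # e.g. a weight with 4 output channels

scales = torch.tensor([0.1, 0.05, 0.2, 0.1], dtype=torch.double)
zero_points = torch.zeros(4, dtype=torch.long)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0, dtype=torch.qint8)

qw.q_per_channel_scales()       # per-channel scales
qw.q_per_channel_zero_points()  # per-channel zero points
qw.q_per_channel_axis()         # 0
qw.dequantize()                 # back to float
```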

## torch.nn.functional

Basic activations and pooling/resizing operations are supported, and they operate directly on quantized tensors (a short sketch follows the list):

- `relu()` - Rectified linear unit (copy)
- `relu_()` - Rectified linear unit (in place)
- `elu()` - ELU
- `max_pool2d()` - Max pooling
- `adaptive_avg_pool2d()` - Adaptive average pooling
- `avg_pool2d()` - Average pooling
- `interpolate()` - Interpolation
- `hardsigmoid()` - Hardsigmoid
- `hardswish()` - Hardswish
- `hardtanh()` - Hardtanh
- `upsample()` - Upsampling
- `upsample_bilinear()` - Bilinear upsampling
- `upsample_nearest()` - Nearest-neighbor upsampling
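A minimal sketch of calling these on a quantized tensor:

```python
import torch
import torch.nn.functional as F

qx = torch.quantize_per_tensor(torch.randn(1, 2, 8, 8), 0.1, 128, torch.quint8)

F.relu(qx)                        # elementwise; output keeps the input qparams
F.max_pool2d(qx, kernel_size=2)   # pooling over the quantized values
F.adaptive_avg_pool2d(qx, (1, 1))
```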

## torch.nn.intrinsic

Fused modules are provided for common patterns in CNNs. Combining several operations together (like convolution and ReLU) allows for better quantization accuracy.

`torch.nn.intrinsic` — float versions of the modules that can be swapped one-to-one with their quantized counterparts (a fusion sketch follows the list below):

- `ConvBn1d` — Conv1d + BatchNorm1d
- `ConvBn2d` — Conv2d + BatchNorm2d
- `ConvBn3d` — Conv3d + BatchNorm3d
- `ConvBnReLU1d` — Conv1d + BatchNorm1d + ReLU
- `ConvBnReLU2d` — Conv2d + BatchNorm2d + ReLU
- `ConvBnReLU3d` — Conv3d + BatchNorm3d + ReLU
- `ConvReLU1d` — Conv1d + ReLU
- `ConvReLU2d` — Conv2d + ReLU
- `ConvReLU3d` — Conv3d + ReLU
- `LinearReLU` — Linear + ReLU
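A minimal fusion sketch using `torch.quantization.fuse_modules` on a toy `nn.Sequential`; the module names `'0'`, `'1'`, `'2'` come from `Sequential`'s default naming.

```python
import torch

m = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
)
m.eval()
# In eval mode the BatchNorm is folded into the convolution, producing a
# torch.nn.intrinsic.ConvReLU2d; fusing in train mode yields ConvBnReLU2d.
fused = torch.quantization.fuse_modules(m, [['0', '1', '2']])
```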

`torch.nn.intrinsic.qat` — versions of the fused layers for quantization-aware training:

- `ConvBn2d` — Conv2d + BatchNorm2d
- `ConvBn3d` — Conv3d + BatchNorm3d
- `ConvBnReLU2d` — Conv2d + BatchNorm2d + ReLU
- `ConvBnReLU3d` — Conv3d + BatchNorm3d + ReLU
- `ConvReLU2d` — Conv2d + ReLU
- `ConvReLU3d` — Conv3d + ReLU
- `LinearReLU` — Linear + ReLU

`torch.nn.intrinsic.quantized` — quantized versions of the fused layers for inference (no BatchNorm variants, since BatchNorm is usually folded into the convolution for inference):

- `LinearReLU` — Linear + ReLU
- `ConvReLU1d` — 1D convolution + ReLU
- `ConvReLU2d` — 2D convolution + ReLU
- `ConvReLU3d` — 3D convolution + ReLU

## torch.nn.qat

Layers for quantization-aware training (e.g., `Linear`, `Conv2d`), which fake-quantize their weights during training.
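A minimal QAT sketch, assuming the eager-mode API; a real model would also wrap its inputs and outputs with `QuantStub`/`DeQuantStub`, as in the post-training example further below.

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU())
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')

qat_model = torch.quantization.prepare_qat(model)  # swaps in qat/intrinsic.qat modules

# ... run the usual training loop here; forward passes fake-quantize ...

qat_model.eval()
quantized = torch.quantization.convert(qat_model)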

## torch.quantization

Functions for eager mode quantization (a complete post-training flow is sketched after the list):

- `add_observer_()` — Adds an observer for the leaf modules (if a quantization configuration is provided)
- `add_quant_dequant()` — Wraps the leaf child module using `QuantWrapper`
- `convert()` — Converts a float module with observers into its quantized counterpart; a quantization configuration must be present
- `get_observer_dict()` — Traverses the module children and collects all observers into a `dict`
- `prepare()` — Prepares a copy of a model for quantization
- `prepare_qat()` — Prepares a copy of a model for quantization-aware training
- `propagate_qconfig_()` — Propagates quantization configurations through the module hierarchy and assigns them to each leaf module
- `quantize()` — Function for eager-mode post-training static quantization
- `quantize_dynamic()` — Function for eager-mode post-training dynamic quantization
- `quantize_qat()` — Function for eager-mode quantization-aware training
- `swap_module()` — Swaps the module with its quantized counterpart (if it is quantizable and has an observer)
- `default_eval_fn()` — Default evaluation function used by `torch.quantization.quantize()`
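A minimal post-training static quantization sketch with this API; `'fbgemm'` targets x86 servers (`'qnnpack'` would target ARM).

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # float -> quantized boundary
        self.fc = torch.nn.Linear(16, 8)
        self.dequant = torch.quantization.DeQuantStub()  # quantized -> float boundary

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

m = M().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')

prepared = torch.quantization.prepare(m)          # inserts observers
prepared(torch.randn(4, 16))                      # calibration pass(es) with sample data
quantized = torch.quantization.convert(prepared)  # swaps in quantized modules
```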

Functions for FX graph mode quantization (a usage sketch follows the list):

- `prepare_fx()` - Prepares a model for post-training quantization with FX graph mode quantization
- `prepare_qat_fx()` - Prepares a model for quantization-aware training with FX graph mode quantization
- `convert_fx()` - Converts a prepared model to a quantized model with FX graph mode quantization
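A minimal sketch, assuming the `torch.quantization.quantize_fx` entry points; the exact `prepare_fx` signature has changed across releases (newer versions also require `example_inputs`), so treat this as illustrative.

```python
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU()).eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}  # "" applies the qconfig globally

prepared = prepare_fx(model, qconfig_dict)  # symbolically traces and inserts observers
prepared(torch.randn(4, 16))                # calibration
quantized = convert_fx(prepared)
```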
Quantization configurations:

- `QConfig` — Quantization configuration class
- `default_qconfig` — Same as `QConfig(activation=default_observer, weight=default_weight_observer)` (see `QConfig`)
- `default_qat_qconfig` — Same as `QConfig(activation=default_fake_quant, weight=default_weight_fake_quant)` (see `QConfig`)
- `default_dynamic_qconfig` — Same as `QConfigDynamic(weight=default_weight_observer)` (see `QConfigDynamic`)
- `float16_dynamic_qconfig` — Same as `QConfigDynamic(weight=NoopObserver.with_args(dtype=torch.float16))` (see `QConfigDynamic`)
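A sketch of building a custom configuration; the observer choices below are arbitrary examples, not recommendations.

```python
import torch
from torch.quantization import QConfig, MinMaxObserver, MovingAverageMinMaxObserver

my_qconfig = QConfig(
    activation=MovingAverageMinMaxObserver.with_args(dtype=torch.quint8),
    weight=MinMaxObserver.with_args(dtype=torch.qint8,
                                    qscheme=torch.per_tensor_symmetric),
)
# assign to a module before prepare()/prepare_qat():
# model.qconfig = my_qconfig
```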

Stubs:

- `DeQuantStub` - Placeholder module for the `dequantize()` operation in float-valued models
- `QuantStub` - Placeholder module for the `quantize()` operation in float-valued models
- `QuantWrapper` — Wraps the module to be quantized; inserts the `QuantStub` and `DeQuantStub`

Observers for computing quantization parameters. The defaults are listed below; the remaining observers are available from `torch.quantization.observer`:

- `default_observer` — Same as `MinMaxObserver.with_args(reduce_range=True)`
- `default_weight_observer` — Same as `MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric)`
- `Observer` — Abstract base class for observers
- `MinMaxObserver` — Derives the quantization parameters from the running minimum and maximum of the observed tensor inputs (per-tensor variant)
- `MovingAverageMinMaxObserver` — Derives the quantization parameters from the running averages of the minimums and maximums of the observed tensor inputs (per-tensor variant)
- `PerChannelMinMaxObserver` — Derives the quantization parameters from the running minimum and maximum of the observed tensor inputs (per-channel variant)
- `MovingAveragePerChannelMinMaxObserver` — Derives the quantization parameters from the running averages of the minimums and maximums of the observed tensor inputs (per-channel variant)
- `HistogramObserver` — Derives the quantization parameters by creating a histogram of running minimums and maximums

Observers that do not compute quantization parameters:

- `RecordingObserver` — Records all incoming tensors; used for debugging only
- `NoopObserver` — Pass-through observer; used when there are no quantization parameters to compute (i.e., quantization to `float16`)
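Observers are modules: each forward call records statistics, from which quantization parameters can then be computed. A minimal sketch:

```python
import torch

obs = torch.quantization.MinMaxObserver(dtype=torch.quint8,
                                        qscheme=torch.per_tensor_affine)
for _ in range(3):
    obs(torch.randn(8, 8))  # each call updates the running min/max

scale, zero_point = obs.calculate_qparams()
```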

FakeQuantize module:

- `FakeQuantize` — Module for simulating quantization/dequantization at training time

## torch.nn.quantized

Quantized versions of standard NN layers (a usage sketch for `FloatFunctional` follows the list):

- `Quantize` — Quantization layer, used to automatically replace `QuantStub`
- `DeQuantize` — Dequantization layer, used to replace `DeQuantStub`
- `FloatFunctional` — Wrapper class to make stateless float operations stateful so that they can be replaced with quantized versions
- `QFunctional` — Wrapper class for quantized versions of stateless operations like `torch.add`
- `Conv1d` — 1D convolution
- `Conv2d` — 2D convolution
- `Conv3d` — 3D convolution
- `Linear` — Linear (fully connected) layer
- `MaxPool2d` — 2D max pooling
- `ReLU6` — Rectified linear unit with cut-off at the quantized representation of 6
- `ELU` — ELU
- `Hardswish` — Hardswish
- `BatchNorm2d` — BatchNorm2d. *Note: this module is usually fused with Conv or Linear. Performance on ARM is not optimized.*
- `BatchNorm3d` — BatchNorm3d. *Note: this module is usually fused with Conv or Linear. Performance on ARM is not optimized.*
- `LayerNorm` — LayerNorm. *Note: performance on ARM is not optimized.*
- `GroupNorm` — GroupNorm. *Note: performance on ARM is not optimized.*
- `InstanceNorm1d` — InstanceNorm1d. *Note: performance on ARM is not optimized.*
- `InstanceNorm2d` — InstanceNorm2d. *Note: performance on ARM is not optimized.*
- `InstanceNorm3d` — InstanceNorm3d. *Note: performance on ARM is not optimized.*
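A minimal sketch of the `FloatFunctional` pattern: arithmetic on quantized tensors cannot use plain `+`, so the float model routes it through `FloatFunctional`, which `convert()` replaces with `QFunctional`.

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(8, 8, 3, padding=1)
        # stateful stand-in for '+'; swapped with QFunctional on convert()
        self.add = nn.quantized.FloatFunctional()

    def forward(self, x):
        return self.add.add(x, self.conv(x))
```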

## torch.nn.quantized.dynamic

Layers used in dynamically quantized models (i.e., only the weights are stored quantized; activations are quantized on the fly).
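A minimal dynamic-quantization sketch; only `nn.Linear` is targeted here.

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU())

# Replaces nn.Linear with torch.nn.quantized.dynamic.Linear:
# weights are stored as qint8, activations are quantized on the fly.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```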

## torch.nn.quantized.functional

Functional versions of quantized NN layers (many of them accept explicit output quantization parameters):

- `adaptive_avg_pool2d()` — 2D adaptive average pooling
- `avg_pool2d()` — 2D average pooling
- `avg_pool3d()` — 3D average pooling
- `conv1d()` — 1D convolution
- `conv2d()` — 2D convolution
- `conv3d()` — 3D convolution
- `interpolate()` — Down-/up-sampler
- `linear()` — Linear (fully connected) op
- `max_pool2d()` — 2D max pooling
- `elu()` — ELU
- `hardsigmoid()` — Hardsigmoid
- `hardswish()` — Hardswish
- `hardtanh()` — Hardtanh
- `upsample()` — Upsampler; will be deprecated in favor of `interpolate()`
- `upsample_bilinear()` — Bilinear upsampler; will be deprecated in favor of `interpolate()`
- `upsample_nearest()` — Nearest-neighbor upsampler; will be deprecated in favor of `interpolate()`

## Quantized dtypes and quantization schemes

`torch.qscheme` — Type describing the quantization scheme of a tensor. Supported schemes:

- `torch.per_tensor_affine` — per tensor, asymmetric
- `torch.per_channel_affine` — per channel, asymmetric
- `torch.per_tensor_symmetric` — per tensor, symmetric
- `torch.per_channel_symmetric` — per channel, symmetric

`torch.dtype` — Type describing the data. Supported dtypes:

- `torch.quint8` — 8-bit unsigned integer
- `torch.qint8` — 8-bit signed integer
- `torch.qint32` — 32-bit signed integer
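Both properties can be inspected directly on a quantized tensor; a short sketch:

```python
import torch

q = torch.quantize_per_tensor(torch.randn(4), 0.1, 10, torch.quint8)
q.qscheme()  # torch.per_tensor_affine
q.dtype      # torch.quint8

w = torch.quantize_per_channel(
    torch.randn(4, 3),
    torch.full((4,), 0.1, dtype=torch.double),  # per-channel scales
    torch.zeros(4, dtype=torch.long),           # per-channel zero points
    axis=0,
    dtype=torch.qint8,
)
w.qscheme()  # torch.per_channel_affine
```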