torch.fake_quantize_per_tensor_affine

torch.fake_quantize_per_tensor_affine(input, scale, zero_point, quant_min, quant_max) → Tensor

Returns a new tensor with the data in input fake quantized using scale, zero_point, quant_min and quant_max.

\text{output} = \big( \min\big(\text{quant\_max}, \max(\text{quant\_min}, \text{std::nearby\_int}(\text{input} / \text{scale}) + \text{zero\_point})\big) - \text{zero\_point} \big) \times \text{scale}
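The same round-clamp-rescale can be reproduced with ordinary tensor ops. The sketch below is a minimal reference illustration, not the library's implementation; the helper name fake_quantize_ref is made up for this example:

import torch

def fake_quantize_ref(x, scale, zero_point, quant_min, quant_max):
    # Quantize: scale the input, round to the nearest integer, shift by zero_point.
    q = torch.round(x / scale) + zero_point
    # Clamp to the quantized domain [quant_min, quant_max].
    q = torch.clamp(q, quant_min, quant_max)
    # Dequantize back to float32 so the output stays in the input's value range.
    return (q - zero_point) * scale

x = torch.randn(4)
print(torch.allclose(fake_quantize_ref(x, 0.1, 0, 0, 255),
                     torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)))
# Expected: True (torch.round and std::nearby_int both round halves to even)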
Parameters:
  • input (Tensor) – the input value(s), torch.float32 tensor

  • scale (double scalar or float32 Tensor) – quantization scale

  • zero_point (int64 scalar or int32 Tensor) – quantization zero_point

  • quant_min (int64) – lower bound of the quantized domain

  • quant_max (int64) – upper bound of the quantized domain

Returns:

A new fake-quantized torch.float32 tensor

Return type:

Tensor

Example:

>>> x = torch.randn(4)
>>> x
tensor([ 0.0552,  0.9730,  0.3973, -1.0780])
>>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)
tensor([0.1000, 1.0000, 0.4000, 0.0000])
>>> torch.fake_quantize_per_tensor_affine(x, torch.tensor(0.1), torch.tensor(0), 0, 255)
tensor([0.1000, 1.0000, 0.4000, 0.0000])
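
The tensor overload for scale and zero_point is handy when the quantization parameters are computed from data rather than fixed. A minimal sketch, assuming a simple asymmetric min/max calibration (the calibration code below is illustrative, not a torch API):

import torch

x = torch.randn(16)
quant_min, quant_max = 0, 255

# Derive per-tensor affine parameters from the observed range (illustration only;
# a real observer would also guard against a zero-width range).
x_min, x_max = x.min(), x.max()
scale = (x_max - x_min) / (quant_max - quant_min)               # 0-dim float32 tensor
zero_point = torch.clamp(torch.round(quant_min - x_min / scale),
                         quant_min, quant_max).to(torch.int32)  # 0-dim int32 tensor

y = torch.fake_quantize_per_tensor_affine(x, scale, zero_point, quant_min, quant_max)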
