Linear

class torch.ao.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)[source]

A dynamic quantized linear module with floating point tensors as inputs and outputs. It adopts the same interface as torch.nn.Linear; see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation.

Similar to torch.nn.Linear, attributes are randomly initialized at module creation time and will be overwritten later.

Variables
  • weight (Tensor) – the non-learnable quantized weights of the module, of shape (out_features, in_features).

  • bias (Tensor) – the non-learnable floating point bias of the module, of shape (out_features). If bias_ is True, the values are initialized to zero.

Examples:

>>> m = nn.quantized.dynamic.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
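
The quantized weight and floating point bias described under Variables can be inspected through the accessor methods inherited from torch.ao.nn.quantized.Linear. A minimal sketch; the printed values assume the default dtype=torch.qint8 and the module m created above:

>>> m.weight().dtype
torch.qint8
>>> m.bias().shape
torch.Size([30])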
classmethod from_float(mod, use_precomputed_fake_quant=False)[source]

Create a dynamic quantized module from a float module or qparams_dict

Parameters

mod (Module) – a float module, either produced by torch.ao.quantization utilities or provided by the user
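
In practice, from_float is rarely called by hand: the eager mode dynamic quantization entry point torch.ao.quantization.quantize_dynamic swaps each matching torch.nn.Linear for this module and calls from_float internally. A minimal sketch; the printed class path is an assumption about the current module layout and may differ across PyTorch versions:

>>> import torch
>>> float_model = torch.nn.Sequential(torch.nn.Linear(20, 30))
>>> qmodel = torch.ao.quantization.quantize_dynamic(
...     float_model, {torch.nn.Linear}, dtype=torch.qint8
... )
>>> type(qmodel[0])
<class 'torch.ao.nn.quantized.dynamic.modules.linear.Linear'>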

classmethod from_reference(ref_qlinear)[source]

Create a (fbgemm/qnnpack) dynamic quantized module from a reference quantized module.

Parameters

ref_qlinear (Module) – a reference quantized module, either produced by torch.ao.quantization functions or provided by the user
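
A minimal sketch of calling from_reference directly. The reference module is built here with torch.ao.nn.quantized.reference.Linear, and the weight_qparams dictionary (its keys and values) is an assumption about that constructor; in real workflows the reference module is normally produced by the FX graph mode convert step rather than by hand:

>>> weight_qparams = {
...     "qscheme": torch.per_tensor_affine,
...     "dtype": torch.qint8,  # dynamic Linear expects qint8 (or float16) weights
...     "scale": 0.1,
...     "zero_point": 0,
... }
>>> ref_qlinear = torch.ao.nn.quantized.reference.Linear(20, 30, weight_qparams=weight_qparams)
>>> dm = torch.ao.nn.quantized.dynamic.Linear.from_reference(ref_qlinear)
>>> dm(torch.randn(4, 20)).shape
torch.Size([4, 30])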
