class Linear(torch.ao.nn.qat.Linear):
    r"""
    A linear module attached with FakeQuantize modules for weight,
    used for dynamic quantization aware training.

    We adopt the same interface as `torch.nn.Linear`; please see
    https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation.

    Similar to `torch.nn.Linear`, with FakeQuantize modules initialized to
    default.
    """

    def __init__(
        self,
        in_features,
        out_features,
        bias=True,
        qconfig=None,
        device=None,
        dtype=None,
    ) -> None:
        super().__init__(in_features, out_features, bias, qconfig, device, dtype)
        if not torch.ao.quantization.qconfig._activation_is_memoryless(qconfig):
            raise ValueError(
                "Dynamic QAT requires a memoryless observer. "
                "This means a MovingAverage observer with averaging constant equal to 1"
            )
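The constructor above rejects any qconfig whose activation observer is not "memoryless", i.e. a MovingAverage observer with an averaging constant of 1, so that each forward pass computes quantization parameters from the current batch alone, mimicking dynamic quantization. A minimal usage sketch, assuming a recent PyTorch build where `torch.ao.quantization` is available (the specific feature sizes and dtype choices below are illustrative, not prescribed by the source):

```python
import torch
from torch.ao.quantization import (
    FakeQuantize,
    MovingAverageMinMaxObserver,
    QConfig,
    default_weight_fake_quant,
)

# Activation fake-quant backed by a memoryless observer:
# averaging_constant=1 means each batch fully replaces the running min/max,
# which is what _activation_is_memoryless() checks for.
act_fake_quant = FakeQuantize.with_args(
    observer=MovingAverageMinMaxObserver,
    averaging_constant=1,
    dtype=torch.quint8,
)
qconfig = QConfig(activation=act_fake_quant, weight=default_weight_fake_quant)

# Construct the dynamic-QAT linear layer; sizes here are arbitrary.
qat_linear = torch.ao.nn.qat.dynamic.Linear(16, 8, qconfig=qconfig)
out = qat_linear(torch.randn(4, 16))
print(out.shape)  # same shape as a plain nn.Linear(16, 8) would produce
```

Passing a qconfig whose activation observer keeps a running average (averaging constant < 1) would instead raise the `ValueError` shown in the source.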