import torch
import torch.nn.functional as F
import torch.nn.intrinsic as nni
import torch.nn.qat as nnqat


class LinearReLU(nnqat.Linear, nni._FusedModule):
    r"""
    A LinearReLU module fused from Linear and ReLU modules, attached with
    FakeQuantize modules for weight, used in quantization aware training.

    We adopt the same interface as :class:`torch.nn.Linear`.

    Similar to `torch.nn.intrinsic.LinearReLU`, with FakeQuantize modules
    initialized to default.

    Attributes:
        weight: fake quant module for weight

    Examples::

        >>> m = nn.qat.LinearReLU(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    _FLOAT_MODULE = nni.LinearReLU

    def __init__(self, in_features, out_features, bias=True, qconfig=None):
        super(LinearReLU, self).__init__(in_features, out_features, bias, qconfig)

    def forward(self, input):
        # Fake-quantize the weight, then apply the fused Linear -> ReLU.
        return F.relu(F.linear(input, self.weight_fake_quant(self.weight), self.bias))

    @classmethod
    def from_float(cls, mod):
        return super(LinearReLU, cls).from_float(mod)

    def to_float(self):
        # Rebuild a plain float Linear from the trained parameters and re-fuse it with ReLU.
        linear = torch.nn.Linear(self.in_features, self.out_features, self.bias is not None)
        linear.weight = torch.nn.Parameter(self.weight.detach())
        if self.bias is not None:
            linear.bias = torch.nn.Parameter(self.bias.detach())
        relu = torch.nn.ReLU()
        return torch.nn.intrinsic.LinearReLU(linear, relu)
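A minimal usage sketch (not part of the listing above) of the from_float / to_float round trip, assuming a PyTorch version where torch.nn.intrinsic.qat and torch.quantization.get_default_qat_qconfig are available; the variable names are hypothetical:

    import torch
    import torch.nn as nn
    import torch.nn.intrinsic as nni
    import torch.nn.intrinsic.qat as nniqat
    from torch.quantization import get_default_qat_qconfig

    # Start from the fused float module (Linear followed by ReLU) and attach a QAT qconfig;
    # from_float requires mod.qconfig to be set.
    float_fused = nni.LinearReLU(nn.Linear(20, 30), nn.ReLU())
    float_fused.qconfig = get_default_qat_qconfig("fbgemm")

    # Swap in the QAT counterpart; the weight now passes through a FakeQuantize module.
    qat_mod = nniqat.LinearReLU.from_float(float_fused)

    x = torch.randn(128, 20)
    print(qat_mod(x).size())  # torch.Size([128, 30])

    # After (simulated) quantization aware training, recover plain float modules.
    recovered = qat_mod.to_float()
    print(type(recovered))  # the float torch.nn.intrinsic.LinearReLU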