LinearReLU

class torch.nn.intrinsic.quantized.dynamic.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)[source]

A LinearReLU module, fused from Linear and ReLU modules, that can be used for dynamic quantization. Supports both FP16 and INT8 quantization.

We adopt the same interface as torch.ao.nn.quantized.dynamic.Linear.

Variables:

Same as torch.ao.nn.quantized.dynamic.Linear.

Examples:

>>> import torch
>>> from torch import nn
>>> m = nn.intrinsic.quantized.dynamic.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
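
Since the constructor accepts a dtype argument, the same module can hold FP16 weights by passing dtype=torch.float16. In practice the module is typically produced by fusing a Linear followed by a ReLU and then running dynamic quantization; the sketch below assumes the default dynamic quantization mappings convert the fused torch.nn.intrinsic.LinearReLU into this module:

>>> import torch
>>> from torch import nn
>>> # FP16 variant: same interface, half-precision weights
>>> m_fp16 = nn.intrinsic.quantized.dynamic.LinearReLU(20, 30, dtype=torch.float16)
>>> # Sketch of the usual workflow: define a float model,
>>> # fuse Linear + ReLU, then quantize dynamically
>>> class M(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.fc = nn.Linear(20, 30)
...         self.relu = nn.ReLU()
...     def forward(self, x):
...         return self.relu(self.fc(x))
>>> model = M().eval()
>>> fused = torch.ao.quantization.fuse_modules(model, [["fc", "relu"]])
>>> quantized = torch.ao.quantization.quantize_dynamic(
...     fused, {nn.intrinsic.LinearReLU}, dtype=torch.qint8
... )
>>> output = quantized(torch.randn(128, 20))
>>> print(output.size())
torch.Size([128, 30])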
