leaky_relu

torch.nn.quantized.functional.leaky_relu(input, negative_slope=0.01, inplace=False, scale=None, zero_point=None) → Tensor

Applies element-wise, \text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x)

Parameters
  • input – Quantized input tensor

  • negative_slope – Slope applied to negative input values

  • inplace – Inplace modification of the input tensor

  • scale – Quantization scale of the output tensor

  • zero_point – Quantization zero point of the output tensor

See LeakyReLU for more details.
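
Example (a minimal sketch; the input values and quantization parameters below are illustrative choices, not prescribed by the API):

    import torch
    from torch.nn.quantized import functional as qF

    # Quantize a float tensor to quint8 (scale/zero_point are arbitrary here)
    x = torch.tensor([-1.0, 0.0, 0.5, 2.0])
    qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=64, dtype=torch.quint8)

    # scale and zero_point set the quantization parameters of the output tensor
    out = qF.leaky_relu(qx, negative_slope=0.01, scale=0.05, zero_point=64)
    print(out.dequantize())

Note that with inplace=False, scale and zero_point determine how the result is requantized; very small negative outputs may round to the zero point at coarse scales.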
