
torch.nn.functional.log_softmax

torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None)

Applies a softmax followed by a logarithm.

While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.

See LogSoftmax for more details.
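The instability of the naive two-step computation is easy to see on logits with a large spread. A minimal sketch (the tensor values below are illustrative, not from the documentation):

```python
import torch
import torch.nn.functional as F

# Logits with a large spread: softmax underflows to 0 for the small entries,
# so taking log() afterwards yields -inf for those positions.
x = torch.tensor([0.0, 100.0, 1000.0])

naive = torch.log(torch.softmax(x, dim=0))  # underflow, then log(0) = -inf
stable = F.log_softmax(x, dim=0)            # computed in one fused, stable step

print(naive)   # tensor([-inf, -inf, 0.])
print(stable)  # tensor([-1000., -900., 0.])
```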

Parameters
  • input (Tensor) – the input tensor.

  • dim (int) – A dimension along which log_softmax will be computed.

  • dtype (torch.dtype, optional) – the desired data type of the returned tensor. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows, as shown in the sketch after this list. Default: None.
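A short usage sketch of the dim and dtype parameters (the batch shape and half-precision input are illustrative assumptions, not part of the documentation):

```python
import torch
import torch.nn.functional as F

# Hypothetical batch of 4 samples with 10 classes, stored in half precision.
logits = torch.randn(4, 10).half()

# dim=1 normalizes over the class dimension; dtype=torch.float32 casts the
# input before the operation so the softmax runs in full precision.
log_probs = F.log_softmax(logits, dim=1, dtype=torch.float32)

print(log_probs.dtype)             # torch.float32
print(log_probs.exp().sum(dim=1))  # each row sums to ~1
```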
