
torcheval.metrics.functional.perplexity

torcheval.metrics.functional.perplexity(input: Tensor, target: Tensor, ignore_index: Optional[int] = None) → Tensor

Perplexity measures how well a model predicts sample data; lower values indicate better predictions. It is calculated as the exponential of the average negative log-likelihood per token:

perplexity = exp(sum of negative log-likelihoods / number of tokens)
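The formula above can be sketched in a few lines of pure Python (a minimal illustration of the math, not the torcheval implementation, which operates on batched tensors):

```python
import math

def perplexity_sketch(logits, targets):
    """Perplexity as exp(mean negative log-likelihood).

    logits:  list of per-token score lists, each of length vocab_size
    targets: list of ground-truth vocab indices, one per token
    """
    total_nll = 0.0
    for scores, tgt in zip(logits, targets):
        # Negative log-likelihood of the target under a softmax over scores:
        # -log p(tgt) = log(sum(exp(scores))) - scores[tgt]
        log_z = math.log(sum(math.exp(s) for s in scores))
        total_nll += log_z - scores[tgt]
    return math.exp(total_nll / len(targets))

# Same numbers as the first example in the Examples section:
# one sample, two tokens, vocab size 3.
perplexity_sketch([[0.3659, 0.7025, 0.3104],
                   [0.0097, 0.6577, 0.1947]], [2, 1])  # ≈ 2.7593
```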

Its class version is torcheval.metrics.text.Perplexity.

Parameters:
  • input (Tensor) – Predicted unnormalized scores (i.e., logits) for each token, with shape (n_samples, seq_len, vocab_size).
  • target (Tensor) – Ground-truth vocab indices, with shape (n_samples, seq_len).
  • ignore_index (int, optional) – If specified, target values equal to ignore_index are ignored when calculating perplexity. Default: None.
Returns:

Perplexity for the input and target.

Return type:

Tensor

Examples

>>> import torch
>>> from torcheval.metrics.functional.text import perplexity
>>> input = torch.tensor([[[0.3659, 0.7025, 0.3104], [0.0097, 0.6577, 0.1947]]])
>>> target = torch.tensor([[2, 1]])
>>> perplexity(input, target)
tensor(2.7593, dtype=torch.float64)
>>> input = torch.tensor([[[0.3, 0.7, 0.3, 0.1], [0.5, 0.4, 0.1, 0.4], [0.1, 0.1, 0.2, 0.5]], [[0.1, 0.6, 0.1, 0.5], [0.3, 0.7, 0.3, 0.4], [0.3, 0.7, 0.3, 0.4]]])
>>> target = torch.tensor([[2, 1, 3], [1, 0, 1]])
>>> perplexity(input, target)
tensor(3.6216, dtype=torch.float64)
>>> input = torch.tensor([[[0.3659, 0.7025, 0.3104], [0.0097, 0.6577, 0.1947]]])
>>> target = torch.tensor([[2, 1]])
>>> perplexity(input, target, ignore_index=1)
tensor(3.5372, dtype=torch.float64)
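The last example drops the token whose target equals ignore_index. A pure-Python sketch of that masking (again an illustration, not the torcheval implementation): ignored positions contribute to neither the negative log-likelihood sum nor the token count, so the average is taken over the remaining tokens only.

```python
import math

def masked_perplexity_sketch(logits, targets, ignore_index=None):
    """Perplexity over only the tokens whose target is not ignore_index."""
    nlls = []
    for scores, tgt in zip(logits, targets):
        if tgt == ignore_index:
            continue  # masked token: excluded from both sum and count
        log_z = math.log(sum(math.exp(s) for s in scores))
        nlls.append(log_z - scores[tgt])
    return math.exp(sum(nlls) / len(nlls))

# Same numbers as the example above: with ignore_index=1, only the
# first token (target 2) is scored.
masked_perplexity_sketch([[0.3659, 0.7025, 0.3104],
                          [0.0097, 0.6577, 0.1947]],
                         [2, 1], ignore_index=1)  # ≈ 3.5372
```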
