torcheval.metrics.functional.multiclass_binned_auprc

torcheval.metrics.functional.multiclass_binned_auprc(input: Tensor, target: Tensor, num_classes: Optional[int] = None, *, threshold: Union[int, List[float], Tensor] = 100, average: Optional[str] = 'macro', optimization: str = 'vectorized') → Tuple[Tensor, Tensor][source]

Binned version of AUPRC, the Area Under the Precision-Recall Curve, for multiclass classification. Its class version is torcheval.metrics.MulticlassBinnedAUPRC.

Computation is done by computing the area under the precision/recall curve; precision and recall are computed for the buckets defined by threshold.

Parameters:
  • input (Tensor) – Tensor of label predictions. It should be probabilities or logits with shape of (n_samples, n_classes).
  • target (Tensor) – Tensor of ground truth labels with shape of (n_samples, ).
  • num_classes (int) – Number of classes.
  • threshold (Tensor, int, List[float]) – Either an integer representing the number of bins, a list of thresholds, or a tensor of thresholds. The same thresholds will be used for all tasks. If threshold is a tensor, it must be 1D. If list or tensor is given, the first element must be 0 and the last must be 1.
  • average (str, optional) –
    • 'macro' [default]:
      Calculate metrics for each class separately, and return their unweighted mean.
    • None:
      Calculate the metric for each class separately, and return the metric for every class.
  • optimization (str) – Choose the optimization to use. Accepted values: “vectorized” and “memory”. The “vectorized” optimization makes more use of vectorization but uses more memory; the “memory” optimization takes more steps but uses less memory. On GPUs, “vectorized” is typically faster at the cost of higher memory usage, while “memory” can be significantly slower. On CPUs, the “memory” optimization is recommended in all cases; it uses less memory and is faster.

Examples:

>>> import torch
>>> from torcheval.metrics.functional import multiclass_binned_auprc
>>> input = torch.tensor([[0.1, 0.2, 0.1], [0.4, 0.2, 0.1], [0.6, 0.1, 0.2], [0.4, 0.2, 0.3], [0.6, 0.2, 0.4]])
>>> target = torch.tensor([0, 1, 2, 1, 0])
>>> multiclass_binned_auprc(input, target, num_classes=3, threshold=5, average='macro')
(tensor(0.35), tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000]))
>>> multiclass_binned_auprc(input, target, num_classes=3, threshold=5, average=None)
(tensor([0.4500, 0.4000, 0.2000]),
tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000]))
>>> input = torch.tensor([[0.1, 0.2, 0.1, 0.4], [0.4, 0.2, 0.1, 0.7], [0.6, 0.1, 0.2, 0.4], [0.4, 0.2, 0.3, 0.2], [0.6, 0.2, 0.4, 0.5]])
>>> target = torch.tensor([0, 1, 2, 1, 0])
>>> threshold = torch.tensor([0.0, 0.1, 0.4, 0.7, 0.8, 1.0])
>>> multiclass_binned_auprc(input, target, num_classes=4, threshold=threshold, average='macro')
(tensor(0.24375),
tensor([0.0, 0.1, 0.4, 0.7, 0.8, 1.0]))
>>> multiclass_binned_auprc(input, target, num_classes=4, threshold=threshold, average=None)
(tensor([0.3250, 0.2000, 0.2000, 0.2500]),
tensor([0.0, 0.1, 0.4, 0.7, 0.8, 1.0]))
