
torcheval.metrics.functional.multilabel_binned_auprc

torcheval.metrics.functional.multilabel_binned_auprc(input: Tensor, target: Tensor, num_labels: Optional[int] = None, *, threshold: Union[int, List[float], Tensor] = 100, average: Optional[str] = 'macro', optimization: str = 'vectorized') → Tuple[Tensor, Tensor]

Binned version of AUPRC (area under the precision-recall curve) for multilabel classification. Its class version is torcheval.metrics.MultilabelBinnedAUPRC.

The metric is the area under the precision/recall curve, where precision and recall are evaluated at the bucket boundaries defined by threshold.
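For intuition, the per-label computation can be sketched as follows: binarize the scores at each bucket boundary, compute precision and recall at each boundary, and accumulate the area with a left Riemann sum over the recall axis. This is an illustrative sketch, not torcheval's actual implementation, and the helper name binned_auprc_single_label is hypothetical.

```python
import torch

def binned_auprc_single_label(scores: torch.Tensor, labels: torch.Tensor,
                              thresholds: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: binned AUPRC for a single label via a left
    # Riemann sum. Not torcheval's verified internals.
    precisions, recalls = [], []
    for t in thresholds:
        pred = scores >= t                       # binarize at this bucket boundary
        tp = (pred & (labels == 1)).sum()
        fp = (pred & (labels == 0)).sum()
        fn = (~pred & (labels == 1)).sum()
        # Convention: precision is 1 when no positive predictions are made.
        precisions.append(tp / (tp + fp) if (tp + fp) > 0 else torch.tensor(1.0))
        recalls.append(tp / (tp + fn) if (tp + fn) > 0 else torch.tensor(0.0))
    p = torch.stack(precisions)
    r = torch.stack(recalls)
    # Recall is non-increasing as the threshold rises, so the area under the
    # precision/recall curve is accumulated over the recall decrements.
    return torch.sum(p[:-1] * (r[:-1] - r[1:]))

scores = torch.tensor([0.75, 0.45, 0.05, 0.05])   # predictions for one label
labels = torch.tensor([1, 0, 0, 1])
thresholds = torch.linspace(0, 1, 5)              # threshold=5 -> 5 even buckets
print(binned_auprc_single_label(scores, labels, thresholds))  # tensor(0.7500)
```

On these inputs the sketch reproduces the first per-label value from the example further below.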

Parameters:
  • input (Tensor) – Tensor of label predictions. It should be probabilities or logits with shape of (n_samples, n_labels).
  • target (Tensor) – Tensor of ground truth labels with shape of (n_samples, n_labels).
  • num_labels (int, optional) – Number of labels.
  • threshold (Tensor, int, List[float]) – Either an integer representing the number of bins, a list of thresholds, or a tensor of thresholds. The same thresholds will be used for all tasks. If threshold is a tensor, it must be 1D. If a list or tensor is given, the first element must be 0 and the last must be 1.
  • average (str, optional) –
    • 'macro' [default]:
      Calculate metrics for each label separately, and return their unweighted mean.
    • None:
      Calculate the metric for each label separately, and return the metric for every label.
  • optimization (str) – Choose the optimization to use. Accepted values: “vectorized” and “memory”. The “vectorized” optimization makes more use of vectorization but consumes more memory; on GPUs it is generally faster. The “memory” optimization uses less memory but takes more steps; on CPUs it is recommended in all cases, since it is both faster and lighter on memory.
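As a small illustration of the three accepted threshold forms, here is a hedged sketch that normalizes them into a 1D tensor of bin boundaries. The assumption that an integer denotes evenly spaced boundaries on [0, 1] (i.e. torch.linspace) mirrors, but is not guaranteed to match, torcheval's internal handling; the helper name normalize_threshold is hypothetical.

```python
import torch

def normalize_threshold(threshold) -> torch.Tensor:
    # Hypothetical helper: turn an int, list, or tensor `threshold` into a
    # 1-D tensor of bin boundaries, enforcing the documented constraints.
    if isinstance(threshold, int):
        # Assumption: an integer means evenly spaced boundaries on [0, 1].
        return torch.linspace(0, 1, threshold)
    t = torch.as_tensor(threshold, dtype=torch.float)
    assert t.ndim == 1, "threshold tensor must be 1D"
    assert t[0] == 0 and t[-1] == 1, "first element must be 0 and last must be 1"
    return t

print(normalize_threshold(5))
# tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
```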

Examples:

>>> import torch
>>> from torcheval.metrics.functional import multilabel_binned_auprc
>>> input = torch.tensor([[0.75, 0.05, 0.35], [0.45, 0.75, 0.05], [0.05, 0.55, 0.75], [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1], [0, 0, 0], [0, 1, 1], [1, 1, 1]])
>>> multilabel_binned_auprc(input, target, num_labels=3, threshold=5, average=None)
(tensor([0.7500, 0.6667, 0.9167]),
tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000]))
>>> threshold = torch.tensor([0.0, 0.1, 0.4, 0.7, 0.8, 1.0])
>>> multilabel_binned_auprc(input, target, num_labels=3, threshold=threshold, average=None)
(tensor([0.7500, 0.5833, 0.9167]),
tensor([0.0000, 0.1000, 0.4000, 0.7000, 0.8000, 1.0000]))
>>> multilabel_binned_auprc(input, target, num_labels=3, threshold=threshold, average='macro')
(tensor(0.7500),
tensor([0.0000, 0.1000, 0.4000, 0.7000, 0.8000, 1.0000]))
