torcheval.metrics.MultilabelAUPRC

class torcheval.metrics.MultilabelAUPRC(*, num_labels: int, average: Optional[str] = 'macro', device: Optional[device] = None)[source]

Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for multilabel classification.

Precision is defined as \(\frac{T_p}{T_p+F_p}\): the probability that a positive prediction from the model is a true positive. Recall is defined as \(\frac{T_p}{T_p+F_n}\): the probability that a true positive is predicted to be positive by the model.
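For instance, with hypothetical counts of 8 true positives, 2 false positives, and 4 false negatives:

>>> tp, fp, fn = 8, 2, 4
>>> tp / (tp + fp)  # precision: 8 of the 10 positive predictions are correct
0.8
>>> tp / (tp + fn)  # recall: 8 of the 12 actual positives are found
0.6666666666666666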

The precision-recall curve plots recall on the x-axis and precision on the y-axis, both of which are bounded between 0 and 1. This metric returns the area under that curve. If the area is near one, the model supports a threshold that correctly identifies a high percentage of true positives while rejecting enough negatives that most positive predictions are true positives.
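For intuition, this area (the average precision) can be computed as \(\sum_n (R_n - R_{n-1}) P_n\) over predictions sorted by descending score. A minimal sketch with hypothetical scores, independent of this class's internals:

>>> import torch
>>> scores = torch.tensor([0.8, 0.4, 0.35, 0.1])  # hypothetical, sorted descending
>>> labels = torch.tensor([1, 0, 1, 0])           # ground truth at each score
>>> tp = torch.cumsum(labels, dim=0)              # true positives at each threshold
>>> precision = tp / torch.arange(1, 5)           # tp / (all predicted positive)
>>> recall = tp / labels.sum()                    # tp / (all actual positives)
>>> torch.sum((recall - torch.cat([torch.zeros(1), recall[:-1]])) * precision)
tensor(0.8333)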

In the multilabel version of AUPRC, the input and target tensors are 2-dimensional. The rows of each tensor are associated with a particular example and the columns are associated with a particular class.

For the target tensor, the entry in the r’th row and c’th column (r and c are 0-indexed) is 1 if the r’th example belongs to the c’th class, and 0 if not. For the input tensor, the entry in the same position is the output of the classification model predicting the inclusion of the r’th example in the c’th class. Note that in the multilabel setting, multiple labels are allowed to apply to a single sample. This stands in contrast to the multiclass setting, in which there may be more than 2 distinct classes but each sample must have exactly one class.
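For example, with two examples and three classes (hypothetical values), example 0 belongs to classes 0 and 2, and example 1 belongs only to class 1:

>>> import torch
>>> target = torch.tensor([[1, 0, 1], [0, 1, 0]])             # rows: examples, columns: classes
>>> input = torch.tensor([[0.9, 0.2, 0.8], [0.1, 0.7, 0.4]])  # model scores in the same layout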

The result of N-label multilabel AUPRC with average=None is equivalent to binary AUPRC with N tasks if:

  1. the input is transposed; in binary classification, examples are associated with columns, whereas in multilabel classification they are associated with rows, and
  2. the target is transposed for the same reason.

See examples below for more details on the connection between Multilabel and Binary AUPRC.

The functional version of this metric is torcheval.metrics.functional.multilabel_auprc(). See also BinaryAUPRC and MulticlassAUPRC.

Parameters:
  • num_labels (int) – Number of labels.
  • average (str, optional) –
    • 'macro' [default]:
      Calculate metrics for each label separately, and return their unweighted mean.
    • None:
      Calculate the metric for each label separately, and return the metric for every label.

Examples:

>>> import torch
>>> from torcheval.metrics import MultilabelAUPRC
>>> metric = MultilabelAUPRC(num_labels=3, average=None)
>>> input = torch.tensor([[0.75, 0.05, 0.35], [0.45, 0.75, 0.05], [0.05, 0.55, 0.75], [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1], [0, 0, 0], [0, 1, 1], [1, 1, 1]])
>>> metric.update(input, target)
>>> metric.compute()
tensor([0.7500, 0.5833, 0.9167])

>>> metric = MultilabelAUPRC(num_labels=3, average='macro')
>>> input = torch.tensor([[0.75, 0.05, 0.35], [0.05, 0.55, 0.75]])
>>> target = torch.tensor([[1, 0, 1], [0, 1, 1]])
>>> metric.update(input, target)
>>> metric.compute()
tensor(1.)
>>> input = torch.tensor([[0.45, 0.75, 0.05], [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[0, 0, 0], [1, 1, 1]])
>>> metric.update(input, target)
>>> metric.compute()
tensor(0.7500)
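
Note that update() accumulates state, so compute() above reflects both batches. To start fresh, e.g. for a new epoch, reset() restores the initial state; re-running only the first batch then reproduces the first result:

>>> metric.reset()
>>> metric.update(torch.tensor([[0.75, 0.05, 0.35], [0.05, 0.55, 0.75]]),
...               torch.tensor([[1, 0, 1], [0, 1, 1]]))
>>> metric.compute()
tensor(1.)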

Connection to BinaryAUPRC:
>>> metric = MultilabelAUPRC(num_labels=3, average=None)
>>> input = torch.tensor([[0.1, 0, 0], [0, 1, 0], [0.1, 0.2, 0.7], [0, 0, 1]])
>>> target = torch.tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]])
>>> metric.update(input, target)
>>> metric.compute()
tensor([0.5000, 1.0000, 1.0000])

The above is equivalent to:
>>> from torcheval.metrics import BinaryAUPRC
>>> metric = BinaryAUPRC(num_tasks=3)
>>> input = torch.tensor([[0.1, 0, 0.1, 0], [0, 1, 0.2, 0], [0, 0, 0.7, 1]])
>>> target = torch.tensor([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]])
>>> metric.update(input, target)
>>> metric.compute()
tensor([0.5000, 1.0000, 1.0000])
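
Equivalently, instead of writing the transposed tensors out by hand, the multilabel tensors from above can be transposed directly (a sketch of the same computation):

>>> input = torch.tensor([[0.1, 0, 0], [0, 1, 0], [0.1, 0.2, 0.7], [0, 0, 1]])
>>> target = torch.tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]])
>>> metric = BinaryAUPRC(num_tasks=3)
>>> metric.update(input.T, target.T)
>>> metric.compute()
tensor([0.5000, 1.0000, 1.0000])
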
__init__(*, num_labels: int, average: Optional[str] = 'macro', device: Optional[device] = None) → None[source]

Initialize a metric object and its internal states.

Use self._add_state() to initialize state variables of your metric class. The state variables should be a torch.Tensor, a list of torch.Tensor, or a dictionary with torch.Tensor values.
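As an illustration, here is a minimal, hypothetical custom metric (SumMetric is not part of torcheval) that registers a single running-sum state; this sketch assumes the Metric base class exported by torcheval.metrics:

>>> import torch
>>> from torcheval.metrics import Metric
>>> class SumMetric(Metric):
...     def __init__(self, *, device=None):
...         super().__init__(device=device)
...         self._add_state("total", torch.tensor(0.0, device=self.device))
...     def update(self, x):
...         self.total += x.sum().to(self.device)
...         return self
...     def compute(self):
...         return self.total
...     def merge_state(self, metrics):
...         for m in metrics:
...             self.total += m.total.to(self.device)
...         return self
...
>>> SumMetric().update(torch.tensor([1.0, 2.0])).compute()
tensor(3.)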

Methods

__init__(*, num_labels[, average, device]) Initialize a metric object and its internal states.
compute() Implement this method to compute and return the final metric value from state variables.
load_state_dict(state_dict[, strict]) Loads metric state variables from state_dict.
merge_state(metrics) Implement this method to update the current metric's state variables to be the merged states of the current metric and input metrics.
reset() Reset the metric state variables to their default value.
state_dict() Save metric state variables in state_dict.
to(device, *args, **kwargs) Move tensors in metric state variables to device.
update(input, target) Update states with the ground truth labels and predictions.
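
For example, merge_state() combines metrics accumulated on separate data shards. Merging the two batches from the macro-average example above should reproduce the combined result (a sketch; in practice each metric would live on a different process):

>>> metric_a = MultilabelAUPRC(num_labels=3, average='macro')
>>> metric_a.update(torch.tensor([[0.75, 0.05, 0.35], [0.05, 0.55, 0.75]]),
...                 torch.tensor([[1, 0, 1], [0, 1, 1]]))
>>> metric_b = MultilabelAUPRC(num_labels=3, average='macro')
>>> metric_b.update(torch.tensor([[0.45, 0.75, 0.05], [0.05, 0.65, 0.05]]),
...                 torch.tensor([[0, 0, 0], [1, 1, 1]]))
>>> metric_a.merge_state([metric_b])
>>> metric_a.compute()
tensor(0.7500)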

Attributes

device The last input device of Metric.to().
