torcheval.metrics.MulticlassAUPRC

class torcheval.metrics.MulticlassAUPRC(*, num_classes: int, average: Optional[str] = 'macro', device: Optional[device] = None)

Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for multiclass classification.
Precision is defined as \(\frac{T_p}{T_p+F_p}\): the probability that a positive prediction from the model is a true positive. Recall is defined as \(\frac{T_p}{T_p+F_n}\): the probability that a true positive is predicted to be positive by the model.
The precision-recall curve plots recall on the x axis and precision on the y axis, both of which are bounded between 0 and 1. This metric returns the area under that curve. An area near one means the model supports a threshold which correctly identifies a high percentage of the true positives while rejecting enough negatives that most positive predictions are true positives.
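As a concrete illustration of the two formulas above (plain tensor arithmetic, not part of the torcheval API), precision and recall can be computed directly from hypothetical true-positive, false-positive, and false-negative counts:

    import torch

    # Hypothetical counts for a single class (illustration only).
    tp = torch.tensor(8.0)  # true positives
    fp = torch.tensor(2.0)  # false positives
    fn = torch.tensor(4.0)  # false negatives

    precision = tp / (tp + fp)  # 8 / 10 = 0.8: P(actually positive | predicted positive)
    recall = tp / (tp + fn)     # 8 / 12 = 0.6667: P(predicted positive | actually positive)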
In the multiclass version of AUPRC, the target tensor is 1-dimensional and contains one integer entry per example in the input tensor, giving that example's class. Each class is considered independently in a one-vs-all fashion: examples of that class are labeled condition true, and examples of all other classes are labeled condition false.
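To make the one-vs-all treatment concrete, here is a minimal sketch (plain torch code, not torcheval internals) that binarizes an integer class target one class at a time:

    import torch

    target = torch.tensor([0, 2, 1, 1])  # one integer class label per example

    # One-vs-all binarization: for class c, examples of class c are condition
    # true (1) and examples of every other class are condition false (0).
    for c in range(3):
        binary_target = (target == c).long()
        print(c, binary_target)
    # 0 tensor([1, 0, 0, 0])
    # 1 tensor([0, 0, 1, 1])
    # 2 tensor([0, 1, 0, 0])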
The result of N-class multiclass AUPRC with average=None is equivalent to binary AUPRC with N tasks if:
- the input is transposed; in binary classification, examples are associated with columns, whereas in multiclass classification they are associated with rows.
- the target is translated from the form [1, 0, 1] to the form [[0, 1, 0], [1, 0, 1]].
See the sketch below and the examples at the end of this section for more details on the connection between Multiclass and Binary AUPRC.
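The two transformations in the list above can be written directly. A minimal sketch using torch.nn.functional.one_hot (this is not necessarily how torcheval implements the equivalence internally); the values match the "Connection to BinaryAUPRC" example below:

    import torch
    import torch.nn.functional as F

    input = torch.tensor([[0.1, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.1, 0.2, 0.7],
                          [0.0, 0.0, 1.0]])
    target = torch.tensor([0, 1, 2, 2])

    # Transpose: rows become tasks (classes), columns become examples.
    binary_input = input.T                               # shape (3, 4)
    # One-hot encode the integer targets, then transpose the same way.
    binary_target = F.one_hot(target, num_classes=3).T   # shape (3, 4)
    # binary_target:
    # tensor([[1, 0, 0, 0],
    #         [0, 1, 0, 0],
    #         [0, 0, 1, 1]])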
The functional version of this metric is torcheval.metrics.functional.multiclass_auprc().

See also: BinaryAUPRC, MultilabelAUPRC
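For one-off computations the functional form avoids the update()/compute() lifecycle. A minimal sketch of the call, assuming it accepts the same num_classes argument as the class constructor:

    import torch
    from torcheval.metrics.functional import multiclass_auprc

    input = torch.tensor([[0.1, 0.1, 0.1],
                          [0.5, 0.5, 0.5],
                          [0.7, 0.7, 0.7],
                          [0.8, 0.8, 0.8]])
    target = torch.tensor([0, 2, 1, 1])

    # Single call: no state object is created or retained.
    score = multiclass_auprc(input, target, num_classes=3)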
Parameters:
- num_classes (int) – Number of classes.
- average (str, optional) –
  - 'macro' [default]: Calculate metrics for each class separately, and return their unweighted mean.
  - None: Calculate the metric for each class separately, and return the metric for every class.
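To make the relationship between the two options concrete: with average=None the metric returns one AUPRC per class, and the 'macro' result is their unweighted mean. A minimal sketch (the per-class values come from the documented example below):

    import torch
    from torcheval.metrics import MulticlassAUPRC

    input = torch.tensor([[0.1, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.1, 0.2, 0.7],
                          [0.0, 0.0, 1.0]])
    target = torch.tensor([0, 1, 2, 2])

    per_class = MulticlassAUPRC(num_classes=3, average=None)
    per_class.update(input, target)
    scores = per_class.compute()   # tensor([0.5000, 1.0000, 1.0000])

    macro = MulticlassAUPRC(num_classes=3)  # average='macro' is the default
    macro.update(input, target)
    # macro.compute() should equal scores.mean()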
Examples:

>>> import torch
>>> from torcheval.metrics import MulticlassAUPRC
>>> metric = MulticlassAUPRC(num_classes=3)
>>> input = torch.tensor([[0.1, 0.1, 0.1], [0.5, 0.5, 0.5], [0.7, 0.7, 0.7], [0.8, 0.8, 0.8]])
>>> target = torch.tensor([0, 2, 1, 1])
>>> metric.update(input, target)
>>> metric.compute()
tensor(0.5278)

>>> metric = MulticlassAUPRC(num_classes=3)
>>> input = torch.tensor([[0.5, .2, 3], [2, 1, 6]])
>>> target = torch.tensor([0, 2])
>>> metric.update(input, target)
>>> metric.compute()
tensor(0.5000)
>>> input = torch.tensor([[5, 3, 2], [.2, 2, 3], [3, 3, 3]])
>>> target = torch.tensor([2, 2, 1])
>>> metric.update(input, target)
>>> metric.compute()
tensor(0.4833)

Connection to BinaryAUPRC:

>>> metric = MulticlassAUPRC(num_classes=3, average=None)
>>> input = torch.tensor([[0.1, 0, 0], [0, 1, 0], [0.1, 0.2, 0.7], [0, 0, 1]])
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric.update(input, target)
>>> metric.compute()
tensor([0.5000, 1.0000, 1.0000])

The above is equivalent to:

>>> from torcheval.metrics import BinaryAUPRC
>>> metric = BinaryAUPRC(num_tasks=3)
>>> input = torch.tensor([[0.1, 0, 0.1, 0], [0, 1, 0.2, 0], [0, 0, 0.7, 1]])
>>> target = torch.tensor([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]])
>>> metric.update(input, target)
>>> metric.compute()
tensor([0.5000, 1.0000, 1.0000])
__init__(*, num_classes: int, average: Optional[str] = 'macro', device: Optional[device] = None) → None

Initialize a metric object and its internal states.

Use self._add_state() to initialize state variables of your metric class. The state variables should be either torch.Tensor, a list of torch.Tensor, or a dictionary with torch.Tensor as values.
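As an illustration of that contract, here is a minimal sketch of a custom metric. It assumes the Metric base class accepts a device keyword and that self._add_state(name, default) registers a tensor state accessible as an attribute; check these details against the torcheval source before relying on them.

    import torch
    from torcheval.metrics import Metric

    class MeanScore(Metric[torch.Tensor]):
        # Hypothetical metric (not part of torcheval): tracks the mean of
        # all scores seen so far, using tensor states as described above.
        def __init__(self, *, device=None):
            super().__init__(device=device)
            # Assumed signature: _add_state(name, default) registers a
            # state tensor that reset() restores to this default.
            self._add_state("total", torch.tensor(0.0, device=self.device))
            self._add_state("count", torch.tensor(0.0, device=self.device))

        def update(self, scores: torch.Tensor):
            self.total += scores.sum()
            self.count += scores.numel()
            return self

        def compute(self) -> torch.Tensor:
            return self.total / self.count

        def merge_state(self, metrics):
            # Fold in the state of other instances (e.g., other processes).
            for m in metrics:
                self.total += m.total
                self.count += m.count
            return self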
Methods

__init__(*, num_classes[, average, device])
    Initialize a metric object and its internal states.
compute()
    Implement this method to compute and return the final metric value from state variables.
load_state_dict(state_dict[, strict])
    Loads metric state variables from state_dict.
merge_state(metrics)
    Implement this method to update the current metric's state variables to be the merged states of the current metric and input metrics.
reset()
    Reset the metric state variables to their default value.
state_dict()
    Save metric state variables in state_dict.
to(device, *args, **kwargs)
    Move tensors in metric state variables to device.
update(input, target)
    Update states with the ground truth labels and predictions.
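A minimal sketch of the typical lifecycle of these methods on this metric (inputs reuse values from the examples above):

    import torch
    from torcheval.metrics import MulticlassAUPRC

    metric = MulticlassAUPRC(num_classes=3)

    # update() accumulates state batch by batch; compute() reads the
    # result out of the accumulated state without clearing it.
    for input, target in [
        (torch.tensor([[0.5, 0.2, 3.0], [2.0, 1.0, 6.0]]), torch.tensor([0, 2])),
        (torch.tensor([[5.0, 3.0, 2.0], [0.2, 2.0, 3.0]]), torch.tensor([2, 2])),
    ]:
        metric.update(input, target)
    result = metric.compute()

    # reset() returns the state variables to their defaults so the same
    # object can be reused for a fresh evaluation pass.
    metric.reset()

    # to() moves state tensors to another device, e.g. metric.to("cuda")
    # on a machine with a GPU.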
Attributes

device
    The last input device of Metric.to().