
torcheval.metrics.MulticlassBinnedPrecisionRecallCurve

class torcheval.metrics.MulticlassBinnedPrecisionRecallCurve(*, num_classes: int, threshold: Union[int, List[float], Tensor] = 100, optimization: str = 'vectorized', device: Optional[device] = None)[source]

Compute the precision-recall curve at the given thresholds. Its functional version is torcheval.metrics.functional.multiclass_binned_precision_recall_curve(). See also BinaryBinnedPrecisionRecallCurve.

Parameters:
  • num_classes (int) – Number of classes.
  • threshold (Union[int, List[float], torch.Tensor], optional) – an integer representing the number of bins, a list of thresholds, or a tensor of thresholds.
  • optimization (str) – Choose the optimization to use. Accepted values: “vectorized” and “memory”. The “vectorized” optimization makes heavier use of vectorization: it consumes more memory but is faster on some hardware, e.g. modern GPUs. The “memory” optimization consumes less memory but can be significantly slower on that same hardware. On CPUs, the “memory” optimization is recommended in all cases, since it uses less memory and is also faster there.
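When threshold is given as an integer, the resulting bin boundaries appear to be evenly spaced over [0, 1], as the thresholds tensor in the example below suggests. A minimal sketch of that assumption using only torch:

```python
import torch

# Assumption: an integer `threshold` expands to evenly spaced bin
# boundaries in [0, 1]; with 10 bins this reproduces the thresholds
# tensor shown in the example output below.
num_bins = 10
thresholds = torch.linspace(0, 1, steps=num_bins)
# tensor([0.0000, 0.1111, 0.2222, ..., 0.8889, 1.0000])
```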

Examples:

>>> import torch
>>> from torcheval.metrics import MulticlassBinnedPrecisionRecallCurve
>>> metric = MulticlassBinnedPrecisionRecallCurve(num_classes=4, threshold=10)
>>> input = torch.tensor([[0.1, 0.1, 0.1, 0.1], [0.5, 0.5, 0.5, 0.5], [0.7, 0.7, 0.7, 0.7], [0.8, 0.8, 0.8, 0.8]])
>>> target = torch.tensor([0, 1, 2, 3])
>>> metric.update(input, target)
>>> metric.compute()
([tensor([0.2500, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000]),
tensor([0.2500, 0.3333, 0.3333, 0.3333, 0.3333, 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000]),
tensor([0.2500, 0.3333, 0.3333, 0.3333, 0.3333, 0.5000, 0.5000, 0.0000, 1.0000, 1.0000, 1.0000]),
tensor([0.2500, 0.3333, 0.3333, 0.3333, 0.3333, 0.5000, 0.5000, 1.0000, 1.0000, 1.0000, 1.0000])],
[tensor([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
tensor([1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.]),
tensor([1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0.]),
tensor([1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0.])],
tensor([0.0000, 0.1111, 0.2222, 0.3333, 0.4444, 0.5556, 0.6667, 0.7778, 0.8889, 1.0000]))
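To make the output above concrete, here is an illustrative one-vs-rest reimplementation of the binned computation for a single class, using only torch. This is a sketch of the semantics, not torcheval's actual implementation; it assumes precision is defined as 1 when no score clears a threshold, and that a final (precision=1, recall=0) point is appended, which is consistent with the 11-element tensors above.

```python
import torch

def binned_pr_one_class(scores, labels, thresholds):
    # One-vs-rest binned precision/recall for a single class (illustrative).
    precisions, recalls = [], []
    num_pos = labels.sum().item()
    for t in thresholds:
        pred = scores >= t                       # predict positive above threshold
        tp = (pred & labels.bool()).sum().item() # true positives at this threshold
        denom = pred.sum().item()                # predicted positives
        precisions.append(tp / denom if denom > 0 else 1.0)
        recalls.append(tp / num_pos if num_pos > 0 else 0.0)
    # Append the conventional (precision=1, recall=0) endpoint.
    precisions.append(1.0)
    recalls.append(0.0)
    return torch.tensor(precisions), torch.tensor(recalls)

scores = torch.tensor([0.1, 0.5, 0.7, 0.8])  # class-0 scores from the example
labels = torch.tensor([1, 0, 0, 0])          # target == 0, one-vs-rest
thresholds = torch.linspace(0, 1, steps=10)
p, r = binned_pr_one_class(scores, labels, thresholds)
# p and r reproduce the first precision and recall tensors in the example output.
```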
__init__(*, num_classes: int, threshold: Union[int, List[float], Tensor] = 100, optimization: str = 'vectorized', device: Optional[device] = None) None[source]

Initialize a metric object and its internal states.

Use self._add_state() to initialize state variables of your metric class. The state variables should be either torch.Tensor, a list of torch.Tensor, or a dictionary with torch.Tensor as values.

Methods

__init__(*, num_classes[, threshold, ...]) Initialize a metric object and its internal states.
compute() Return the binned precision-recall curve as a tuple of per-class precision tensors, per-class recall tensors, and the thresholds tensor.
load_state_dict(state_dict[, strict]) Loads metric state variables from state_dict.
merge_state(metrics) Implement this method to update the current metric's state variables to be the merged states of the current metric and input metrics.
reset() Reset the metric state variables to their default value.
state_dict() Save metric state variables in state_dict.
to(device, *args, **kwargs) Move tensors in metric state variables to device.
update(input, target) Update states with the ground truth labels and predictions.
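merge_state() is the hook that makes binned metrics work across workers: because the state is a set of per-threshold counts, merging reduces to summing counts before the curve is computed. A sketch of that idea using only torch (the count names here are illustrative, not torcheval's internal state names):

```python
import torch

def update_counts(scores, labels, thresholds):
    # Per-threshold counts for a single binary/one-vs-rest class:
    # true positives, predicted positives, and actual positives.
    tp = torch.stack([((scores >= t) & labels.bool()).sum() for t in thresholds])
    pp = torch.stack([(scores >= t).sum() for t in thresholds])
    return tp, pp, labels.sum()

thresholds = torch.linspace(0, 1, steps=5)
# Two "workers" accumulate counts over different batches.
tp1, pp1, ap1 = update_counts(torch.tensor([0.9, 0.2]), torch.tensor([1, 0]), thresholds)
tp2, pp2, ap2 = update_counts(torch.tensor([0.6]), torch.tensor([1]), thresholds)
# Merged state: sum the counts, then derive the curve once from the totals.
tp, pp, ap = tp1 + tp2, pp1 + pp2, ap1 + ap2
recall = tp / ap
```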

Attributes

device The last input device of Metric.to().
