PrecisionRecallCurve#
- class ignite.contrib.metrics.PrecisionRecallCurve(output_transform=<function PrecisionRecallCurve.<lambda>>, check_compute_fn=False)[source]#
Compute precision-recall pairs for different probability thresholds for a binary classification task by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.precision_recall_curve.
- Parameters
output_transform (Callable) – a callable that is used to transform the Engine's process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.

check_compute_fn (bool) – Default False. If True, precision_recall_curve is run on the first batch of data to ensure there are no issues. The user will be warned if any issues arise while computing the function.
Note
PrecisionRecallCurve expects y to be comprised of 0’s and 1’s. y_pred must either be probability estimates or confidence values. To apply an activation to y_pred, use output_transform as shown below:
def sigmoid_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    return y_pred, y

avg_precision = PrecisionRecallCurve(sigmoid_output_transform)
Examples
y_pred = torch.tensor([0.0474, 0.5987, 0.7109, 0.9997])
y_true = torch.tensor([0, 0, 1, 1])
prec_recall_curve = PrecisionRecallCurve()
prec_recall_curve.attach(default_evaluator, 'prec_recall_curve')
state = default_evaluator.run([[y_pred, y_true]])
print("Precision", [round(i, 4) for i in state.metrics['prec_recall_curve'][0].tolist()])
print("Recall", [round(i, 4) for i in state.metrics['prec_recall_curve'][1].tolist()])
print("Thresholds", [round(i, 4) for i in state.metrics['prec_recall_curve'][2].tolist()])
Precision [1.0, 1.0, 1.0]
Recall [1.0, 0.5, 0.0]
Thresholds [0.7109, 0.9997]
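Since the metric simply accumulates predictions over the epoch and then delegates to sklearn.metrics.precision_recall_curve, the same result can be reproduced with scikit-learn directly (a minimal sketch, assuming scikit-learn is installed):

```python
# Reproduce the example above with scikit-learn directly.
# PrecisionRecallCurve gathers y_pred and y over the epoch and then calls
# sklearn.metrics.precision_recall_curve on the accumulated values.
from sklearn.metrics import precision_recall_curve

y_true = [0, 0, 1, 1]
y_score = [0.0474, 0.5987, 0.7109, 0.9997]

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print("Precision", [round(p, 4) for p in precision.tolist()])    # [1.0, 1.0, 1.0]
print("Recall", [round(r, 4) for r in recall.tolist()])          # [1.0, 0.5, 0.0]
print("Thresholds", [round(t, 4) for t in thresholds.tolist()])  # [0.7109, 0.9997]
```

Note that precision and recall always have one element more than thresholds: scikit-learn appends a final point with precision 1.0 and recall 0.0 so the curve is complete.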
Methods