RocCurve

class ignite.contrib.metrics.RocCurve(output_transform=<function RocCurve.<lambda>>, check_compute_fn=False)

Compute the Receiver Operating Characteristic (ROC) curve for a binary classification task by accumulating predictions and ground truth during an epoch and applying sklearn.metrics.roc_curve.

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs (see the sketch after this parameter list).

  • check_compute_fn (bool) – Default: False. If True, sklearn.metrics.roc_curve is run on the first batch of data to make sure there are no issues; the user is warned if any issue arises while computing the function.
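
For illustration, the sketch below is not part of the documented example: the dictionary keys are hypothetical, and it simply shows an output_transform that picks the prediction/target pair out of a dict-style engine output, together with check_compute_fn=True.

from ignite.contrib.metrics import RocCurve

def extract_binary_output(output):
    # Hypothetical keys for a multi-output model's engine output.
    return output["y_pred_binary"], output["y_true"]

roc_curve_metric = RocCurve(
    output_transform=extract_binary_output,
    check_compute_fn=True,  # run sklearn.metrics.roc_curve on the first batch as a sanity check
)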

Note

RocCurve expects y to contain only 0’s and 1’s. y_pred must be either probability estimates or confidence values. To apply an activation to y_pred, use output_transform as shown below:

import torch
from ignite.contrib.metrics import RocCurve

def sigmoid_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)  # turn raw scores into probability estimates
    return y_pred, y

roc_curve_metric = RocCurve(sigmoid_output_transform)

Examples

roc_auc = RocCurve()
# The ``output_transform`` arg of the metric can be used to perform a sigmoid on ``y_pred``.
# ``default_evaluator`` is the helper Engine used throughout the Ignite metrics examples.
roc_auc.attach(default_evaluator, 'roc_auc')
y_pred = torch.tensor([0.0474, 0.5987, 0.7109, 0.9997])
y_true = torch.tensor([0, 0, 1, 0])
state = default_evaluator.run([[y_pred, y_true]])
print("FPR", [round(i, 3) for i in state.metrics['roc_auc'][0].tolist()])
print("TPR", [round(i, 3) for i in state.metrics['roc_auc'][1].tolist()])
print("Thresholds", [round(i, 3) for i in state.metrics['roc_auc'][2].tolist()])

FPR [0.0, 0.333, 0.333, 1.0]
TPR [0.0, 0.0, 1.0, 1.0]
Thresholds [2.0, 1.0, 0.711, 0.047]
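
Because the metric stores the raw curve, downstream summaries are straightforward. As a minimal sketch (assuming scikit-learn and matplotlib are installed; this post-processing is not part of RocCurve itself), the accumulated FPR/TPR values can be turned into an AUC value or a plot:

import matplotlib.pyplot as plt
from sklearn.metrics import auc

# Depending on the Ignite version, the stored curve may be torch tensors or
# NumPy arrays; both work below for CPU data.
fpr, tpr, thresholds = state.metrics['roc_auc']
roc_auc_value = auc(fpr, tpr)  # area under the accumulated ROC curve

plt.plot(fpr, tpr, label=f"ROC curve (AUC = {roc_auc_value:.3f})")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()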

Methods