
ROC_AUC

class ignite.contrib.metrics.ROC_AUC(output_transform=<function ROC_AUC.<lambda>>, check_compute_fn=False, device=device(type='cpu'))[source]

Computes the Area Under the Receiver Operating Characteristic Curve (ROC AUC) by accumulating predictions and ground truth over an epoch and applying sklearn.metrics.roc_auc_score.
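
Conceptually, this amounts to collecting all predictions and targets for the epoch and calling scikit-learn directly. A rough sketch with made-up tensors (not the library's internal code):

import torch
from sklearn.metrics import roc_auc_score

# Illustrative values standing in for what the metric accumulates over an epoch.
y_pred = torch.tensor([0.2, 0.8, 0.6, 0.9])
y = torch.tensor([0, 1, 1, 0])

# Same quantity the metric reports at the end of the epoch.
score = roc_auc_score(y.numpy(), y_pred.numpy())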

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs (see the sketch after this parameter list).

  • check_compute_fn (bool) – Default False. If True, sklearn.metrics.roc_auc_score is run on the first batch of data to ensure there are no issues. The user is warned if any issues arise while computing the function.

  • device (Union[str, device]) – optional device specification for internal storage.
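
For instance, if the engine’s process_function returned a dictionary, a hypothetical output_transform could pick out the pair the metric should consume (the dictionary keys below are assumptions for illustration, not part of the API):

from ignite.contrib.metrics import ROC_AUC

def select_output(output):
    # ``output`` is whatever the engine's process_function returned;
    # here we assume a dict with hypothetical keys "y_pred" and "y".
    return output["y_pred"], output["y"]

roc_auc = ROC_AUC(output_transform=select_output)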

Note

ROC_AUC expects y to consist of 0’s and 1’s. y_pred must be either probability estimates or confidence values. To apply an activation to y_pred, use output_transform as shown below:

import torch
from ignite.contrib.metrics import ROC_AUC

def sigmoid_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    return y_pred, y

roc_auc = ROC_AUC(output_transform=sigmoid_output_transform)

Examples
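
The example below assumes a default_evaluator engine whose process function simply returns each (y_pred, y_true) batch unchanged; a minimal sketch of such a setup:

import torch
from ignite.engine import Engine
from ignite.contrib.metrics import ROC_AUC

def eval_step(engine, batch):
    # Pass the (y_pred, y_true) batch straight through to attached metrics.
    return batch

default_evaluator = Engine(eval_step)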

# The ``output_transform`` arg of the metric can be used to perform a sigmoid on the ``y_pred``.
roc_auc = ROC_AUC()
roc_auc.attach(default_evaluator, 'roc_auc')
y_pred = torch.tensor([[0.0474], [0.5987], [0.7109], [0.9997]])
y_true = torch.tensor([[0], [0], [1], [0]])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['roc_auc'])
0.6666...

Methods