ignite.metrics.ClassificationReport(beta=1, output_dict=False, output_transform=<function <lambda>>, device=device(type='cpu'), is_multilabel=False, labels=None)

Build a text report showing the main classification metrics. The report is similar in functionality to scikit-learn's classification_report, but the underlying implementation does not use the sklearn function.

  • beta (int) – weight of precision in the harmonic mean when computing the F-beta score

  • output_dict (bool) – If True, return output as dict, otherwise return a str

  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.

  • is_multilabel (bool) – If True, the tensors are assumed to be multilabel.

  • device (Union[str, torch.device]) – optional device specification for internal storage.

  • labels (Optional[List[str]]) – Optional list of label indices to include in the report

from ignite.engine import Engine
from ignite.metrics import ClassificationReport

def process_function(engine, batch):
    # ... compute predictions and targets for the batch
    return y_pred, y

engine = Engine(process_function)
metric = ClassificationReport(output_dict=True)
metric.attach(engine, "cr")
state = engine.run(data)  # `data` is your iterable of batches
res = state.metrics["cr"]
# result should be like
{
  "0": {
    "precision": 0.4891304347826087,
    "recall": 0.5056179775280899,
    "f1-score": 0.497237569060773
  },
  "1": {
    "precision": 0.5157232704402516,
    "recall": 0.4992389649923896,
    "f1-score": 0.507347254447022
  },
  "macro avg": {
    "precision": 0.5024268526114302,
    "recall": 0.5024284712602398,
    "f1-score": 0.5022924117538975
  }
}
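
Since the report mirrors scikit-learn's classification_report, the numbers in the dict above can be reproduced from the confusion counts. The following is a minimal plain-Python sketch (not the ignite implementation) of per-class precision, recall and F-beta plus the macro average; the function name `classification_report_dict` is hypothetical:

```python
def classification_report_dict(y_true, y_pred, beta=1):
    # Sketch of the report's computation: per-class precision,
    # recall and F-beta, plus an unweighted "macro avg" entry.
    labels = sorted(set(y_true) | set(y_pred))
    report = {}
    precisions, recalls, fbetas = [], [], []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        b2 = beta ** 2
        # F-beta is the weighted harmonic mean of precision and recall.
        fbeta = ((1 + b2) * precision * recall / (b2 * precision + recall)
                 if precision + recall else 0.0)
        report[str(label)] = {"precision": precision,
                              "recall": recall,
                              "f1-score": fbeta}
        precisions.append(precision)
        recalls.append(recall)
        fbetas.append(fbeta)
    n = len(labels)
    report["macro avg"] = {"precision": sum(precisions) / n,
                           "recall": sum(recalls) / n,
                           "f1-score": sum(fbetas) / n}
    return report
```

With `output_dict=False` (the default), ignite renders the same information as a string instead of a nested dict.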