MultiLabelConfusionMatrix#
- class ignite.metrics.MultiLabelConfusionMatrix(num_classes, output_transform=<function MultiLabelConfusionMatrix.<lambda>>, device=device(type='cpu'), normalized=False)[source]#
Calculates a confusion matrix for multi-labelled, multi-class data.
update must receive output of the form (y_pred, y).
y_pred must contain 0s and 1s and have the following shape (batch_size, num_classes, …). For example, y_pred[i, j] = 1 denotes that the j’th class is one of the labels of the i’th sample as predicted.
y should have the following shape (batch_size, num_classes, …) with 0s and 1s. For example, y[i, j] = 1 denotes that the j’th class is one of the labels of the i’th sample according to the ground truth.
Both y and y_pred must be torch Tensors having any of the following types: {torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64}. They must have the same dimensions.
The confusion matrix ‘M’ is of dimension (num_classes, 2, 2).
M[i, 0, 0] corresponds to count/rate of true negatives of class i
M[i, 0, 1] corresponds to count/rate of false positives of class i
M[i, 1, 0] corresponds to count/rate of false negatives of class i
M[i, 1, 1] corresponds to count/rate of true positives of class i
The classes present in M are indexed as 0, … , num_classes-1 as can be inferred from above.
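As a quick illustration of this layout, the sketch below reads the per-class counts out of an already computed matrix (filled here with the same values as the example further down); the loop and variable names are purely illustrative and not part of the API.

import torch

# A confusion matrix of shape (num_classes, 2, 2), here for num_classes = 3
# (same values as in the example below)
M = torch.tensor([[[0, 4], [0, 1]],
                  [[3, 1], [0, 1]],
                  [[1, 2], [2, 0]]])

for i in range(M.shape[0]):
    tn, fp = M[i, 0, 0].item(), M[i, 0, 1].item()
    fn, tp = M[i, 1, 0].item(), M[i, 1, 1].item()
    print(f"class {i}: TN={tn} FP={fp} FN={fn} TP={tp}")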
- Parameters
num_classes (int) – Number of classes, should be > 1.
output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
normalized (bool) – whether to normalize the confusion matrix by its sum or not.
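A minimal construction sketch tying these parameters together. The output_transform below assumes an engine whose process_function returns a dict with "y_pred" and "y" keys; that dict format is an assumption for illustration, not something the metric requires.

from ignite.metrics import MultiLabelConfusionMatrix

# Hypothetical setup: the evaluator is assumed to return {"y_pred": ..., "y": ...};
# adapt the lambda to your own output format.
metric = MultiLabelConfusionMatrix(
    num_classes=3,
    output_transform=lambda output: (output["y_pred"], output["y"]),
    device="cpu",
    normalized=True,  # report rates rather than raw counts
)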
Example
For more information on how the metric works with Engine, visit Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)

metric = MultiLabelConfusionMatrix(num_classes=3)
metric.attach(default_evaluator, "mlcm")
y_true = torch.tensor([
    [0, 0, 1],
    [0, 0, 0],
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 1],
])
y_pred = torch.tensor([
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [1, 0, 1],
    [1, 1, 0],
])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["mlcm"])

tensor([[[0, 4],
         [0, 1]],

        [[3, 1],
         [0, 1]],

        [[1, 2],
         [2, 0]]])
New in version 0.4.5.
Methods
compute – Computes the metric based on its accumulated state.
reset – Resets the metric to its initial state.
update – Updates the metric's state using the passed batch output.
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
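Outside of an Engine, the same accumulate-then-compute cycle can be driven by hand. A minimal sketch of the standalone reset/update/compute flow, with arbitrarily chosen values for illustration:

import torch
from ignite.metrics import MultiLabelConfusionMatrix

metric = MultiLabelConfusionMatrix(num_classes=2)
metric.reset()                                   # clear any accumulated state

y_pred = torch.tensor([[1, 0], [1, 1]], dtype=torch.int64)
y_true = torch.tensor([[1, 0], [0, 1]], dtype=torch.int64)
metric.update((y_pred, y_true))                  # accumulate one batch

print(metric.compute())                          # tensor of shape (2, 2, 2)

Calling compute() before any update() has been made triggers the NotComputableError described above.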