Metric

class ignite.metrics.metric.Metric(output_transform=<function Metric.<lambda>>, device=device(type='cpu'), skip_unrolling=False)

Base class for all Metrics.

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs; a short output_transform sketch follows the skip_unrolling example below. By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.

  • skip_unrolling (bool) –

    specifies whether the output should be unrolled before being fed to the update method. This should be true for multi-output models, for example, when y_pred contains multiple outputs as (y_pred_a, y_pred_b). Alternatively, output_transform can be used to handle this.

    Examples

    The following example shows a custom loss metric that expects input from a multi-output model.

    from typing import Tuple

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    from ignite.engine import create_supervised_evaluator
    from ignite.metrics import Loss
    
    class MyLoss(nn.Module):
        def __init__(self, ca: float = 1.0, cb: float = 1.0) -> None:
            super().__init__()
            self.ca = ca
            self.cb = cb
    
        def forward(self,
                    y_pred: Tuple[torch.Tensor, torch.Tensor],
                    y_true: Tuple[torch.Tensor, torch.Tensor]) -> torch.Tensor:
            a_true, b_true = y_true
            a_pred, b_pred = y_pred
            return self.ca * F.mse_loss(a_pred, a_true) + self.cb * F.cross_entropy(b_pred, b_true)
    
    
    def prepare_batch(batch, device, non_blocking):
        return torch.rand(4, 1), (torch.rand(4, 1), torch.rand(4, 2))
    
    
    class MyModel(nn.Module):
    
        def forward(self, x):
            return torch.rand(4, 1), torch.rand(4, 2)
    
    
    model = MyModel()
    
    device = "cpu"
    loss = MyLoss(0.5, 1.0)
    metrics = {
        "Loss": Loss(loss, skip_unrolling=True)
    }
    train_evaluator = create_supervised_evaluator(model, metrics, device, prepare_batch=prepare_batch)
    
    
    data = range(10)
    train_evaluator.run(data)
    train_evaluator.state.metrics["Loss"]
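
    As an alternative to skip_unrolling, a standard metric can be pointed at a single output of a multi-output model via output_transform. A minimal sketch, not part of the original example; the indexing below assumes engine.state.output has the form ((y_pred_a, y_pred_b), (y_a, y_b)):

    from ignite.metrics import Accuracy

    # Hypothetical: select the second head of a multi-output model's
    # output before feeding it to a standard metric.
    accuracy = Accuracy(
        output_transform=lambda out: (out[0][1], out[1][1])
    )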
    

required_output_keys

Defines the keys required in engine.state.output when the latter is a dictionary. By default, ("y_pred", "y"). This is useful for custom metrics that require arguments other than predictions y_pred and targets y. See the example below.

Type

Optional[Tuple]

Examples

Let’s implement a custom metric that requires y_pred, y and x as input to its update function. The example below shows how to set up a standard metric like Accuracy together with the custom metric, using an evaluator created with the create_supervised_evaluator() method.

For more information on how metric works with Engine, visit Attach Engine API.

# https://discuss.pytorch.org/t/how-access-inputs-in-custom-ignite-metric/91221/5

import torch
import torch.nn as nn

from ignite.metrics import Metric, Accuracy
from ignite.engine import create_supervised_evaluator

class CustomMetric(Metric):

    required_output_keys = ("y_pred", "y", "x")

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def update(self, output):
        y_pred, y, x = output
        # ...

    def reset(self):
        # ...
        pass

    def compute(self):
        # ...
        pass

model = ...

metrics = {
    "Accuracy": Accuracy(),
    "CustomMetric": CustomMetric()
}

evaluator = create_supervised_evaluator(
    model,
    metrics=metrics,
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred}
)

data = ...
res = evaluator.run(data)

Changed in version 0.4.2: required_output_keys became a public attribute.

Changed in version 0.5.1: skip_unrolling argument is added.

Methods

attach – Attaches current metric to provided engine.

completed – Helper method to compute metric's value and put it into the engine.

compute – Computes the metric based on its accumulated state.

detach – Detaches current metric from the engine so that no metric computation is done during the run.

is_attached – Checks if current metric is attached to provided engine.

iteration_completed – Helper method to update metric's computation.

load_state_dict – Replaces the internal state of the class with the provided state dict data.

reset – Resets the metric to its initial state.

started – Helper method to start data gathering for metric's computation.

state_dict – Returns a state dict with attributes of the metric specified in its _state_dict_all_req_keys attribute.

update – Updates the metric's state using the passed batch output.

attach(engine, name, usage=<ignite.metrics.metric.EpochWise object>)

Attaches current metric to provided engine. On the end of engine’s run, engine.state.metrics dictionary will contain computed metric’s value under provided name.

Parameters
  • engine (Engine) – the engine to which the metric must be attached

  • name (str) – the name of the metric to attach

  • usage (Union[str, MetricUsage]) – the usage of the metric. Valid string values should be ‘epoch_wise’ (default) or ‘batch_wise’.

Return type

None

Examples

metric = ...
metric.attach(engine, "mymetric")

assert "mymetric" in engine.run(data).metrics

assert metric.is_attached(engine)

Example with usage:

metric = ...
metric.attach(engine, "mymetric", usage=BatchWise.usage_name)

assert "mymetric" in engine.run(data).metrics

assert metric.is_attached(engine, usage=BatchWise.usage_name)

completed(engine, name)

Helper method to compute the metric’s value and put it into the engine. It is automatically attached to the engine with attach(). If the metric’s value is a torch tensor, it is explicitly moved to the CPU device.

Parameters
  • engine (Engine) – the engine to which the metric must be attached

  • name (str) – the name of the metric used as key in dict engine.state.metrics

Return type

None

Changed in version 0.4.3: Added support for dict values in metrics results.

Changed in version 0.4.5: metric’s value is put on CPU if torch tensor.

abstract compute()

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.
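
Examples

A minimal sketch, not from the original docs, of a compute() that returns a Mapping; given the flattening behavior described above, each returned key appears directly in engine.state.metrics alongside the metric's own name:

from ignite.metrics import Metric

class MinMaxPred(Metric):  # hypothetical illustrative metric
    def reset(self):
        self._min = float("inf")
        self._max = float("-inf")

    def update(self, output):
        y_pred, y = output
        self._min = min(self._min, float(y_pred.min()))
        self._max = max(self._max, float(y_pred.max()))

    def compute(self):
        # Returning a Mapping: its keys are (shallow) flattened into
        # engine.state.metrics when completed() is called.
        return {"min_pred": self._min, "max_pred": self._max}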

detach(engine, usage=<ignite.metrics.metric.EpochWise object>)

Detaches current metric from the engine so that no metric computation is done during the run. This method, in conjunction with attach(), can be useful if several metrics need to be computed with different periods. For example, one metric is computed every training epoch and another (e.g. a more expensive one) only every n-th training epoch; see the sketch after the examples below.

Parameters
  • engine (Engine) – the engine from which the metric must be detached

  • usage (Union[str, MetricUsage]) – the usage of the metric. Valid string values should be ‘epoch_wise’ (default) or ‘batch_wise’.

Return type

None

Examples

metric = ...
engine = ...
metric.detach(engine)

assert "mymetric" not in engine.run(data).metrics

assert not metric.is_attached(engine)

Example with usage:

metric = ...
engine = ...
metric.detach(engine, usage="batch_wise")

assert "mymetric" not in engine.run(data).metrics

assert not metric.is_attached(engine, usage="batch_wise")
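
A sketch of the different-period pattern mentioned above (not from the original docs; trainer, evaluator, val_data and expensive_metric are assumed to exist):

from ignite.engine import Events

@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(trainer):
    # Hypothetical: compute the expensive metric only every 5th epoch.
    if trainer.state.epoch % 5 == 0:
        expensive_metric.attach(evaluator, "expensive")
    evaluator.run(val_data)
    if expensive_metric.is_attached(evaluator):
        expensive_metric.detach(evaluator)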

is_attached(engine, usage=<ignite.metrics.metric.EpochWise object>)

Checks if current metric is attached to provided engine. If attached, metric’s computed value is written to engine.state.metrics dictionary.

Parameters
  • engine (Engine) – the engine on which to check whether the metric is attached

  • usage (Union[str, MetricUsage]) – the usage of the metric. Valid string values should be ‘epoch_wise’ (default) or ‘batch_wise’.

Return type

bool

iteration_completed(engine)

Helper method to update metric’s computation. It is automatically attached to the engine with attach().

Parameters

engine (Engine) – the engine to which the metric must be attached

Return type

None

Note

engine.state.output is used to compute metric values. The majority of implemented metrics accept the following formats for engine.state.output: (y_pred, y) or {'y_pred': y_pred, 'y': y}. y_pred and y can be torch tensors or list of tensors/numbers if applicable.

Changed in version 0.4.5: y_pred and y can be torch tensors or lists of tensors/numbers.

load_state_dict(state_dict)

Replaces the internal state of the class with the provided state dict data; see the round-trip sketch under state_dict() below.

If there’s an active distributed configuration, the process uses its rank to pick the proper value from the list of values saved under each attribute’s name in the dict.

Parameters

state_dict (Mapping) – a dict containing attributes of the metric specified in its _state_dict_all_req_keys attribute.

Return type

None

abstract reset()

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

started(engine)

Helper method to start data gathering for metric’s computation. It is automatically attached to the engine with attach().

Parameters

engine (Engine) – the engine to which the metric must be attached

Return type

None

state_dict()

Returns a state dict with the attributes of the metric specified in its _state_dict_all_req_keys attribute. Can be used to save the internal state of the class.

Return type

OrderedDict
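
Examples

A round-trip sketch with load_state_dict(), not from the original docs, assuming a custom metric that lists its state attributes in _state_dict_all_req_keys:

from ignite.metrics import Metric

class CountMetric(Metric):  # hypothetical illustrative metric
    _state_dict_all_req_keys = ("_count",)

    def reset(self):
        self._count = 0

    def update(self, output):
        self._count += 1

    def compute(self):
        return self._count

metric = CountMetric()
metric.update((None, None))

sd = metric.state_dict()      # OrderedDict holding "_count"
restored = CountMetric()
restored.load_state_dict(sd)  # internal state is replaced with sd's data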

abstract update(output)

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Any) – the output from the engine’s process function.

Return type

None
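
Examples

A minimal sketch of the reset/update/compute lifecycle described above, using a metric manually outside of an Engine (the random data below is illustrative only):

import torch
from ignite.metrics import Accuracy

acc = Accuracy()
acc.reset()                       # start of an "epoch"
for _ in range(10):               # update() is called once per batch
    y_pred = torch.rand(8, 2)     # scores for 2 classes
    y = torch.randint(0, 2, (8,))
    acc.update((y_pred, y))
print(acc.compute())              # computed at the end of the "epoch"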