RunningAverage

class ignite.metrics.RunningAverage(src=None, alpha=0.98, output_transform=None, epoch_bound=None, device=None)

Compute the running average of a metric or of the output of the process function.

Parameters
  • src (Optional[Metric]) – input source: an instance of Metric or None. The latter corresponds to engine.state.output, which holds the output of the process function.

  • alpha (float) – running average decay factor, default 0.98; see the sketch after this parameter list.

  • output_transform (Optional[Callable]) – a function used to transform the output if src is None, in which case the input corresponds to the output of the process function. Otherwise it should be None.

  • epoch_bound (Optional[bool]) – whether the running average should be reset after each epoch. It is deprecated in favor of the usage argument of the attach() method: setting epoch_bound to True is equivalent to usage=SingleEpochRunningBatchWise() and setting it to False is equivalent to usage=RunningBatchWise() in attach(). Default None.

  • device (Optional[Union[str, device]]) – specifies which device updates are accumulated on. Should be None when src is an instance of Metric, as the running average will use the src’s device. Otherwise, defaults to CPU. Only applicable when the computed value from the metric is a tensor.
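
The alpha factor implements an exponential moving average: the first value is taken as-is, and each subsequent value contributes with weight 1 - alpha while the previous average is decayed by alpha. A minimal sketch of that arithmetic in plain Python (an illustration of the update rule, not the library's internal implementation):

# Exponential moving average as controlled by `alpha`.
def ema(values, alpha=0.98):
    avg = None
    for v in values:
        # The first value is used directly; later values are blended
        # in with weight 1 - alpha.
        avg = v if avg is None else alpha * avg + (1 - alpha) * v
        yield avg

print(list(ema([1.0, 0.0, 1.0])))  # [1.0, 0.98, ~0.9804]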

Examples

For more information on how the metric works with an Engine, visit the Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup::`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
default_trainer = get_default_trainer()

accuracy = Accuracy()
metric = RunningAverage(accuracy)
metric.attach(default_trainer, 'running_avg_accuracy')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_accuracy'])

y_true = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]
y_pred = [torch.tensor(y) for y in [[0], [0], [0], [1], [1], [1]]]

state = default_trainer.run(zip(y_pred, y_true))
1.0
0.98
0.98039...
0.98079...
0.96117...
0.96195...
default_trainer = get_default_trainer()

metric = RunningAverage(output_transform=lambda x: x.item())
metric.attach(default_trainer, 'running_avg_accuracy')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_accuracy'])

y = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]

state = default_trainer.run(y)
0.0
0.020000...
0.019600...
0.039208...
0.038423...
0.057655...
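
The printed values can be sanity-checked by hand: they are the exponential moving average with alpha=0.98 of the raw outputs 0, 1, 0, 1, 0, 1 (and the first example is the same rule applied to the per-batch accuracies 1, 0, 1, 1, 0, 1). A standalone check, separate from the doctests above:

avg = None
for v in [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]:
    # First value is taken as-is; afterwards avg = 0.98 * avg + 0.02 * v.
    avg = v if avg is None else 0.98 * avg + 0.02 * v
    print(avg)  # 0.0, 0.02, 0.0196, 0.039208, 0.03842..., 0.05765...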

Methods

attach – Attach the metric to the engine using the events determined by the usage.

compute – Computes the metric based on its accumulated state.

detach – Detaches the current metric from the engine; afterwards no metric computation is done during the run.

reset – Resets the metric to its initial state.

update – Updates the metric's state using the passed batch output.

attach(engine, name, usage=<ignite.metrics.metric.RunningBatchWise object>)

Attach the metric to the engine using the events determined by the usage.

Parameters
  • engine (Engine) – the engine to get attached to.

  • name (str) – the name under which the metric is inserted into the engine.state.metrics dictionary.

  • usage (Union[str, MetricUsage]) –

    the usage determining on which events the metric is reset, updated and computed. It should be an instance of one of the MetricUsage classes listed in the following table.

    usage class – Description

    RunningBatchWise – Running average of the src metric or engine.state.output is computed across batches. In the former case, on each batch, src is reset, updated and computed, and then its value is retrieved. Default.

    SingleEpochRunningBatchWise – Same as above, but the running average is computed across the batches of a single epoch, so it is reset at the end of each epoch.

    RunningEpochWise – Running average of the src metric or engine.state.output is computed across epochs. In the former case, src works as if it were attached in an EpochWise manner, and its computed value is retrieved at the end of the epoch. The latter case doesn't make much sense for this usage, as only the engine.state.output of the last batch would then be retrieved.

Return type

None

If src is not given, RunningAverage retrieves engine.state.output at usage.ITERATION_COMPLETED. The running average itself is computed and updated at the usage.COMPLETED event, using either src (by manually calling its compute method) or the retrieved engine.state.output. If src is given, it is updated at usage.ITERATION_COMPLETED, and its reset event is determined by the usage type: if isinstance(usage, BatchWise) holds true, src is reset on BatchWise().STARTED, otherwise on EpochWise().STARTED if isinstance(usage, EpochWise).

Changed in version 0.5.1: Added usage argument
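
For instance, to reset the running average at each epoch boundary (the replacement for the deprecated epoch_bound=True), pass SingleEpochRunningBatchWise to attach(). A brief sketch, assuming the usage classes are importable from ignite.metrics.metric, as the default value in the signature above suggests:

from ignite.metrics.metric import SingleEpochRunningBatchWise

trainer = get_default_trainer()
metric = RunningAverage(Accuracy())
# Reset the running average at the start of every epoch instead of
# letting it run across the whole training run.
metric.attach(trainer, 'running_avg_accuracy', usage=SingleEpochRunningBatchWise())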

compute()

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.
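
In practice compute() is invoked by the handlers that attach() registers, and its return value is stored under the attach name in engine.state.metrics. For instance, after the run in the first example above, the final running average can be read back directly:

print(state.metrics['running_avg_accuracy'])  # 0.96195..., the last value printed above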

detach(engine, usage=<ignite.metrics.metric.RunningBatchWise object>)

Detaches the current metric from the engine so that no metric computation is done during the run. This method, in conjunction with attach(), can be useful if several metrics need to be computed with different periods. For example, one metric is computed every training epoch while another (e.g. a more expensive one) is computed every n-th training epoch.

Parameters
  • engine (Engine) – the engine from which the metric must be detached

  • usage (Union[str, MetricUsage]) – the usage of the metric. Valid string values should be ‘epoch_wise’ (default) or ‘batch_wise’.

Return type

None

Examples

metric = ...
engine = ...
metric.detach(engine)

assert "mymetric" not in engine.run(data).metrics

assert not metric.is_attached(engine)

Example with usage:

metric = ...
engine = ...
metric.detach(engine, usage="batch_wise")

assert "mymetric" not in engine.run(data).metrics

assert not metric.is_attached(engine, usage="batch_wise")

required_output_keys: Optional[Tuple] = None
reset()

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

update(output)

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Union[Tensor, float]) – the output of the engine's process function.

Return type

None
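
A common pattern pairs a RunningAverage of the loss with a progress bar, so the smoothed value is displayed during training. A brief sketch, assuming ProgressBar is importable from ignite.handlers (in older releases it lived in ignite.contrib.handlers):

from ignite.handlers import ProgressBar

trainer = get_default_trainer()
# Smooth the raw per-iteration output (e.g. the loss value) under the name "loss".
RunningAverage(output_transform=lambda x: x).attach(trainer, "loss")
# Display the running value in the progress bar on every iteration.
ProgressBar().attach(trainer, metric_names=["loss"])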