RunningAverage

class ignite.metrics.RunningAverage(src=None, alpha=0.98, output_transform=None, epoch_bound=True, device=None)

Compute the running average of a metric or of the output of a process function.

Parameters
  • src (Optional[Metric]) – input source: an instance of Metric or None. The latter corresponds to engine.state.output, which holds the output of the process function.

  • alpha (float) – running average decay factor, default 0.98 (see the recurrence sketch after this parameter list)

  • output_transform (Optional[Callable]) – a function used to transform the engine's output when src is None, i.e. when the running average is computed over the process function's output. Otherwise it should be None.

  • epoch_bound (bool) – whether the running average should be reset after each epoch (defaults to True).

  • device (Optional[Union[str, device]]) – specifies which device updates are accumulated on. Should be None when src is an instance of Metric, as the running average will use the src’s device. Otherwise, defaults to CPU. Only applicable when the computed value from the metric is a tensor.
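
The decay rule itself is not spelled out above, so here is a minimal pure-Python sketch of the recurrence implied by the doctest output below: the first observed value seeds the average, and each later value v is blended in as avg = alpha * avg + (1 - alpha) * v. The helper name running_average is illustrative, not part of the ignite API.

def running_average(values, alpha=0.98):
    """Illustrative sketch of the decay rule; not part of ignite."""
    avg = None
    for v in values:
        # the first value seeds the average; later values are blended in
        avg = v if avg is None else alpha * avg + (1 - alpha) * v
        yield avg

# The per-batch accuracies in the first example below are 1, 0, 1, 1, 0, 1;
# feeding them through the recurrence reproduces its printed values
# (up to float formatting): 1.0, 0.98, 0.98039..., 0.98079..., 0.96117..., 0.96195...
print(list(running_average([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])))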

Examples

For more information on how metrics work with Engine, visit the Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
default_trainer = get_default_trainer()

accuracy = Accuracy()
metric = RunningAverage(accuracy)
metric.attach(default_trainer, 'running_avg_accuracy')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_accuracy'])

y_true = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]
y_pred = [torch.tensor(y) for y in [[0], [0], [0], [1], [1], [1]]]

state = default_trainer.run(zip(y_pred, y_true))

Output:

1.0
0.98
0.98039...
0.98079...
0.96117...
0.96195...

When src is None, the running average is computed over the process function's output instead:

default_trainer = get_default_trainer()

metric = RunningAverage(output_transform=lambda x: x.item())
metric.attach(default_trainer, 'running_avg_output')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_output'])

y = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]

state = default_trainer.run(y)

Output:

0.0
0.020000...
0.019600...
0.039208...
0.038423...
0.057655...
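
A common real-world use of this second form is smoothing a noisy training loss. Below is a minimal runnable sketch; the trainer, the "smoothed_loss" name, and the convention that the process function returns the per-batch loss are assumptions for illustration, not part of the example above.

import torch
from ignite.engine import Engine, Events
from ignite.metrics import RunningAverage

def train_step(engine, batch):
    # stand-in for a real training step: return the batch "loss" directly
    return float(batch)

trainer = Engine(train_step)
RunningAverage(output_transform=lambda loss: loss).attach(trainer, "smoothed_loss")

@trainer.on(Events.ITERATION_COMPLETED)
def log_smoothed_loss():
    print(trainer.state.metrics["smoothed_loss"])

trainer.run([1.0, 0.5, 0.25, 0.125])  # prints a smoothed version of the losses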

Methods

attach

Attaches current metric to provided engine.

compute

Computes the metric based on its accumulated state.

reset

Resets the metric to its initial state.

update

Updates the metric's state using the passed batch output.
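
Taken together, these four methods form the standard ignite metric lifecycle. The wiring below is a simplified sketch of how attach could hook them to engine events, inferred from the documented behavior (reset at each epoch start when epoch_bound=True, update and publish on every iteration); it is not the actual implementation.

from ignite.engine import Events

def sketch_attach(metric, engine, name):
    # reset the running average at each epoch start (epoch_bound=True)
    engine.add_event_handler(Events.EPOCH_STARTED, metric.started)
    # feed each batch output into the metric
    engine.add_event_handler(Events.ITERATION_COMPLETED, metric.iteration_completed)
    # compute and store the value under engine.state.metrics[name]
    engine.add_event_handler(Events.ITERATION_COMPLETED, metric.completed, name)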

attach(engine, name, _usage=<ignite.metrics.metric.EpochWise object>)

Attaches current metric to provided engine. At the end of the engine's run, the engine.state.metrics dictionary will contain the computed metric's value under the provided name.

Parameters
  • engine (Engine) – the engine to which the metric must be attached

  • name (str) – the name under which the computed value is stored in engine.state.metrics

  • _usage (Union[str, MetricUsage]) – metric usage, defaulting to EpochWise()

Return type

None

Examples

metric = ...
metric.attach(engine, "mymetric")

assert "mymetric" in engine.run(data).metrics

assert metric.is_attached(engine)

Example with usage:

metric = ...
metric.attach(engine, "mymetric", usage=BatchWise.usage_name)

assert "mymetric" in engine.run(data).metrics

assert metric.is_attached(engine, usage=BatchWise.usage_name)
compute()

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.
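
As a quick illustration of this failure mode, a fresh source metric that has not accumulated any data cannot be computed. A minimal sketch, assuming the wrapped metric's NotComputableError propagates through compute():

from ignite.exceptions import NotComputableError
from ignite.metrics import Accuracy, RunningAverage

metric = RunningAverage(Accuracy())
metric.reset()
try:
    metric.compute()  # the source Accuracy has seen no examples yet
except NotComputableError as e:
    print(e)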

required_output_keys: Optional[Tuple] = None

reset()

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

update(output)

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Sequence) – the output from the engine's process function.

Return type

None