RunningAverage
- class ignite.metrics.RunningAverage(src=None, alpha=0.98, output_transform=None, epoch_bound=True, device=None)
Compute the running average of a metric or of the output of the process function.
- Parameters
src (Optional[Metric]) – input source: an instance of Metric or None. The latter corresponds to engine.state.output, which holds the output of the process function.
alpha (float) – running average decay factor; default 0.98 (see the sketch after this parameter list).
output_transform (Optional[Callable]) – a function used to transform the output if src is None and the source therefore corresponds to the output of the process function. Otherwise it should be None.
epoch_bound (bool) – whether the running average should be reset after each epoch (defaults to True).
device (Optional[Union[device, str]]) – specifies which device updates are accumulated on. Should be None when src is an instance of Metric, as the running average will use src’s device. Otherwise, defaults to CPU. Only applicable when the computed value from the metric is a tensor.
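The effect of alpha can be read as an exponential moving average. Below is a minimal sketch of that recurrence, written for illustration only (an assumption about the behavior, not ignite's source code); the first observed value seeds the average, which is consistent with the doctest outputs further down:

```python
# Illustrative sketch (an assumption, not ignite's implementation) of the
# exponential-moving-average recurrence implied by `alpha`.
def running_average(values, alpha=0.98):
    avg = None
    for v in values:
        # the first value seeds the average; afterwards apply the EMA update
        avg = v if avg is None else alpha * avg + (1 - alpha) * v
        yield avg

# reproduces the second doctest example below:
print(list(running_average([0.0, 1.0, 0.0, 1.0])))
# ≈ [0.0, 0.02, 0.0196, 0.039208] (up to float rounding)
```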
Examples
For more information on how the metric works with Engine, visit the Attach Engine API documentation.

```python
from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():
    def train_step(engine, batch):
        return batch
    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
```
```python
default_trainer = get_default_trainer()

accuracy = Accuracy()
metric = RunningAverage(accuracy)
metric.attach(default_trainer, 'running_avg_accuracy')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_accuracy'])

y_true = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]
y_pred = [torch.tensor(y) for y in [[0], [0], [0], [1], [1], [1]]]

state = default_trainer.run(zip(y_pred, y_true))
```

```
1.0
0.98
0.98039...
0.98079...
0.96117...
0.96195...
```
```python
default_trainer = get_default_trainer()

metric = RunningAverage(output_transform=lambda x: x.item())
metric.attach(default_trainer, 'running_avg_accuracy')

@default_trainer.on(Events.ITERATION_COMPLETED)
def log_running_avg_metrics():
    print(default_trainer.state.metrics['running_avg_accuracy'])

y = [torch.tensor(y) for y in [[0], [1], [0], [1], [0], [1]]]
state = default_trainer.run(y)
```

```
0.0
0.020000...
0.019600...
0.039208...
0.038423...
0.057655...
```
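These values follow the recurrence sketched above: the first value seeds the average, and each subsequent step computes 0.98 × previous + 0.02 × current. For instance, the third value in this example is 0.98 × 0.02 + 0.02 × 0 = 0.0196, and the first example behaves the same way on per-batch values of 1, 0, 1, 1, 0, 1 (suggesting that when src is a Metric, it is evaluated on each batch before being averaged).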
Methods

attach – Attaches current metric to provided engine.
compute – Computes the metric based on its accumulated state.
reset – Resets the metric to its initial state.
update – Updates the metric's state using the passed batch output.
- attach(engine, name, usage=<ignite.metrics.metric.EpochWise object>)
Attaches current metric to provided engine. At the end of the engine’s run, the engine.state.metrics dictionary will contain the computed metric’s value under the provided name.
- Parameters
engine (Engine) – the engine to which the metric must be attached
name (str) – the name of the metric to attach
usage (Union[str, MetricUsage]) – the usage of the metric. Valid string values should be ignite.metrics.metric.EpochWise.usage_name (default) or ignite.metrics.metric.BatchWise.usage_name.
- Return type
None
Examples
metric = ... metric.attach(engine, "mymetric") assert "mymetric" in engine.run(data).metrics assert metric.is_attached(engine)
Example with usage:
metric = ... metric.attach(engine, "mymetric", usage=BatchWise.usage_name) assert "mymetric" in engine.run(data).metrics assert metric.is_attached(engine, usage=BatchWise.usage_name)
- compute()
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called (see the sketch after this section).
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
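To make the Mapping case above concrete, here is a hypothetical custom metric (the class name and returned keys are invented for illustration); assuming shallow flattening means each top-level key lands directly in engine.state.metrics, both entries would become available under their own names after a run:

```python
# Hypothetical metric illustrating the Mapping return described above.
from ignite.metrics import Metric

class UpdateCounter(Metric):
    def reset(self):
        self._calls = 0

    def update(self, output):
        self._calls += 1

    def compute(self):
        # Returning a Mapping: per the docs above, its entries are
        # (shallow) flattened into engine.state.metrics.
        return {"update_calls": self._calls, "update_calls_doubled": 2 * self._calls}
```

Attached as, say, UpdateCounter().attach(engine, "counter"), the keys "update_calls" and "update_calls_doubled" would then appear in engine.state.metrics (under the flattening assumption above).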
- required_output_keys: Optional[Tuple] = None