
Average#

class ignite.metrics.Average(output_transform=<function Average.<lambda>>, device=device(type='cpu'))[source]#

Helper class to compute arithmetic average of a single variable.

  • update must receive output of the form x.

  • x can be a number or torch.Tensor.

Note

Number of samples is updated following the rule:

  • +1 if input is a number

  • +1 if input is a 1D torch.Tensor

  • +batch_size if input is an ND torch.Tensor. Batch size is the first dimension (shape[0]).

For an input x that is an ND torch.Tensor with N > 1, the first dimension is treated as the batch of samples: x is summed over that dimension and added to the accumulator, i.e. accumulator += x.sum(dim=0).
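A minimal sketch of this accumulation rule (an illustration only, not ignite's internal implementation):

import torch

def sketch_update(accumulator, num_examples, x):
    # Illustration of the rule above; not ignite's actual code.
    if isinstance(x, (int, float)) or (torch.is_tensor(x) and x.ndim <= 1):
        # a number or a 1D tensor counts as a single sample
        accumulator = accumulator + x
        num_examples += 1
    else:
        # an ND tensor contributes shape[0] samples, summed over dim 0
        accumulator = accumulator + x.sum(dim=0)
        num_examples += x.shape[0]
    return accumulator, num_examples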

output_transform can be added to the metric to transform the output into the form expected by the metric.

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs (see the sketch after this list).

  • device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
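For example, if the process_function returns a (y_pred, y) pair and only y_pred should be averaged, output_transform can select it. The snippet below is a hypothetical sketch; the lambda and the "cuda" device string are illustrative assumptions, not part of the class:

# pick the first element of a (y_pred, y) output pair
metric = Average(output_transform=lambda output: output[0])

# keep accumulation on the GPU when updates are CUDA tensors
# (assumes a CUDA device is available)
metric_gpu = Average(device="cuda")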

Examples

For more information on how the metric works with Engine, visit the Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers can be attached to the trainer,
# each test must define its own trainer using `.. testsetup::`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = Average()
metric.attach(default_evaluator, 'avg')
# Case 1. input is a number
data = torch.tensor([0, 1, 2, 3, 4])
state = default_evaluator.run(data)
print(state.metrics['avg'])
2.0
metric = Average()
metric.attach(default_evaluator, 'avg')
# Case 2. input is a 1D torch.Tensor
data = torch.tensor([
    [0, 0, 0],
    [1, 1, 1],
    [2, 2, 2],
    [3, 3, 3]
])
state = default_evaluator.run(data)
print(state.metrics['avg'])
tensor([1.5000, 1.5000, 1.5000], dtype=torch.float64)
metric = Average()
metric.attach(default_evaluator, 'avg')
# Case 3. input is an ND torch.Tensor
data = [
    torch.tensor([[0, 0, 0], [1, 1, 1]]),
    torch.tensor([[2, 2, 2], [3, 3, 3]])
]
state = default_evaluator.run(data)
print(state.metrics['avg'])
tensor([1.5000, 1.5000, 1.5000], dtype=torch.float64)

Methods

compute

Computes the metric based on its accumulated state.

compute()[source]#

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.
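Outside of an Engine, the metric can also be driven manually through the reset / update / compute cycle shared by all ignite metrics. A minimal sketch; note that calling compute() before any update() raises NotComputableError:

import torch
from ignite.metrics import Average

metric = Average()
metric.reset()
metric.update(torch.tensor([1.0, 2.0]))  # one 1D sample
metric.update(torch.tensor([3.0, 4.0]))  # a second 1D sample
print(metric.compute())  # per-element average of the two samples: [2.0, 3.0]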