Loss#

class ignite.metrics.Loss(loss_fn, output_transform=<function Loss.<lambda>>, batch_size=<built-in function len>, device=device(type='cpu'))[source]#

Calculates the average loss according to the passed loss_fn.

Parameters
  • loss_fn (Callable) – a callable taking a prediction tensor, a target tensor, and optionally other arguments, and returning the average loss over all observations in the batch.

  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs (see the sketch after this parameter list). The output is expected to be a tuple (prediction, target) or (prediction, target, kwargs), where kwargs is a dictionary of extra keyword arguments. If extra keyword arguments are provided, they are passed to loss_fn.

  • batch_size (Callable) – a callable taking a target tensor and returning the size of its first dimension (usually the batch size).

  • device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
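For instance, here is a minimal sketch of output_transform in action; the evaluation step, the model and the extra "x" field are assumptions for illustration, not part of this API. The transform picks only the (prediction, target) pair out of a richer dictionary output.

import torch.nn.functional as F
from ignite.engine import Engine
from ignite.metrics import Loss

def eval_step(engine, batch):
    x, y = batch
    y_pred = model(x)  # `model` is assumed to be defined elsewhere
    # the step returns more than the metric needs
    return {"x": x, "y": y, "y_pred": y_pred}

evaluator = Engine(eval_step)

# keep only the (prediction, target) tuple for the Loss metric
loss_metric = Loss(F.nll_loss, output_transform=lambda out: (out["y_pred"], out["y"]))
loss_metric.attach(evaluator, "nll")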

required_output_keys#

if engine.state.output is a dictionary, this attribute defines the keys that must be present in it. Default, ("y_pred", "y", "criterion_kwargs"). This is useful when the criterion function requires additional arguments, which can be passed using criterion_kwargs. See an example below.

Type

Optional[Tuple]

Examples

Let’s implement a Loss metric whose criterion function requires x, y_pred, y and criterion_kwargs as input. The example below shows how to set up a standard metric like Accuracy together with the Loss metric, using an evaluator created with the create_supervised_evaluator() method; a hedged sketch of that setup follows the doctest example.

For more information on how metrics work with Engine, visit Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
model = default_model
criterion = nn.NLLLoss()
metric = Loss(criterion)
metric.attach(default_evaluator, 'loss')
y_pred = torch.tensor([[0.1, 0.4, 0.5], [0.1, 0.7, 0.2]])
y_true = torch.tensor([2, 2]).long()
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['loss'])
-0.3499999...
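The criterion_kwargs flow described above can be sketched as follows. This is a minimal, hedged example: the per-class weights, the model and data_loader are assumptions made for illustration. Because create_supervised_evaluator() is given an output_transform that returns a dictionary containing the keys listed in required_output_keys, the Loss metric forwards criterion_kwargs to the criterion function, while Accuracy simply picks up "y_pred" and "y".

import torch
import torch.nn.functional as F

from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy, Loss

# hypothetical per-class weights forwarded to F.nll_loss via criterion_kwargs
criterion_kwargs = {"weight": torch.tensor([0.3, 0.7, 1.0])}

metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(F.nll_loss),
}

evaluator = create_supervised_evaluator(
    model,  # assumed: a classifier returning log-probabilities over 3 classes
    metrics=metrics,
    output_transform=lambda x, y, y_pred: {
        "x": x, "y": y, "y_pred": y_pred, "criterion_kwargs": criterion_kwargs,
    },
)

state = evaluator.run(data_loader)  # `data_loader` is assumed to exist
print(state.metrics["Loss"], state.metrics["Accuracy"])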

Methods

compute – Computes the metric based on its accumulated state.

reset – Resets the metric to its initial state.

update – Updates the metric's state using the passed batch output.

compute()[source]#

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.

reset()[source]#

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

update(output)[source]#

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Sequence[Union[Tensor, Dict]]) – the output from the engine’s process function.

Return type

None
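
Taken together, reset(), update() and compute() can also be driven by hand, without attaching the metric to an engine. A minimal sketch, reusing the tensors from the example above:

import torch
import torch.nn.functional as F
from ignite.metrics import Loss

metric = Loss(F.nll_loss)
metric.reset()                                  # clear any accumulated state
y_pred = torch.tensor([[0.1, 0.4, 0.5], [0.1, 0.7, 0.2]])
y_true = torch.tensor([2, 2]).long()
metric.update((y_pred, y_true))                 # accumulate one batch
print(metric.compute())                         # average loss, here -0.35 as in the attached example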