class ignite.metrics.Loss(loss_fn, output_transform=<function Loss.<lambda>>, batch_size=<built-in function len>, device=device(type='cpu'))
Calculates the average loss according to the passed loss_fn.
Parameters
- loss_fn (Callable) – a callable taking a prediction tensor, a target tensor, and optionally other arguments, and returning the average loss over all observations in the batch.
- output_transform (Callable) – a callable that is used to transform the process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. The output is expected to be a tuple (prediction, target) or (prediction, target, kwargs), where kwargs is a dictionary of extra keyword arguments. If extra keyword arguments are provided, they are passed to loss_fn (see the sketch after this list).
- batch_size (Callable) – a callable that takes a target tensor and returns its first dimension size (usually the batch size).
- device (Union[str, torch.device]) – specifies which device updates are accumulated on. Setting the metric's device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
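As an illustration of output_transform, here is a minimal sketch; the dictionary keys below describe a hypothetical process function and are not part of the API:

    from torch import nn
    from ignite.metrics import Loss

    # hypothetical: the engine's process_function returns a dictionary,
    # e.g. {"y_pred": ..., "y": ..., "loss": ...}; the transform picks out
    # the (prediction, target) pair the metric expects
    metric = Loss(nn.NLLLoss(), output_transform=lambda out: (out["y_pred"], out["y"]))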
Attributes
- required_output_keys – the keys that must be found in engine.state.output if the latter is a dictionary. Default, ("y_pred", "y", "criterion_kwargs"). This is useful when the criterion function requires additional arguments, which can be passed using criterion_kwargs. See the examples below.
Let's use a Loss metric whose criterion function takes criterion_kwargs as extra input. The first example below shows the basic setup of the Loss metric on an evaluator Engine; a sketch that passes criterion_kwargs follows it.
    from collections import OrderedDict

    import torch
    from torch import nn, optim

    from ignite.engine import *
    from ignite.handlers import *
    from ignite.metrics import *
    from ignite.utils import *
    from ignite.contrib.metrics.regression import *
    from ignite.contrib.metrics import *

    # create default evaluator for doctests
    def eval_step(engine, batch):
        return batch

    default_evaluator = Engine(eval_step)

    # create default optimizer for doctests
    param_tensor = torch.zeros([1], requires_grad=True)
    default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

    # create default trainer for doctests
    # as handlers could be attached to the trainer,
    # each test must define its own trainer using `.. testsetup:`
    def get_default_trainer():

        def train_step(engine, batch):
            return batch

        return Engine(train_step)

    # create default model for doctests
    default_model = nn.Sequential(OrderedDict([
        ('base', nn.Linear(4, 2)),
        ('fc', nn.Linear(2, 1))
    ]))

    manual_seed(666)
    model = default_model
    criterion = nn.NLLLoss()
    metric = Loss(criterion)
    metric.attach(default_evaluator, 'loss')
    y_pred = torch.tensor([[0.1, 0.4, 0.5], [0.1, 0.7, 0.2]])
    y_true = torch.tensor([2, 2]).long()
    state = default_evaluator.run([[y_pred, y_true]])
    print(state.metrics['loss'])
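The criterion_kwargs usage promised above can be sketched as follows; this is an illustration rather than the documented example: eval_step_with_kwargs is a hypothetical step function, while ignore_index is a real keyword argument of torch.nn.functional.nll_loss. When the engine's output is a dictionary containing the keys listed in required_output_keys, the metric unpacks it and forwards criterion_kwargs to loss_fn:

    import torch
    import torch.nn.functional as F
    from ignite.engine import Engine
    from ignite.metrics import Loss

    # hypothetical step function returning a dictionary output
    def eval_step_with_kwargs(engine, batch):
        y_pred, y = batch
        return {"y_pred": y_pred, "y": y, "criterion_kwargs": {"ignore_index": 0}}

    evaluator = Engine(eval_step_with_kwargs)

    # F.nll_loss accepts ignore_index as a keyword argument, so the
    # criterion_kwargs entry above is forwarded to it on every update
    metric = Loss(F.nll_loss)
    metric.attach(evaluator, 'loss')

    y_pred = torch.tensor([[0.1, 0.4, 0.5], [0.1, 0.7, 0.2]])
    y_true = torch.tensor([2, 2]).long()
    state = evaluator.run([[y_pred, y_true]])
    print(state.metrics['loss'])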
Methods
- compute() – Computes the metric based on its accumulated state.
- reset() – Resets the metric to its initial state.
- update(output) – Updates the metric's state using the passed batch output.
compute()
Computes the metric based on its accumulated state. By default, this is called at the end of each epoch.
- Return type: float
- Raises: NotComputableError – raised when the metric cannot be computed.
reset()
Resets the metric to its initial state. By default, this is called at the start of each epoch.
- Return type: None
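Outside of an Engine, the three methods can also be driven by hand; a minimal sketch:

    import torch
    from torch import nn
    from ignite.metrics import Loss

    metric = Loss(nn.NLLLoss())
    metric.reset()                    # clear the accumulated state
    y_pred = torch.tensor([[0.1, 0.4, 0.5], [0.1, 0.7, 0.2]])
    y_true = torch.tensor([2, 2]).long()
    metric.update((y_pred, y_true))   # accumulate one batch
    print(metric.compute())           # average loss over all updates so far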