Loss#
- class ignite.metrics.Loss(loss_fn, output_transform=<function Loss.<lambda>>, batch_size=<built-in function len>, device=device(type='cpu'), skip_unrolling=False)[source]#
Calculates the average loss according to the passed loss_fn.
- Parameters
loss_fn (Callable) – a callable taking a prediction tensor, a target tensor, and optionally other arguments, and returning the average loss over all observations in the batch.
output_transform (Callable) – a callable that is used to transform the Engine's process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. The output is expected to be a tuple (prediction, target) or (prediction, target, kwargs), where kwargs is a dictionary of extra keyword arguments. If extra keyword arguments are provided, they are passed to loss_fn.
batch_size (Callable) – a callable taking a target tensor that returns the size of its first dimension (usually the batch size).
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric's device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
skip_unrolling (bool) – specifies whether the input should be unrolled before it is passed to loss_fn. Should be true for multi-output models, for example, if y_pred contains multiple outputs as (y_pred_a, y_pred_b).
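For instance, if the engine's process_function returns a dictionary, a small output_transform can pick out the prediction and target. A minimal sketch (the dictionary keys here are illustrative, not fixed by the API):

import torch.nn as nn
from ignite.metrics import Loss

criterion = nn.NLLLoss()

# Assumes engine.state.output is a dict such as
# {"y_pred": ..., "y": ..., "batch_loss": ...}; the key names are illustrative.
metric = Loss(criterion, output_transform=lambda out: (out["y_pred"], out["y"]))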
- required_output_keys#
Defines the required keys to be found in engine.state.output if the latter is a dictionary. Default: ("y_pred", "y", "criterion_kwargs"). This is useful when the criterion function requires additional arguments, which can be passed using criterion_kwargs. See the example below.
- Type
Optional[Tuple]
Examples
Let's implement a Loss metric that requires x, y_pred, y and criterion_kwargs as input for the criterion function. In the example below we show how to set up a standard metric like Accuracy and the Loss metric using an evaluator created with the create_supervised_evaluator() method.
For more information on how metrics work with Engine, visit Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():
    def train_step(engine, batch):
        return batch
    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
model = default_model
criterion = nn.NLLLoss()
metric = Loss(criterion)
metric.attach(default_evaluator, 'loss')

y_pred = torch.tensor([[0.1, 0.4, 0.5], [0.1, 0.7, 0.2]])
y_true = torch.tensor([2, 2]).long()
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['loss'])
-0.3499999...
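The criterion_kwargs setup described above could look like the following sketch. The custom criterion and its weight keyword argument are illustrative, not part of the library; the essential point is that the evaluator's output_transform returns a dictionary whose keys match required_output_keys, so the extra arguments are forwarded to loss_fn:

import torch
from torch import nn
import torch.nn.functional as F
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy, Loss

# hypothetical criterion accepting an extra keyword argument
def criterion(y_pred, y, weight=1.0):
    return weight * F.nll_loss(y_pred, y)

criterion_kwargs = {"weight": 0.5}

# illustrative model producing log-probabilities over 3 classes
model = nn.Sequential(nn.Linear(4, 3), nn.LogSoftmax(dim=1))

metrics = {
    "Accuracy": Accuracy(),
    "Loss": Loss(criterion),
}

# Return a dict so that Loss picks up "y_pred", "y" and "criterion_kwargs"
# via its required_output_keys, while Accuracy picks up "y_pred" and "y".
evaluator = create_supervised_evaluator(
    model,
    metrics=metrics,
    output_transform=lambda x, y, y_pred: {
        "x": x, "y": y, "y_pred": y_pred,
        "criterion_kwargs": criterion_kwargs,
    },
)

state = evaluator.run([(torch.rand(8, 4), torch.randint(0, 3, (8,)))])
print(state.metrics["Accuracy"], state.metrics["Loss"])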
Changed in version 0.5.1: the skip_unrolling argument was added.
Methods
compute – Computes the metric based on its accumulated state.
reset – Resets the metric to its initial state.
update – Updates the metric's state using the passed batch output.
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
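Outside of an Engine, the metric can also be driven manually through the reset / update / compute cycle. A minimal sketch reusing the tensors from the example above:

import torch
from torch import nn
from ignite.metrics import Loss

metric = Loss(nn.NLLLoss())
metric.reset()

y_pred = torch.tensor([[0.1, 0.4, 0.5], [0.1, 0.7, 0.2]])
y_true = torch.tensor([2, 2]).long()
metric.update((y_pred, y_true))  # accumulate one batch

print(metric.compute())  # average loss over all updates so far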