
MedianAbsoluteError

class ignite.metrics.regression.MedianAbsoluteError(output_transform=<function MedianAbsoluteError.<lambda>>, device=device(type='cpu'))[source]

Calculates the Median Absolute Error.

\text{MdAE} = \text{MD}_{j=1,n} \left( |A_j - P_j| \right)

where A_j is the ground truth and P_j is the predicted value.

More details can be found in Botchkarev 2018.
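
In other words, MdAE is the median of the element-wise absolute errors. A minimal sketch of the formula in plain PyTorch (the values below are illustrative only):

import torch

# Hypothetical ground truth A and predictions P
A = torch.tensor([3.0, -0.5, 2.0, 7.0])
P = torch.tensor([2.5, 0.0, 2.0, 8.0])

# Median of |A_j - P_j|; the sorted errors are [0.0, 0.5, 0.5, 1.0]
print(torch.median(torch.abs(A - P)).item())  # 0.5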

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must have the same shape, (N,) or (N, 1), and be of type float32.
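
For example, both of the following pairs satisfy the shape and dtype requirements (a quick sketch; the values are arbitrary, and torch.rand produces float32 by default):

y_pred = torch.rand(8)      # shape (N,)
y = torch.rand(8)
# or equivalently
y_pred = torch.rand(8, 1)   # shape (N, 1)
y = torch.rand(8, 1)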

Warning

The current implementation stores all input data (output and target) as tensors before computing the metric. This can potentially lead to a memory error if the input data is larger than available RAM.

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y} (see the sketch after this parameter list).

  • device (Union[str, device]) – optional device specification for internal storage.
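
For instance, if a process_function returned a triple such as (y_pred, aux, y), a transform can pick out the pair the metric expects. A minimal sketch, using hypothetical names:

# Hypothetical engine output: (y_pred, aux, y);
# select only the prediction and target for the metric.
def select_pred_and_target(output):
    y_pred, _aux, y = output
    return y_pred, y

metric = MedianAbsoluteError(output_transform=select_pred_and_target)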

Examples

To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine’s process_function needs to be in the format (y_pred, y) or {'y_pred': y_pred, 'y': y, ...}.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup::`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = MedianAbsoluteError()
metric.attach(default_evaluator, 'mae')
y_true = torch.tensor([0, 1, 2, 3, 4, 5])
y_pred = y_true * 0.75
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['mae'])
0.625
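
The printed value can be verified by hand: the sorted absolute errors are [0.0, 0.25, 0.5, 0.75, 1.0, 1.25], and, consistent with the output above, the median of an even number of samples is taken as the average of the two middle values, (0.5 + 0.75) / 2 = 0.625. A quick check of the same arithmetic with plain PyTorch (note that torch.median itself would instead return the lower middle value, 0.5):

errors = torch.sort(torch.abs(y_pred - y_true)).values
# errors: tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000, 1.2500])
print(((errors[2] + errors[3]) / 2).item())  # 0.625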

Methods