MeanSquaredError
- class ignite.metrics.MeanSquaredError(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))
Calculates the mean squared error.
$\text{MSE} = \frac{1}{N} \sum_{i=1}^N \|y_{i} - x_{i}\|^2$, where $y_{i}$ is the prediction tensor and $x_{i}$ is the ground truth tensor.
update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.
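As a quick check of the formula, here is a minimal sketch (with arbitrary, made-up tensors) comparing the metric's value against a direct computation:

import torch

from ignite.metrics import MeanSquaredError

# arbitrary example: 2 samples with 2 features each
y_pred = torch.tensor([[2.0, 4.0], [1.0, 3.0]])
y = torch.tensor([[1.5, 4.5], [0.0, 2.0]])

metric = MeanSquaredError()
metric.update((y_pred, y))

# direct evaluation of the formula: squared differences summed, divided by N samples
manual = ((y_pred - y) ** 2).sum() / y_pred.shape[0]
print(metric.compute(), manual.item())  # both give 1.25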
- Parameters
output_transform (Callable) – a callable that is used to transform the Engine's process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs (see the sketch below). By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y}.
device (Union[str, torch.device]) – specifies which device updates are accumulated on. Setting the metric's device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
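For instance, here is a minimal sketch of output_transform with a hypothetical multi-output step function (two predictions plus the target), computing MSE only for the first output:

import torch

from ignite.engine import Engine
from ignite.metrics import MeanSquaredError

# hypothetical evaluation step of a multi-output model
def eval_step(engine, batch):
    y_pred_a, y_pred_b, y = batch
    return y_pred_a, y_pred_b, y

evaluator = Engine(eval_step)

# keep only the first prediction and the target for this metric
metric = MeanSquaredError(output_transform=lambda out: (out[0], out[2]))
metric.attach(evaluator, 'mse_a')

y = torch.rand(8, 3)
state = evaluator.run([(y + 0.1, y + 0.2, y)])
print(state.metrics['mse_a'])  # 0.1 ** 2 * 3 features = 0.03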
Examples
To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine's process_function needs to be in the format of (y_pred, y) or {'y_pred': y_pred, 'y': y, ...}. If not, output_transform can be added to the metric to transform the output into the form expected by the metric. y_pred and y should have the same shape.

For more information on how metric works with Engine, visit Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():
    def train_step(engine, batch):
        return batch
    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)

metric = MeanSquaredError()
metric.attach(default_evaluator, 'mse')
preds = torch.tensor([
    [1, 2, 4, 1],
    [2, 3, 1, 5],
    [1, 3, 5, 1],
    [1, 5, 1, 11],
])
target = preds * 0.75
state = default_evaluator.run([[preds, target]])
print(state.metrics['mse'])
3.828125
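The mapping form of the output is handled the same way; a minimal sketch with a hypothetical step function that returns {'y_pred': y_pred, 'y': y}:

import torch

from ignite.engine import Engine
from ignite.metrics import MeanSquaredError

# hypothetical step function returning a mapping instead of a tuple
def eval_step(engine, batch):
    y_pred, y = batch
    return {'y_pred': y_pred, 'y': y}

evaluator = Engine(eval_step)

metric = MeanSquaredError()
metric.attach(evaluator, 'mse')

y = torch.rand(4, 2)
state = evaluator.run([(y + 0.5, y)])
print(state.metrics['mse'])  # 0.5 ** 2 * 2 features = 0.5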
Methods

compute – Computes the metric based on its accumulated state.
reset – Resets the metric to its initial state.
update – Updates the metric's state using the passed batch output.
- compute()
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
- reset()
Resets the metric to its initial state.
By default, this is called at the start of each epoch.
- Return type
None
- update(output)
Updates the metric’s state using the passed batch output.
By default, this is called once for each batch.
- Parameters
output (Sequence[torch.Tensor]) – the output from the engine's process function.
- Return type
None
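The three methods can also be called directly, without attaching the metric to an Engine; a minimal sketch with random data:

import torch

from ignite.metrics import MeanSquaredError

metric = MeanSquaredError()
metric.reset()

# accumulate state over a few batches
for _ in range(3):
    y = torch.rand(16, 4)
    y_pred = y + 0.1 * torch.randn(16, 4)
    metric.update((y_pred, y))

# mean squared error over all 48 accumulated samples
print(metric.compute())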