SSIM

class ignite.metrics.SSIM(data_range, kernel_size=(11, 11), sigma=(1.5, 1.5), k1=0.01, k2=0.03, gaussian=True, output_transform=<function SSIM.<lambda>>, device=device(type='cpu'))[source]

Computes the Structural Similarity Index Measure (SSIM).

  • update must receive output of the form (y_pred, y).

Parameters
  • data_range (Union[int, float]) – Range of the image. Typically, 1.0 or 255.

  • kernel_size (Union[int, Sequence[int]]) – Size of the kernel. Default: (11, 11)

  • sigma (Union[float, Sequence[float]]) – Standard deviation of the gaussian kernel. Argument is used if gaussian=True. Default: (1.5, 1.5)

  • k1 (float) – Parameter of SSIM. Default: 0.01

  • k2 (float) – Parameter of SSIM. Default: 0.03

  • gaussian (bool) – True to use a Gaussian kernel, False to use a uniform kernel. Default: True

  • output_transform (Callable) – A callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
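
For orientation, k1, k2, and data_range enter SSIM through its stabilizing constants. A standard formulation (the widely used definition from Wang et al., 2004; the implementation may differ in minor details) is, in LaTeX notation:

\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}, \qquad c_1 = (k_1 L)^2, \quad c_2 = (k_2 L)^2

where \mu_x, \mu_y are local means, \sigma_x^2, \sigma_y^2 are local variances, \sigma_{xy} is the local covariance (all computed under the Gaussian or uniform window), and L is data_range.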

Examples

To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine's process_function needs to be in the format of (y_pred, y) or {'y_pred': y_pred, 'y': y, ...}. If not, output_transform can be added to the metric to transform the output into the form expected by the metric, as in the sketch below.
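
For instance, if the engine's output is a mapping with different key names, a minimal output_transform sketch could be (the keys 'prediction' and 'target' are assumptions about your output, not fixed by the library):

from ignite.metrics import SSIM

# Hypothetical: adapt a dict-shaped engine output to the (y_pred, y) tuple
# that the metric expects.
metric = SSIM(data_range=1.0, output_transform=lambda out: (out['prediction'], out['target']))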

y_pred and y can be un-normalized or normalized image tensors; data_range should be set to match their value range. y_pred and y must have the same shape.
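
As a sketch of matching data_range to the input scale (the values follow the "typically 1.0 or 255" note above):

from ignite.metrics import SSIM

# Images normalized to [0, 1]
ssim_unit = SSIM(data_range=1.0)

# Un-normalized images with values in [0, 255]
ssim_255 = SSIM(data_range=255)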

For more information on how metrics work with Engine, visit Attach Engine API.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = SSIM(data_range=1.0)
metric.attach(default_evaluator, 'ssim')
preds = torch.rand([4, 3, 16, 16])
target = preds * 0.75
state = default_evaluator.run([[preds, target]])
print(state.metrics['ssim'])
0.9218971...

New in version 0.4.2.

Methods

  • compute – Computes the metric based on its accumulated state.

  • reset – Resets the metric to its initial state.

  • update – Updates the metric's state using the passed batch output.

compute()[source]

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.

reset()[source]

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

update(output)[source]

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Sequence[torch.Tensor]) – the output from the engine's process function.

Return type

None
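
As a minimal sketch of this reset/update/compute lifecycle used without an Engine (tensor shapes are illustrative, following the (batch, channel, height, width) layout of the example above):

import torch
from ignite.metrics import SSIM

metric = SSIM(data_range=1.0)
metric.reset()                       # clear accumulated state
y_pred = torch.rand(4, 3, 16, 16)    # same shape convention as the doctest above
y = y_pred * 0.75
metric.update((y_pred, y))           # accumulate one batch
print(metric.compute())              # SSIM over all accumulated batches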