
KendallRankCorrelation#

class ignite.metrics.regression.KendallRankCorrelation(variant='b', output_transform=<function KendallRankCorrelation.<lambda>>, check_compute_fn=True, device=device(type='cpu'), skip_unrolling=False)[source]#

Calculates the Kendall rank correlation coefficient.

\tau = 1 - \frac{2(\text{number of discordant pairs})}{\binom{n}{2}}

Two prediction-target pairs (P_i, A_i) and (P_j, A_j), where i < j, are said to be concordant when both P_i < P_j and A_i < A_j hold, or both P_i > P_j and A_i > A_j hold.

The number of discordant pairs counts those pairs that are not concordant.
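
As an illustration of the formula, here is a minimal pure-Python sketch that counts discordant pairs directly (the helper name kendall_tau_a is illustrative; with no ties, this value coincides with variants 'b' and 'c'):

from itertools import combinations

def kendall_tau_a(preds, targets):
    # count pairs whose orderings disagree between predictions and targets
    discordant = sum(
        1
        for (p_i, a_i), (p_j, a_j) in combinations(zip(preds, targets), 2)
        if (p_i - p_j) * (a_i - a_j) < 0
    )
    n = len(preds)
    total_pairs = n * (n - 1) // 2  # binomial coefficient C(n, 2)
    return 1 - 2 * discordant / total_pairs

print(kendall_tau_a([0.5, 2.8, 1.9, 1.3, 6.0, 4.1], [0., 1., 2., 3., 4., 5.]))
# 0.4666... (matches the doctest below)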

The computation of this metric is implemented with scipy.stats.kendalltau.
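
Since the metric defers to scipy.stats.kendalltau, the same value can be reproduced directly; a sketch assuming SciPy >= 1.7, where the variant keyword is available:

from scipy.stats import kendalltau

tau, p_value = kendalltau([0.5, 2.8, 1.9, 1.3, 6.0, 4.1],
                          [0., 1., 2., 3., 4., 5.],
                          variant='b')
print(tau)  # 0.4666...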

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must have the same shape, either (N,) or (N, 1).
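
For example, either shape is accepted as long as y and y_pred match; a short sketch with hypothetical tensors:

import torch

y_pred = torch.tensor([0.5, 2.8, 1.9, 1.3])  # shape (4,)
y = torch.tensor([0., 1., 2., 3.])           # shape (4,) matches y_pred

# a column shape (N, 1) is accepted too, provided both tensors use it
y_pred_col = y_pred.unsqueeze(1)             # shape (4, 1)
y_col = y.unsqueeze(1)                       # shape (4, 1)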

Parameters are inherited from Metric.__init__.

Parameters
  • variant (str) – variant of the Kendall rank correlation. 'b' or 'c' is accepted. Details can be found in the scipy.stats.kendalltau documentation. Default: 'b'

  • output_transform (Callable[[...], Any]) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y}. A sketch of such a transform follows after this list.

  • check_compute_fn (bool) – if True, compute_fn is run on the first batch of data to ensure there are no issues. If issues exist, the user is warned that there might be an issue with the compute_fn. Default: True.

  • device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.

  • skip_unrolling (bool) – specifies whether the output should be unrolled before being fed to the update method. Should be true for multi-output models, for example, if y_pred contains multiple outputs as (y_pred_a, y_pred_b). Alternatively, output_transform can be used to handle this.
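
For instance, a minimal sketch of an output_transform that selects one head of a hypothetical two-output model and flattens the tensors into the expected (N,) shape (the names select_first_output, y_pred_a, and y_pred_b are illustrative):

def select_first_output(output):
    # output is assumed to be ((y_pred_a, y_pred_b), y) from process_function
    (y_pred_a, _), y = output
    return y_pred_a.reshape(-1), y.reshape(-1)

metric = KendallRankCorrelation(output_transform=select_first_output)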

Examples

To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine’s process_function needs to be in the format (y_pred, y) or {'y_pred': y_pred, 'y': y, ...}.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.clustering import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests

param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`

def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests

default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = KendallRankCorrelation()
metric.attach(default_evaluator, 'kendall_tau')
y_true = torch.tensor([0., 1., 2., 3., 4., 5.])
y_pred = torch.tensor([0.5, 2.8, 1.9, 1.3, 6.0, 4.1])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['kendall_tau'])
0.4666666666666666
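
The variant is chosen at construction time. For example, a minimal sketch evaluating the 'c' variant on the same data (the resulting value is not shown here, and tau-c may differ from tau-b when ties are present):

metric_c = KendallRankCorrelation(variant='c')
metric_c.attach(default_evaluator, 'kendall_tau_c')
state = default_evaluator.run([[y_pred, y_true]])
# state.metrics['kendall_tau_c'] now holds the tau-c estimate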

New in version 0.5.2.

Methods

  • compute – Computes the metric based on its accumulated state.

  • update – Updates the metric's state using the passed batch output.

compute()[source]#

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.

update(output)[source]#

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Tuple[Tensor, Tensor]) – the output from the engine’s process function.

Return type

None
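
Outside of an Engine, the metric can also be driven manually. A minimal sketch with hypothetical per-batch tensors; the accumulated state spans all update calls, so compute returns the coefficient over every sample seen so far:

import torch
from ignite.metrics.regression import KendallRankCorrelation

metric = KendallRankCorrelation()
metric.reset()
# two batches; predictions and targets accumulate across update calls
metric.update((torch.tensor([0.5, 2.8, 1.9]), torch.tensor([0., 1., 2.])))
metric.update((torch.tensor([1.3, 6.0, 4.1]), torch.tensor([3., 4., 5.])))
print(metric.compute())  # same six points as the doctest above, so 0.4666...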