KendallRankCorrelation
- class ignite.metrics.regression.KendallRankCorrelation(variant='b', output_transform=<function KendallRankCorrelation.<lambda>>, check_compute_fn=True, device=device(type='cpu'), skip_unrolling=False)
Calculates the Kendall rank correlation coefficient

\tau = 1 - \frac{2\,(\text{number of discordant pairs})}{\binom{n}{2}}

Two prediction-target pairs (P_i, A_i) and (P_j, A_j), where i < j, are said to be concordant when both P_i < P_j and A_i < A_j hold, or when both P_i > P_j and A_i > A_j hold.
The number of discordant pairs counts the number of pairs that are not concordant.
The computation of this metric is implemented with scipy.stats.kendalltau.
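For illustration, the coefficient can be reproduced directly from the pair-counting definition above. The following is a minimal standalone sketch (not part of the ignite API) that counts concordant and discordant pairs on a small tie-free sample and cross-checks the result against scipy.stats.kendalltau:

from itertools import combinations

import torch
from scipy.stats import kendalltau

y_true = torch.tensor([0., 1., 2., 3., 4., 5.])
y_pred = torch.tensor([0.5, 2.8, 1.9, 1.3, 6.0, 4.1])

# Count concordant and discordant pairs as defined above.
concordant = discordant = 0
for i, j in combinations(range(len(y_true)), 2):
    s = (y_pred[j] - y_pred[i]) * (y_true[j] - y_true[i])
    if s > 0:
        concordant += 1
    elif s < 0:
        discordant += 1

# With no ties, tau reduces to (concordant - discordant) / (n choose 2).
n = len(y_true)
tau_by_hand = (concordant - discordant) / (n * (n - 1) / 2)

tau_scipy, _ = kendalltau(y_pred.numpy(), y_true.numpy())
print(tau_by_hand, tau_scipy)  # both are 7/15, about 0.4667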
update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}. y and y_pred must be of the same shape (N, ) or (N, 1).
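For illustration, a minimal sketch of driving the metric through the usual reset / update / compute workflow of ignite metrics, showing both accepted shapes (the random (N, 1) tensors are only placeholders):

import torch
from ignite.metrics.regression import KendallRankCorrelation

metric = KendallRankCorrelation()

# (N,) tensors ...
metric.reset()
metric.update((torch.tensor([0.5, 2.8, 1.9, 1.3, 6.0, 4.1]),
               torch.tensor([0., 1., 2., 3., 4., 5.])))
print(metric.compute())

# ... or (N, 1) column vectors of the same shape.
metric.reset()
metric.update((torch.rand(8, 1), torch.rand(8, 1)))
print(metric.compute())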
Parameters are inherited from Metric.__init__.
- Parameters
variant (str) – variant of the Kendall rank correlation. Either 'b' or 'c' is accepted; details of the two variants are described in the documentation of scipy.stats.kendalltau. Default: 'b'. (See the sketch after this parameter list for a usage example.)
output_transform (Callable[[...], Any]) – a callable that is used to transform the Engine's process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y}.
check_compute_fn (bool) – if True, compute_fn is run on the first batch of data to ensure there are no issues. If issues exist, the user is warned that there might be a problem with compute_fn. Default: True.
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric's device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
skip_unrolling (bool) – specifies whether the output should be unrolled before being fed to the update method. Should be True for multi-output models, for example, if y_pred contains multi-output as (y_pred_a, y_pred_b). Alternatively, output_transform can be used to handle this.
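To illustrate the parameters above, here is a minimal sketch that combines variant='c', an output_transform adapting a custom output mapping, and an explicit device; the eval step, its output keys, and the metric name are hypothetical and only for illustration:

import torch
from ignite.engine import Engine
from ignite.metrics.regression import KendallRankCorrelation

# Hypothetical process_function whose output keys differ from the default
# ('y_pred', 'y'), so output_transform is needed to adapt it.
def eval_step(engine, batch):
    y_pred, y_true = batch
    return {"prediction": y_pred, "target": y_true}

evaluator = Engine(eval_step)

metric = KendallRankCorrelation(
    variant="c",  # tau-c instead of the default tau-b
    output_transform=lambda out: (out["prediction"], out["target"]),
    device="cpu",  # accumulate updates on the CPU
)
metric.attach(evaluator, "kendall_tau_c")

y_pred = torch.tensor([0.5, 2.8, 1.9, 1.3, 6.0, 4.1])
y_true = torch.tensor([0., 1., 2., 3., 4., 5.])
state = evaluator.run([[y_pred, y_true]])
print(state.metrics["kendall_tau_c"])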
Examples
To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine's process_function needs to be in the format of (y_pred, y) or {'y_pred': y_pred, 'y': y, ...}.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.clustering import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():
    def train_step(engine, batch):
        return batch
    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = KendallRankCorrelation()
metric.attach(default_evaluator, 'kendall_tau')
y_true = torch.tensor([0., 1., 2., 3., 4., 5.])
y_pred = torch.tensor([0.5, 2.8, 1.9, 1.3, 6.0, 4.1])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['kendall_tau'])
0.4666666666666666
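As noted above, the engine's process_function may also return the mapping form {'y_pred': y_pred, 'y': y}. A minimal sketch of that variant, with a locally defined evaluator instead of the doctest setup above (the eval step is illustrative):

import torch
from ignite.engine import Engine
from ignite.metrics.regression import KendallRankCorrelation

# process_function returning the mapping form instead of a (y_pred, y) tuple
def eval_step(engine, batch):
    y_pred, y_true = batch
    return {"y_pred": y_pred, "y": y_true}

evaluator = Engine(eval_step)
KendallRankCorrelation().attach(evaluator, "kendall_tau")

y_true = torch.tensor([0., 1., 2., 3., 4., 5.])
y_pred = torch.tensor([0.5, 2.8, 1.9, 1.3, 6.0, 4.1])
state = evaluator.run([[y_pred, y_true]])
print(state.metrics["kendall_tau"])  # same value as above, about 0.4667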
New in version 0.5.2.
Methods
compute() – Computes the metric based on its accumulated state.
update(output) – Updates the metric's state using the passed batch output.
- compute()
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
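Because compute() works on the state accumulated since the last reset(), the data can also be fed in several update() calls before computing once. A minimal sketch of that batch-wise workflow, reusing the sample from the example above:

import torch
from ignite.metrics.regression import KendallRankCorrelation

metric = KendallRankCorrelation()
metric.reset()

# Feed the same sample in two batches; the state accumulates across update() calls.
metric.update((torch.tensor([0.5, 2.8, 1.9]), torch.tensor([0., 1., 2.])))
metric.update((torch.tensor([1.3, 6.0, 4.1]), torch.tensor([3., 4., 5.])))

# compute() uses everything seen since the last reset().
print(metric.compute())  # expected to match the single-batch example above (about 0.4667)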