SilhouetteScore
class ignite.metrics.clustering.SilhouetteScore(output_transform=<function SilhouetteScore.<lambda>>, check_compute_fn=True, device=device(type='cpu'), skip_unrolling=False, silhouette_kwargs=None)
Calculates the silhouette score.
The silhouette score evaluates the quality of clustering results.
For a single sample, the score is

$$ s = \frac{b - a}{\max(a, b)} $$

where:

- $a$ is the mean distance between a sample and all other points in the same cluster.
- $b$ is the mean distance between a sample and all other points in the next nearest cluster.

The reported metric is the mean of $s$ over all samples.
More details can be found in the Wikipedia article on silhouette (clustering): https://en.wikipedia.org/wiki/Silhouette_(clustering)
The silhouette score ranges from -1 to +1, where the score becomes close to +1 when the clustering result is good (i.e., clusters are well-separated).
The computation of this metric is implemented with sklearn.metrics.silhouette_score.
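Because the metric defers to scikit-learn, its result can be reproduced with a direct call. Below is a minimal standalone sketch (not part of the original docs; the toy data is invented) showing how the per-sample formula above relates to the reported score:

import numpy as np
from sklearn.metrics import silhouette_samples, silhouette_score

# Two well-separated toy clusters (invented data for illustration).
X = np.array([[0.0, 0.0], [0.1, 0.2], [4.0, 4.0], [4.2, 3.9]])
labels = np.array([0, 0, 1, 1])

# silhouette_samples returns the per-sample values s = (b - a) / max(a, b);
# silhouette_score is their mean over all samples.
per_sample = silhouette_samples(X, labels)
print(per_sample.mean())            # close to +1: clusters are well separated
print(silhouette_score(X, labels))  # same value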
update must receive output of the form (features, labels) or {'features': features, 'labels': labels}. features and labels must be of shape (B, D) and (B,) respectively; for instance, a batch of B=10 samples embedded in D=5 dimensions gives features of shape (10, 5) and labels of shape (10,), as in the example below.
Parameters are inherited from EpochMetric.__init__.

Parameters

- output_transform (Callable[[...], Any]) – a callable that is used to transform the Engine's process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. By default, metrics require the output as (features, labels) or {'features': features, 'labels': labels}.
- check_compute_fn (bool) – if True, compute_fn is run on the first batch of data to ensure there are no issues. If issues exist, the user is warned that there might be a problem with the compute_fn. Default: True.
- device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric's device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
- skip_unrolling (bool) – specifies whether the output should be unrolled before being fed to the update method. Should be true for multi-output models, for example, if y_pred contains multi-output as (y_pred_a, y_pred_b). Alternatively, output_transform can be used to handle this.
- silhouette_kwargs (Optional[dict]) – additional arguments passed to sklearn.metrics.silhouette_score; see the sketch after this list.
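As an illustration of silhouette_kwargs, the sketch below switches the distance metric to cosine. It relies only on the documented behavior that the dict is forwarded to sklearn.metrics.silhouette_score (whose signature accepts a metric argument); the choice of cosine is an example, not taken from the original docs.

from ignite.metrics.clustering import SilhouetteScore

# Forwarded to sklearn.metrics.silhouette_score, so any of its keyword
# arguments (e.g. metric, sample_size, random_state) can be supplied here.
cosine_silhouette = SilhouetteScore(silhouette_kwargs={"metric": "cosine"})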
Examples
To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine's process_function needs to be in the format of (features, labels) or {'features': features, 'labels': labels, ...}.
from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.clustering import *
from ignite.metrics.regression import *
from ignite.utils import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():
    def train_step(engine, batch):
        return batch
    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = SilhouetteScore()
metric.attach(default_evaluator, "silhouette_score")

X = torch.tensor([
    [-1.04, -0.71, -1.42, -0.28, -0.43],
    [0.47, 0.96, -0.43, 1.57, -2.24],
    [-0.62, -0.29, 0.10, -0.72, -1.69],
    [0.96, -0.77, 0.60, -0.89, 0.49],
    [-1.33, -1.53, 0.25, -1.60, -2.0],
    [-0.63, -0.55, -1.03, -0.89, -0.77],
    [-0.26, -1.67, -0.24, -1.33, -0.40],
    [-0.20, -1.34, -0.52, -1.55, -1.50],
    [2.68, 1.13, 2.51, 0.80, 0.92],
    [0.33, 2.88, 1.35, -0.56, 1.71]
])
Y = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2])

state = default_evaluator.run([{"features": X, "labels": Y}])
print(state.metrics["silhouette_score"])
0.12607366
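If the engine's output does not already match the expected format, output_transform can adapt it. A hedged sketch reusing X and Y from above; the dict keys "embeddings" and "cluster_ids" and the fresh engine are hypothetical, not part of the original example:

# Hypothetical process_function output: a dict with differently named keys.
def extract_features_labels(output):
    # Pick out the two tensors the metric expects: (features, labels).
    return output["embeddings"], output["cluster_ids"]

# A fresh engine, so the metric attached above does not also run on this output.
evaluator = Engine(lambda engine, batch: batch)

metric = SilhouetteScore(output_transform=extract_features_labels)
metric.attach(evaluator, "silhouette_score")
state = evaluator.run([{"embeddings": X, "cluster_ids": Y}])
print(state.metrics["silhouette_score"])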
New in version 0.5.2.
Methods