InceptionScore#
- class ignite.metrics.InceptionScore(num_features=None, feature_extractor=None, output_transform=<function InceptionScore.<lambda>>, device=device(type='cpu'))[source]#
Calculates Inception Score.

\text{IS}(G) = \exp\left(\frac{1}{N}\sum_{i=1}^{N} D_{KL}\big(p(y \mid x^{(i)}) \,\|\, p(y)\big)\right)

where p(y | x) is the conditional probability of image x being the given object, p(y) is the marginal probability that the given image is real, G refers to the generated image, and D_{KL} refers to the KL divergence of the above mentioned probabilities.
More details can be found in Barratt et al. 2018.
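To make the formula concrete, here is a minimal sketch of the quantity it describes, computed with plain torch ops from one batch of predicted class probabilities. This mirrors the math only; it is not ignite's internal implementation, which accumulates state across batches:

import torch

probs = torch.softmax(torch.randn(10, 1000), dim=1)  # p(y|x) for N=10 samples
p_y = probs.mean(dim=0)                              # marginal p(y)
kl = (probs * (probs.log() - p_y.log())).sum(dim=1)  # per-sample D_KL(p(y|x) || p(y))
inception_score = kl.mean().exp()                    # IS(G)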
- Parameters
num_features (Optional[int]) – number of features predicted by the model, or the number of classes of the model. Default value is 1000.
feature_extractor (Optional[Module]) – a torch Module that predicts probabilities from the input data, returning a tensor of shape (batch_size, num_features). If neither num_features nor feature_extractor is defined, an ImageNet-pretrained Inception model is used by default. If only num_features is defined but feature_extractor is not, feature_extractor is assigned the identity function. Please note that the module is implicitly moved to the device given in the device argument.
output_transform (Callable) – a callable that is used to transform the Engine's process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs, as in the sketch after this list. By default, the metric expects the output as y_pred.
device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric's device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
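A minimal sketch of the output_transform case mentioned above, assuming a hypothetical evaluation step that returns a (y_pred, y) pair:

from ignite.metrics import InceptionScore

# Hypothetical: the evaluation step returns (y_pred, y); select y_pred for the metric.
metric = InceptionScore(output_transform=lambda output: output[0])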
Note
The default Inception model requires the torchvision module to be installed.
Examples
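The snippets below rely on a default_evaluator Engine and, for the second one, a default_model; neither is defined on this page. A minimal sketch of such fixtures, under the assumption that the evaluation step simply forwards each batch as y_pred:

import torch
from torch import nn
from ignite.engine import Engine
from ignite.metrics import InceptionScore

def eval_step(engine, batch):
    # Forward the batch unchanged so the metric receives it as y_pred.
    return batch

default_evaluator = Engine(eval_step)

# Hypothetical stand-in extractor: maps (batch_size, 4) inputs to one
# probability per sample, so the second snippet below prints 1.0.
default_model = nn.Sequential(nn.Linear(4, 1), nn.Softmax(dim=1))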
metric = InceptionScore()
metric.attach(default_evaluator, "is")
y = torch.rand(10, 3, 299, 299)
state = default_evaluator.run([y])
print(state.metrics["is"])

metric = InceptionScore(num_features=1, feature_extractor=default_model)
metric.attach(default_evaluator, "is")
y = torch.zeros(10, 4)
state = default_evaluator.run([y])
print(state.metrics["is"])

1.0
New in version 0.4.6.
Methods
compute() – Computes the metric based on its accumulated state.
reset() – Resets the metric to its initial state.
update(output) – Updates the metric's state using the passed batch output.
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
- the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
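Outside of an Engine, the lifecycle the methods above describe can also be driven by hand; a minimal sketch (the default model requires torchvision, per the note above):

import torch
from ignite.metrics import InceptionScore

metric = InceptionScore()                      # default ImageNet-pretrained Inception model
metric.reset()                                 # clear accumulated state
for _ in range(2):
    metric.update(torch.rand(8, 3, 299, 299))  # accumulate probabilities per batch
print(metric.compute())                        # exp of the average KL divergence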