TopKCategoricalAccuracy

class ignite.metrics.TopKCategoricalAccuracy(k=5, output_transform=<function TopKCategoricalAccuracy.<lambda>>, device=device(type='cpu'))

Calculates the top-k categorical accuracy.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.
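
Conceptually, a sample counts as correct when its true class index appears among the k largest predicted scores. A minimal hand-rolled sketch of that computation (illustrative only; this is not Ignite's internal implementation):

import torch

def top_k_accuracy(y_pred, y, k=5):
    # indices of the k largest scores per sample, shape (N, k)
    topk_indices = y_pred.topk(k, dim=1).indices
    # a sample is correct if its label appears among its top-k indices
    correct = (topk_indices == y.unsqueeze(1)).any(dim=1)
    return correct.float().mean().item()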

Parameters
  • k (int) – the k in “top-k”.

  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs; a sketch of such a transform follows this list. By default, metrics require the output as (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.
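
For a hypothetical multi-output model whose process_function returns (y_pred, aux_pred, y), the output_transform might simply select the pair the metric expects (a sketch; the names are illustrative):

from ignite.metrics import TopKCategoricalAccuracy

def select_main_output(output):
    y_pred, aux_pred, y = output  # aux_pred is a hypothetical extra output
    return y_pred, y

metric = TopKCategoricalAccuracy(k=5, output_transform=select_main_output)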

Examples

To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine’s process_function needs to be in the format of (y_pred, y) or {'y_pred': y_pred, 'y': y, ...}. If not, an output_transform can be passed to the metric to transform the output into the form expected by the metric.

import torch
from ignite.engine import Engine
from ignite.metrics import TopKCategoricalAccuracy

def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

def one_hot_to_binary_output_transform(output):
    y_pred, y = output
    y = torch.argmax(y, dim=1)  # one-hot vector to label index vector
    return y_pred, y

engine = Engine(process_function)
metric = TopKCategoricalAccuracy(
    k=2, output_transform=one_hot_to_binary_output_transform)
metric.attach(engine, 'top_k_accuracy')

preds = torch.tensor([
    [0.7, 0.2, 0.05, 0.05],     # 1 is in the top 2
    [0.2, 0.3, 0.4, 0.1],       # 0 is not in the top 2
    [0.4, 0.4, 0.1, 0.1],       # 0 is in the top 2
    [0.7, 0.05, 0.2, 0.05]      # 2 is in the top 2
])
target = torch.tensor([         # targets as one-hot vectors
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0]
])

state = engine.run([[preds, target]])
print(state.metrics['top_k_accuracy'])
0.75
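
The same result can be computed without an Engine by driving the metric API directly with reset, update, and compute (documented below). Reusing preds and target from the example above; update expects y as class indices, hence the argmax:

metric = TopKCategoricalAccuracy(k=2)
metric.reset()
metric.update((preds, torch.argmax(target, dim=1)))  # y as label indices
print(metric.compute())  # 0.75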

Methods

  • compute – Computes the metric based on its accumulated state.

  • reset – Resets the metric to its initial state.

  • update – Updates the metric's state using the passed batch output.

compute()

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.
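
For example, calling compute before any update has been made raises this error (a minimal sketch):

from ignite.exceptions import NotComputableError
from ignite.metrics import TopKCategoricalAccuracy

metric = TopKCategoricalAccuracy(k=2)
try:
    metric.compute()
except NotComputableError:
    print("metric has not accumulated any examples yet")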

reset()

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

update(output)

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Sequence[Tensor]) – the output from the engine’s process function.

Return type

None