MeanAveragePrecision

class ignite.metrics.MeanAveragePrecision(rec_thresholds=None, class_mean='macro', is_multilabel=False, output_transform=<function MeanAveragePrecision.<lambda>>, device=device(type='cpu'), skip_unrolling=False)[source]

Calculate the mean average precision metric, i.e. the mean of the averaged-over-recall precision, for classification tasks:

\text{Average Precision} = \sum_{k=1}^{\#rec\_thresholds} (r_k - r_{k-1}) P_k

Mean average precision attempts to measure the precision of a detector or classifier at various sensitivity levels, a.k.a. recall thresholds. This is done by summing precisions at different recall thresholds, weighted by the change in recall, as if the area under the precision-recall curve were being computed. Mean average precision is then obtained by taking the mean of this average precision over the different classes.
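For intuition, here is a minimal sketch (with made-up recall/precision values, not produced by the metric itself) of how the sum above is evaluated:

    import torch

    # Hypothetical (recall, precision) pairs at increasing recall thresholds.
    recall = torch.tensor([0.2, 0.5, 0.8, 1.0])
    precision = torch.tensor([1.0, 0.8, 0.6, 0.5])

    # AP = sum_k (r_k - r_{k-1}) * P_k, a Riemann-like sum approximating the
    # area under the precision-recall curve; r_0 is taken as 0.
    prev_recall = torch.cat([torch.zeros(1), recall[:-1]])
    average_precision = ((recall - prev_recall) * precision).sum()
    print(average_precision.item())  # 0.2*1.0 + 0.3*0.8 + 0.3*0.6 + 0.2*0.5 = 0.72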

Binary, multiclass, and multilabel data are all supported. In the multilabel case, is_multilabel should be set to true.

The mean in mean average precision refers to the mean of the average precision across classes; class_mean determines how this mean is taken.

Parameters
  • rec_thresholds (Optional[Union[Sequence[float], Tensor]]) – recall thresholds (sensitivity levels) to be considered for computing Mean Average Precision. It could be a 1-dim tensor or a sequence of floats. Its values should be between 0 and 1 and do not need to be sorted. If missing, thresholds are determined automatically from the data.

  • class_mean (Optional[Literal['micro', 'macro', 'weighted']]) –

    how to compute the mean of the average precision across classes, or how to incorporate the class dimension into the precision computation. It is ignored in binary classification. Available options are listed below; see the sketch after this parameter list.

    None

    A 1-dimensional tensor of the per-class average precision (mean taken across additional mean dimensions) is returned. If there is no ground-truth sample for a class, 0 is returned for that class.

    ’micro’

    Precision is computed by counting the stats of all classes/labels together. This option incorporates the class dimension into the precision measurement itself.

    \text{Micro P} = \frac{\sum_{c=1}^C TP_c}{\sum_{c=1}^C TP_c + FP_c}

    where C is the number of classes/labels. The subscript c in TP_c and FP_c indicates that the term is computed for class/label c (in a one-vs-rest sense in the multiclass case).

    For multiclass inputs, this is equivalent to mean average accuracy.

    ’weighted’

    Like 'macro', but accounts for class/label imbalance. For multiclass input, it computes the AP for each class, then returns their mean weighted by the support of each class (the number of actual samples in that class). For multilabel input, it computes the AP for each label, then returns their mean weighted by the support of each label (the number of actual positive samples for that label).

    ’macro’

    Computes macro precision, i.e. the unweighted mean of the AP values computed across classes/labels. Default.

  • is_multilabel (bool) – determines whether the data is multilabel. Default False.

  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. This metric requires the output as (y_pred, y).

  • device (Union[str, device]) – specifies which device updates are accumulated on. Setting the metric’s device to be the same as your update arguments ensures the update method is non-blocking. By default, CPU.

  • skip_unrolling (bool) – specifies whether the output should be unrolled before being fed to the update method. Should be true for a multi-output model, for example, if y_pred and y contain multi-output as (y_pred_a, y_pred_b) and (y_a, y_b), in which case the update method is called for (y_pred_a, y_a) and (y_pred_b, y_b). Alternatively, output_transform can be used to handle this.
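As an illustrative sketch of how class_mean (and rec_thresholds) affect the result, here is a standalone multiclass example; the input tensors are arbitrary, and the shapes follow the Precision rules (scores of shape (N, C), integer labels of shape (N,)):

    import torch
    from ignite.metrics import MeanAveragePrecision

    # Arbitrary multiclass batch: 8 samples, 3 classes.
    y_pred = torch.softmax(torch.randn(8, 3), dim=1)
    y = torch.randint(0, 3, (8,))

    # class_mean=None returns a 1-dim tensor with one AP value per class,
    # here with explicitly chosen recall thresholds.
    per_class = MeanAveragePrecision(class_mean=None,
                                     rec_thresholds=torch.linspace(0, 1, 11))
    per_class.update((y_pred, y))
    print(per_class.compute())  # tensor of shape (3,)

    # The default, class_mean='macro', averages the per-class APs into a scalar.
    macro = MeanAveragePrecision()
    macro.update((y_pred, y))
    print(macro.compute())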

New in version 0.5.2.

Methods

compute

Compute method of the metric

reset

Reset method of the metric

update

Metric update function using prediction and target.

compute()[source]

Compute method of the metric

Return type

Union[Tensor, float]

reset()[source]

Reset method of the metric

Return type

None

update(output)[source]

Metric update function using prediction and target.

Parameters

output (Tuple[Tensor, Tensor]) –

a tuple of prediction and target tensors, (y_pred, y)

This metric follows the same rules on the shape of the output members as Precision.update, except that for binary and multilabel data, y_pred should consist of positive-class probabilities here. See the sketch at the end of this section.

Return type

None
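A minimal sketch of the update/compute/reset cycle for binary data, with illustrative tensors:

    import torch
    from ignite.metrics import MeanAveragePrecision

    metric = MeanAveragePrecision()

    # For binary data, y_pred holds positive-class probabilities (not logits
    # or two-column scores) and y holds the 0/1 ground-truth labels.
    y_pred = torch.tensor([0.9, 0.2, 0.7, 0.4])
    y = torch.tensor([1, 0, 1, 1])

    metric.update((y_pred, y))
    print(metric.compute())  # average precision accumulated so far

    metric.reset()  # clears accumulated state before the next evaluation run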