
ignite.contrib.metrics#

Contribution module of metrics

AveragePrecision

Computes Average Precision by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.average_precision_score.

GpuInfo

Provides GPU information as metrics on each iteration: a) used memory percentage, b) GPU utilization percentage.

PrecisionRecallCurve

Computes precision-recall pairs for different probability thresholds for a binary classification task by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.precision_recall_curve.

ROC_AUC

Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC) by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.roc_auc_score.

RocCurve

Computes the Receiver Operating Characteristic (ROC) curve for a binary classification task by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.roc_curve.

class ignite.contrib.metrics.AveragePrecision(output_transform=<function AveragePrecision.<lambda>>, check_compute_fn=False)[source]#

Computes Average Precision by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.average_precision_score.

Parameters
  • output_transform (callable, optional) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.

  • check_compute_fn (bool) – Default False. If True, average_precision_score is run on the first batch of data to ensure there are no issues. User will be warned in case there are any issues computing the function.

AveragePrecision expects y to be comprised of 0’s and 1’s. y_pred must either be probability estimates or confidence values. To apply an activation to y_pred, use output_transform as shown below:

import torch
from ignite.contrib.metrics import AveragePrecision

def activated_output_transform(output):
    y_pred, y = output
    y_pred = torch.softmax(y_pred, dim=1)  # turn raw logits into probabilities
    return y_pred, y

avg_precision = AveragePrecision(activated_output_transform)
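
To accumulate over a whole validation epoch, the metric is typically attached to an evaluation engine. A minimal sketch, assuming a trained model and a val_loader (both placeholders, not part of this API):

from ignite.engine import create_supervised_evaluator

# `model` and `val_loader` are placeholders for your own model and data loader
evaluator = create_supervised_evaluator(model, metrics={"ap": avg_precision})
state = evaluator.run(val_loader)
print(state.metrics["ap"])  # average precision accumulated over the whole epoch
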
class ignite.contrib.metrics.GpuInfo[source]#

Provides GPU information as metrics on each iteration: a) used memory percentage, b) GPU utilization percentage.

Note

If GPU utilization reports “N/A” for a given GPU, the corresponding metric value is not set.

Examples

# Default GPU measurements
GpuInfo().attach(trainer, name='gpu')  # metric names are 'gpu:X mem(%)', 'gpu:X util(%)'

# Logging with TQDM
ProgressBar(persist=True).attach(trainer, metric_names=['gpu:0 mem(%)', 'gpu:0 util(%)'])
# Progress bar will look like
# Epoch [2/10]: [12/24]  50%|█████      , gpu:0 mem(%)=79, gpu:0 util(%)=59 [00:17<1:23]

# Logging with Tensorboard
tb_logger.attach(trainer,
                 log_handler=OutputHandler(tag="training", metric_names='all'),
                 event_name=Events.ITERATION_COMPLETED)
attach(engine, name='gpu', event_name=Events.ITERATION_COMPLETED)[source]#

Attaches the current metric to the provided engine. At the end of the engine’s run, the engine.state.metrics dictionary will contain the computed metric’s value under the provided name.

Parameters
  • engine (Engine) – the engine to which the metric must be attached.

  • name (str) – base name for the metrics; values are stored as 'name:X mem(%)' and 'name:X util(%)'. Default, 'gpu'.

  • event_name – the event on which metric values are updated. Default, Events.ITERATION_COMPLETED.

Return type

None

Example:

metric = ...
metric.attach(engine, "mymetric")

assert "mymetric" in engine.run(data).metrics

assert metric.is_attached(engine)

Example with usage:

metric = ...
metric.attach(engine, "mymetric", usage=BatchWise.usage_name)

assert "mymetric" in engine.run(data).metrics

assert metric.is_attached(engine, usage=BatchWise.usage_name)
completed(engine, name)[source]#

Helper method to compute the metric’s value and put it into engine.state.metrics. It is automatically attached to the engine with attach().

Parameters
  • engine (Engine) – the engine to which the metric must be attached

  • name (str) – the name of the metric used as key in dict engine.state.metrics

Return type

None

Changed in version 0.4.3: Added dict in metrics results.

compute()[source]#

Computes the metric based on its accumulated state.

By default, this is called at the end of each epoch.

Returns

the actual quantity of interest. However, if a Mapping is returned, it will be (shallow) flattened into engine.state.metrics when completed() is called.

Return type

Any

Raises

NotComputableError – raised when the metric cannot be computed.

reset()[source]#

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

update(output)[source]#

Updates the metric’s state using the passed batch output.

By default, this is called once for each batch.

Parameters

output (Tuple[Tensor, Tensor]) – the output from the engine’s process function.

Return type

None
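
These three methods are normally driven by the engine the metric is attached to, but they can also be called by hand. A minimal engine-free sketch of the reset / update / compute cycle, using ROC_AUC (documented below) with made-up tensors:

import torch
from ignite.contrib.metrics import ROC_AUC

metric = ROC_AUC()
metric.reset()                                                   # start of an "epoch"
metric.update((torch.tensor([0.2, 0.8]), torch.tensor([0, 1])))  # one batch of (y_pred, y)
metric.update((torch.tensor([0.1, 0.9]), torch.tensor([0, 1])))  # another batch
print(metric.compute())  # roc_auc_score over everything accumulated so far -> 1.0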

class ignite.contrib.metrics.PrecisionRecallCurve(output_transform=<function PrecisionRecallCurve.<lambda>>, check_compute_fn=False)[source]#

Computes precision-recall pairs for different probability thresholds for a binary classification task by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.precision_recall_curve.

Parameters
  • output_transform (callable, optional) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.

  • check_compute_fn (bool) – Default False. If True, precision_recall_curve is run on the first batch of data to ensure there are no issues. User will be warned in case there are any issues computing the function.

PrecisionRecallCurve expects y to be comprised of 0’s and 1’s. y_pred must either be probability estimates or confidence values. To apply an activation to y_pred, use output_transform as shown below:

def activated_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    return y_pred, y

precision_recall_curve = PrecisionRecallCurve(activated_output_transform)
class ignite.contrib.metrics.ROC_AUC(output_transform=<function ROC_AUC.<lambda>>, check_compute_fn=False)[source]#

Computes Area Under the Receiver Operating Characteristic Curve (ROC AUC) by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.roc_auc_score.

Parameters
  • output_transform (callable, optional) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.

  • check_compute_fn (bool) – Default False. If True, roc_auc_score is run on the first batch of data to ensure there are no issues. User will be warned in case there are any issues computing the function.

ROC_AUC expects y to be comprised of 0’s and 1’s. y_pred must either be probability estimates or confidence values. To apply an activation to y_pred, use output_transform as shown below:

def activated_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    return y_pred, y

roc_auc = ROC_AUC(activated_output_transform)
class ignite.contrib.metrics.RocCurve(output_transform=<function RocCurve.<lambda>>, check_compute_fn=False)[source]#

Computes the Receiver Operating Characteristic (ROC) curve for a binary classification task by accumulating predictions and the ground truth during an epoch and applying sklearn.metrics.roc_curve.

Parameters
  • output_transform (callable, optional) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs.

  • check_compute_fn (bool) – Default False. If True, sklearn.metrics.roc_curve is run on the first batch of data to ensure there are no issues. User will be warned in case there are any issues computing the function.

RocCurve expects y to be comprised of 0’s and 1’s. y_pred must either be probability estimates or confidence values. To apply an activation to y_pred, use output_transform as shown below:

def activated_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    return y_pred, y

roc_curve = RocCurve(activated_output_transform)
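
Note that, unlike ROC_AUC, the value returned by compute() here is not a scalar: it is whatever sklearn.metrics.roc_curve returns, i.e. the (fpr, tpr, thresholds) arrays. A minimal engine-free sketch with made-up tensors (single-process usage assumed):

import torch
from ignite.contrib.metrics import RocCurve

metric = RocCurve()
metric.reset()
metric.update((torch.tensor([0.1, 0.4, 0.35, 0.8]), torch.tensor([0, 0, 1, 1])))  # (y_pred, y)
fpr, tpr, thresholds = metric.compute()  # arrays from sklearn.metrics.roc_curve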

Regression metrics#

Module ignite.contrib.metrics.regression provides implementations of metrics useful for regression tasks. Definitions of metrics are based on Botchkarev 2018, page 30 “Appendix 2. Metrics mathematical definitions”.

Complete list of metrics:

CanberraMetric

Calculates the Canberra Metric.

FractionalAbsoluteError

Calculates the Fractional Absolute Error.

FractionalBias

Calculates the Fractional Bias.

GeometricMeanAbsoluteError

Calculates the Geometric Mean Absolute Error.

GeometricMeanRelativeAbsoluteError

Calculates the Geometric Mean Relative Absolute Error.

ManhattanDistance

Calculates the Manhattan Distance.

MaximumAbsoluteError

Calculates the Maximum Absolute Error.

MeanAbsoluteRelativeError

Calculates the Mean Absolute Relative Error.

MeanError

Calculates the Mean Error.

MeanNormalizedBias

Calculates the Mean Normalized Bias.

MedianAbsoluteError

Calculates the Median Absolute Error.

MedianAbsolutePercentageError

Calculates the Median Absolute Percentage Error.

MedianRelativeAbsoluteError

Calculates the Median Relative Absolute Error.

R2Score

Calculates the R-Squared, the coefficient of determination.

WaveHedgesDistance

Calculates the Wave Hedges Distance.

class ignite.contrib.metrics.regression.CanberraMetric(output_transform=<function CanberraMetric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Canberra Metric.

\text{CM} = \sum_{j=1}^n \frac{|A_j - P_j|}{|A_j| + |P_j|}

where A_j is the ground truth and P_j is the predicted value.

More details can be found in Botchkarev 2018 or scikit-learn distance metrics.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Changed in version 0.4.3:

  • Fixed implementation: abs in denominator.

  • Works with DDP.

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

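All metrics in this module follow the same reset / update / compute pattern; a minimal engine-free sketch using CanberraMetric with made-up tensors:

import torch
from ignite.contrib.metrics.regression import CanberraMetric

metric = CanberraMetric()
metric.reset()
metric.update((torch.tensor([2.0, 3.0]), torch.tensor([1.0, 3.0])))  # (y_pred, y), both of shape (N,)
print(metric.compute())  # |1-2|/(|1|+|2|) + |3-3|/(|3|+|3|) = 1/3 ≈ 0.333
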
class ignite.contrib.metrics.regression.FractionalAbsoluteError(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Fractional Absolute Error.

\text{FAE} = \frac{1}{n}\sum_{j=1}^n \frac{2 |A_j - P_j|}{|A_j| + |P_j|}

where A_j is the ground truth and P_j is the predicted value.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.FractionalBias(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Fractional Bias.

\text{FB} = \frac{1}{n}\sum_{j=1}^n \frac{2 (A_j - P_j)}{A_j + P_j}

where A_j is the ground truth and P_j is the predicted value.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.GeometricMeanAbsoluteError(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Geometric Mean Absolute Error.

\text{GMAE} = \exp\left(\frac{1}{n}\sum_{j=1}^n \ln(|A_j - P_j|)\right)

where A_j is the ground truth and P_j is the predicted value.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.GeometricMeanRelativeAbsoluteError(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Geometric Mean Relative Absolute Error.

\text{GMRAE} = \exp\left(\frac{1}{n}\sum_{j=1}^n \ln\frac{|A_j - P_j|}{|A_j - \bar{A}|}\right)

where A_j is the ground truth, P_j is the predicted value and \bar{A} is the mean of the ground truth.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.ManhattanDistance(output_transform=<function ManhattanDistance.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Manhattan Distance.

\text{MD} = \sum_{j=1}^n |A_j - P_j|

where A_j is the ground truth and P_j is the predicted value.

More details can be found in scikit-learn distance metrics.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Changed in version 0.4.3:

  • Fixed sklearn compatibility.

  • Works with DDP.

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.MaximumAbsoluteError(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Maximum Absolute Error.

\text{MaxAE} = \max_{j=1,n} \left( |A_j - P_j| \right)

where A_j is the ground truth and P_j is the predicted value.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.MeanAbsoluteRelativeError(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Mean Absolute Relative Error.

\text{MARE} = \frac{1}{n}\sum_{j=1}^n \frac{|A_j - P_j|}{|A_j|}

where A_j is the ground truth and P_j is the predicted value.

More details can be found in the reference Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.MeanError(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Mean Error.

\text{ME} = \frac{1}{n}\sum_{j=1}^n (A_j - P_j)

where A_j is the ground truth and P_j is the predicted value.

More details can be found in the reference Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.MeanNormalizedBias(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Mean Normalized Bias.

\text{MNB} = \frac{1}{n}\sum_{j=1}^n \frac{A_j - P_j}{A_j}

where A_j is the ground truth and P_j is the predicted value.

More details can be found in the reference Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

class ignite.contrib.metrics.regression.MedianAbsoluteError(output_transform=<function MedianAbsoluteError.<lambda>>)[source]#

Calculates the Median Absolute Error.

\text{MdAE} = \text{MD}_{j=1,n} \left( |A_j - P_j| \right)

where A_j is the ground truth, P_j is the predicted value and MD denotes the median over j = 1, …, n.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1) and of type float32.

Warning

Current implementation stores all input data (output and target) as tensors before computing the metric. This can potentially lead to a memory error if the input data is larger than available RAM.

Parameters

output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

class ignite.contrib.metrics.regression.MedianAbsolutePercentageError(output_transform=<function MedianAbsolutePercentageError.<lambda>>)[source]#

Calculates the Median Absolute Percentage Error.

\text{MdAPE} = 100 \cdot \text{MD}_{j=1,n} \left( \frac{|A_j - P_j|}{|A_j|} \right)

where A_j is the ground truth, P_j is the predicted value and MD denotes the median over j = 1, …, n.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1) and of type float32.

Warning

Current implementation stores all input data (output and target) as tensors before computing the metric. This can potentially lead to a memory error if the input data is larger than available RAM.

Parameters

output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

class ignite.contrib.metrics.regression.MedianRelativeAbsoluteError(output_transform=<function MedianRelativeAbsoluteError.<lambda>>)[source]#

Calculates the Median Relative Absolute Error.

\text{MdRAE} = \text{MD}_{j=1,n} \left( \frac{|A_j - P_j|}{|A_j - \bar{A}|} \right)

where A_j is the ground truth, P_j is the predicted value, \bar{A} is the mean of the ground truth and MD denotes the median over j = 1, …, n.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1) and of type float32.

Warning

Current implementation stores all input data (output and target) as tensors before computing the metric. This can potentially lead to a memory error if the input data is larger than available RAM.

Parameters

output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

class ignite.contrib.metrics.regression.R2Score(output_transform=<function R2Score.<lambda>>, device=device(type='cpu'))[source]#

Calculates the R-Squared, the coefficient of determination.

R^2 = 1 - \frac{\sum_{j=1}^n (A_j - P_j)^2}{\sum_{j=1}^n (A_j - \bar{A})^2}

where A_j is the ground truth, P_j is the predicted value and \bar{A} is the mean of the ground truth.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1) and of type float32.

Changed in version 0.4.3: Works with DDP.

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.

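A minimal engine-free sketch with made-up tensors, just to make the formula concrete:

import torch
from ignite.contrib.metrics.regression import R2Score

metric = R2Score()
metric.reset()
metric.update((torch.tensor([2.0, 3.0, 4.0]), torch.tensor([1.0, 3.0, 5.0])))  # (y_pred, y)
print(metric.compute())  # 1 - ((1-2)^2 + (3-3)^2 + (5-4)^2) / ((1-3)^2 + (3-3)^2 + (5-3)^2) = 1 - 2/8 = 0.75
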
class ignite.contrib.metrics.regression.WaveHedgesDistance(output_transform=<function Metric.<lambda>>, device=device(type='cpu'))[source]#

Calculates the Wave Hedges Distance.

\text{WHD} = \sum_{j=1}^n \frac{|A_j - P_j|}{\max(A_j, P_j)}

where A_j is the ground truth and P_j is the predicted value.

More details can be found in Botchkarev 2018.

  • update must receive output of the form (y_pred, y) or {'y_pred': y_pred, 'y': y}.

  • y and y_pred must be of same shape (N, ) or (N, 1).

Parameters
  • output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric.

  • device (Union[str, torch.device]) – specifies which device updates are accumulated on. Default, CPU.