Functional Metrics

Aggregation Metrics

auc

Computes Area Under the Curve (AUC) using the trapezoidal rule.
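A minimal usage sketch, assuming the torcheval.metrics.functional import path and an (x, y) argument order; the tensors and the expectation of sorted x values are assumptions for illustration:

>>> import torch
>>> from torcheval.metrics.functional import auc
>>> x = torch.tensor([0.0, 0.25, 0.5, 1.0])  # x-coordinates, assumed already sorted
>>> y = torch.tensor([1.0, 1.0, 1.0, 1.0])   # constant height of 1, so the trapezoidal area is 1.0
>>> area = auc(x, y)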

mean

Compute weighted mean.

sum

Compute weighted sum.

throughput

Calculate the throughput, which is the number of elements processed per second.
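The aggregation helpers above can be exercised together. A minimal sketch, assuming the torcheval.metrics.functional module path; the positional order for throughput (elements processed, then elapsed seconds) is an assumption:

>>> import torch
>>> from torcheval.metrics import functional as F
>>> values = torch.tensor([1.0, 2.0, 3.0, 4.0])
>>> m = F.mean(values)         # unweighted mean of the inputs
>>> s = F.sum(values)          # unweighted sum of the inputs
>>> t = F.throughput(64, 2.0)  # assumed: 64 elements processed in 2 seconds -> 32 elements/sec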

Classification Metrics

binary_accuracy

Compute binary accuracy score, which is the frequency of input matching target.
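A minimal usage sketch, assuming the torcheval.metrics.functional import path; the example tensors are illustrative:

>>> import torch
>>> from torcheval.metrics.functional import binary_accuracy
>>> preds = torch.tensor([0, 1, 1, 0])
>>> labels = torch.tensor([0, 1, 0, 1])
>>> acc = binary_accuracy(preds, labels)  # 2 of 4 predictions match the target -> 0.5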

binary_auprc

Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for binary classification.

binary_auroc

Compute AUROC, which is the area under the ROC Curve, for binary classification.
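A minimal usage sketch, assuming the torcheval.metrics.functional import path, with predicted scores as the first argument and binary targets as the second:

>>> import torch
>>> from torcheval.metrics.functional import binary_auroc
>>> scores = torch.tensor([0.1, 0.4, 0.6, 0.8])  # predicted probabilities
>>> labels = torch.tensor([0, 0, 1, 1])          # ground-truth binary labels
>>> auroc = binary_auroc(scores, labels)         # positives are perfectly ranked above negatives -> 1.0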

binary_binned_auroc

Compute AUROC, which is the area under the ROC Curve, for binary classification, using binned thresholds.

binary_binned_precision_recall_curve

Compute the precision-recall curve for binary classification using the given thresholds.

binary_confusion_matrix

Compute binary confusion matrix, a 2-by-2 tensor with counts ((true positive, false negative), (false positive, true negative)).
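A minimal usage sketch, assuming the torcheval.metrics.functional import path; the example labels are illustrative:

>>> import torch
>>> from torcheval.metrics.functional import binary_confusion_matrix
>>> preds = torch.tensor([0, 1, 1, 0])
>>> labels = torch.tensor([0, 1, 0, 1])
>>> cm = binary_confusion_matrix(preds, labels)  # 2-by-2 tensor of counts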

binary_f1_score

Compute the binary F1 score, which is the harmonic mean of precision and recall.
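A minimal usage sketch, assuming the torcheval.metrics.functional import path; the example tensors are illustrative:

>>> import torch
>>> from torcheval.metrics.functional import binary_f1_score
>>> preds = torch.tensor([0, 1, 1, 0, 1])
>>> labels = torch.tensor([0, 1, 0, 0, 1])
>>> f1 = binary_f1_score(preds, labels)  # harmonic mean of precision and recall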

binary_normalized_entropy

Compute the normalized binary cross entropy between predicted input and ground-truth binary target.

binary_precision

Compute precision score for binary classification class, which is calculated as the ratio between the number of true positives (TP) and the total number of predicted positives (TP + FP).

binary_precision_recall_curve

Returns precision-recall pairs and their corresponding thresholds for binary classification tasks.
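A minimal usage sketch, assuming the torcheval.metrics.functional import path and a three-tuple return of (precision, recall, thresholds):

>>> import torch
>>> from torcheval.metrics.functional import binary_precision_recall_curve
>>> scores = torch.tensor([0.1, 0.4, 0.6, 0.8])  # predicted probabilities
>>> labels = torch.tensor([0, 0, 1, 1])
>>> precision, recall, thresholds = binary_precision_recall_curve(scores, labels)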

binary_recall

Compute recall score for binary classification class, which is calculated as the ratio between the number of true positives (TP) and the total number of actual positives (TP + FN).

binary_recall_at_fixed_precision

Returns the highest possible recall value given the minimum precision for binary classification tasks.

multiclass_accuracy

Compute accuracy score, which is the frequency of input matching target.
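A minimal usage sketch, assuming the torcheval.metrics.functional import path and that class-index predictions are accepted directly:

>>> import torch
>>> from torcheval.metrics.functional import multiclass_accuracy
>>> preds = torch.tensor([0, 2, 1, 3])   # predicted class indices
>>> labels = torch.tensor([0, 1, 1, 3])
>>> acc = multiclass_accuracy(preds, labels)  # 3 of 4 predictions correct -> 0.75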

multiclass_auprc

Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for multiclass classification.

multiclass_auroc

Compute AUROC, which is the area under the ROC Curve, for multiclass classification.

multiclass_binned_auroc

Compute AUROC, which is the area under the ROC Curve, for multiclass classification, using binned thresholds.

multiclass_binned_precision_recall_curve

Compute the precision-recall curve for multiclass classification using the given thresholds.

multiclass_confusion_matrix

Compute multi-class confusion matrix, a matrix of dimension num_classes x num_classes where each element at position (i,j) is the number of examples with true class i that were predicted to be class j.
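A minimal usage sketch, assuming the torcheval.metrics.functional import path and a num_classes argument; the example tensors are illustrative:

>>> import torch
>>> from torcheval.metrics.functional import multiclass_confusion_matrix
>>> preds = torch.tensor([0, 2, 1, 1])
>>> labels = torch.tensor([0, 1, 1, 2])
>>> cm = multiclass_confusion_matrix(preds, labels, num_classes=3)  # 3 x 3 matrix of counts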

multiclass_f1_score

Compute the F1 score, which is defined as the harmonic mean of precision and recall.

multiclass_precision

Compute precision score, which is the ratio between the number of true positives (TP) and the total number of points classified as positive (TP + FP).

multiclass_precision_recall_curve

Returns precision-recall pairs and their corresponding thresholds for multi-class classification tasks.

multiclass_recall

Compute recall score, which is calculated as the ratio between the number of true positives (TP) and the total number of actual positives (TP + FN).

multilabel_accuracy

Compute multilabel accuracy score, which is the frequency of input matching target.

multilabel_auprc

Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for multilabel classification.

multilabel_precision_recall_curve

Returns precision-recall pairs and their corresponding thresholds for multi-label classification tasks.

multilabel_recall_at_fixed_precision

Returns the highest possible recall value given the minimum precision for each label, along with the corresponding thresholds, for multi-label classification tasks.

topk_multilabel_accuracy

Compute the multilabel accuracy score, which is the frequency of the top-k predicted labels matching the target.

Ranking Metrics

click_through_rate

Compute the click-through rate given click events.

frequency_at_k

Calculate the frequency given a list of frequencies and a threshold k.

hit_rate

Compute the hit rate of the correct class among the top predicted classes.

num_collisions

Compute the number of collisions given a list of input IDs.

reciprocal_rank

Compute the reciprocal rank of the correct class among the top predicted classes.

weighted_calibration

Compute weighted calibration metric.

Regression Metrics

mean_squared_error

Compute Mean Squared Error, which is the mean of the squared error between input and target. Its class version is torcheval.metrics.MeanSquaredError.
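A minimal usage sketch, assuming the torcheval.metrics.functional import path; the example tensors are illustrative:

>>> import torch
>>> from torcheval.metrics.functional import mean_squared_error
>>> preds = torch.tensor([0.9, 0.5, 0.3, 0.5])
>>> labels = torch.tensor([0.5, 0.8, 0.2, 0.8])
>>> mse = mean_squared_error(preds, labels)  # mean of the element-wise squared errors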

r2_score

Compute R-squared score, which is the proportion of variance in the dependent variable that can be explained by the independent variable.
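A minimal usage sketch, assuming the torcheval.metrics.functional import path; the example tensors are illustrative:

>>> import torch
>>> from torcheval.metrics.functional import r2_score
>>> preds = torch.tensor([1.1, 1.9, 3.2, 3.8])
>>> labels = torch.tensor([1.0, 2.0, 3.0, 4.0])
>>> r2 = r2_score(preds, labels)  # proportion of target variance explained by the predictions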

Text Metrics

bleu_score

Compute BLEU score given translations and references for each translation.
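A minimal usage sketch, assuming the torcheval.metrics.functional import path; the input layout (one list of reference strings per candidate) and the n_gram keyword are assumptions:

>>> from torcheval.metrics.functional import bleu_score
>>> candidates = ["the squirrel is eating the nut"]
>>> references = [["a squirrel is eating a nut", "the squirrel is eating a tasty nut"]]
>>> score = bleu_score(candidates, references, n_gram=4)  # assumed: up to 4-gram precision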

perplexity

Compute perplexity, which measures how well a model predicts sample data.
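A minimal usage sketch, assuming the torcheval.metrics.functional import path and that the input is a tensor of unnormalized logits shaped (batch, sequence length, vocabulary size); the random tensors are placeholders:

>>> import torch
>>> from torcheval.metrics.functional import perplexity
>>> logits = torch.rand(2, 4, 10)        # (batch, seq_len, vocab_size) unnormalized scores
>>> targets = torch.randint(10, (2, 4))  # token ids with shape (batch, seq_len)
>>> ppl = perplexity(logits, targets)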

word_error_rate

Compute the word error rate of the predicted word sequence(s) with the reference word sequence(s).
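A minimal usage sketch, assuming the torcheval.metrics.functional import path; the example sentences are illustrative:

>>> from torcheval.metrics.functional import word_error_rate
>>> predictions = ["this is the prediction", "there is an other sample"]
>>> references = ["this is the reference", "there is another one"]
>>> wer = word_error_rate(predictions, references)  # fraction of word-level edit errors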

word_information_preserved

Compute the word information preserved score of the predicted word sequence(s) against the reference word sequence(s).

word_information_lost

Compute the word information lost score of the predicted word sequence(s) against the reference word sequence(s), a measure of the performance of an automatic speech recognition system.
