# Metrics

## Aggregation Metrics

| Class | Description |
| --- | --- |
| `AUC` | Computes Area Under the Curve (AUC) using the trapezoidal rule. |
| `Cat` | Concatenate all input tensors along dimension `dim`. |
| `Max` | Calculate the maximum value of all elements in all the input tensors. |
| `Mean` | Calculate the weighted mean value of all elements in all the input tensors. |
| `Min` | Calculate the minimum value of all elements in all the input tensors. |
| `Sum` | Calculate the weighted sum value of all elements in all the input tensors. |
| `Throughput` | Calculate the throughput value, which is the number of elements processed per second. |
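As an illustration of what `AUC` computes, here is a minimal pure-Python sketch of the trapezoidal rule (the function name is ours, not the library API):

```python
def trapezoid_auc(x, y):
    """Area under the curve defined by points (x[i], y[i]), via the trapezoidal rule.

    Assumes x is sorted in ascending order.
    """
    area = 0.0
    for i in range(1, len(x)):
        # Each segment contributes a trapezoid: width * average height.
        area += (x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2.0
    return area

print(trapezoid_auc([0.0, 0.5, 1.0], [0.0, 1.0, 1.0]))  # 0.75
```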

## Classification Metrics

| Class | Description |
| --- | --- |
| `BinaryAccuracy` | Compute binary accuracy score, which is the frequency of input matching target. |
| `BinaryAUPRC` | Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for binary classification. |
| `BinaryAUROC` | Compute AUROC, which is the area under the ROC Curve, for binary classification. |
| `BinaryBinnedAUROC` | Compute AUROC, which is the area under the ROC Curve, for binary classification. |
| `BinaryBinnedPrecisionRecallCurve` | Compute precision recall curve with given thresholds. |
| `BinaryConfusionMatrix` | Compute binary confusion matrix, a 2 by 2 tensor with counts ((true positive, false negative), (false positive, true negative)). |
| `BinaryF1Score` | Compute binary f1 score, which is defined as the harmonic mean of precision and recall. |
| `BinaryNormalizedEntropy` | Compute the normalized binary cross entropy between predicted input and ground-truth binary target. |
| `BinaryPrecision` | Compute the precision score for binary classification tasks, which is calculated as the ratio of the true positives and the sum of true positives and false positives. |
| `BinaryPrecisionRecallCurve` | Returns precision-recall pairs and their corresponding thresholds for binary classification tasks. |
| `BinaryRecall` | Compute the recall score for binary classification tasks, which is calculated as the ratio of the true positives and the sum of true positives and false negatives. |
| `BinaryRecallAtFixedPrecision` | Returns the highest possible recall value given the minimum precision for binary classification tasks. |
| `MulticlassAccuracy` | Compute accuracy score, which is the frequency of input matching target. |
| `MulticlassAUPRC` | Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for multiclass classification. |
| `MulticlassAUROC` | Compute AUROC, which is the area under the ROC Curve, for multiclass classification in a one vs rest fashion. |
| `MulticlassBinnedAUROC` | Compute AUROC, which is the area under the ROC Curve, for multiclass classification. |
| `MulticlassBinnedPrecisionRecallCurve` | Compute precision recall curve with given thresholds. |
| `MulticlassConfusionMatrix` | Compute multi-class confusion matrix, a matrix of dimension num_classes x num_classes where each element at position (i, j) is the number of examples with true class i that were predicted to be class j. |
| `MulticlassF1Score` | Compute f1 score, which is defined as the harmonic mean of precision and recall. |
| `MulticlassPrecision` | Compute the precision score, the ratio of the true positives and the sum of true positives and false positives. |
| `MulticlassPrecisionRecallCurve` | Returns precision-recall pairs and their corresponding thresholds for multi-class classification tasks. |
| `MulticlassRecall` | Compute the recall score, the ratio of the true positives and the sum of true positives and false negatives. |
| `MultilabelAccuracy` | Compute multilabel accuracy score, which is the frequency of input matching target. |
| `MultilabelAUPRC` | Compute AUPRC, also called Average Precision, which is the area under the Precision-Recall Curve, for multilabel classification. |
| `MultilabelPrecisionRecallCurve` | Returns precision-recall pairs and their corresponding thresholds for multi-label classification tasks. |
| `MultilabelRecallAtFixedPrecision` | Returns the highest possible recall value given the minimum precision for each label and their corresponding thresholds for multi-label classification tasks. |
| `TopKMultilabelAccuracy` | Compute multilabel accuracy score, which is the frequency of the top k label predicted matching target. |
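The precision, recall, and F1 definitions above can be sketched in plain Python from confusion-matrix counts. This is an illustrative computation of the stated formulas, not the library implementation, and the names are ours:

```python
def binary_f1(preds, targets, threshold=0.5):
    """F1 = harmonic mean of precision and recall over thresholded predictions."""
    tp = fp = fn = 0
    for p, t in zip(preds, targets):
        pred = 1 if p >= threshold else 0
        if pred == 1 and t == 1:
            tp += 1  # true positive
        elif pred == 1 and t == 0:
            fp += 1  # false positive
        elif pred == 0 and t == 1:
            fn += 1  # false negative
    # precision = tp / (tp + fp), recall = tp / (tp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# tp=1, fp=1, fn=1 -> precision = recall = 0.5 -> f1 = 0.5
print(binary_f1([0.9, 0.2, 0.8, 0.4], [1, 0, 0, 1]))  # 0.5
```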

## Ranking Metrics

| Class | Description |
| --- | --- |
| `ClickThroughRate` | Compute the click through rate given click events. |
| `HitRate` | Compute the hit rate of the correct class among the top predicted classes. |
| `ReciprocalRank` | Compute the reciprocal rank of the correct class among the top predicted classes. |
| `WeightedCalibration` | Compute weighted calibration metric. |
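As a sketch of the reciprocal-rank definition above (illustrative code, not the library API): the correct class's rank is its position when scores are sorted in descending order, and the metric is one over that rank.

```python
def reciprocal_rank(scores, target_index):
    """1 / rank of the correct class, where rank 1 is the highest score."""
    # The rank is 1 plus the number of classes scored strictly higher.
    rank = 1 + sum(1 for s in scores if s > scores[target_index])
    return 1.0 / rank

# Correct class (index 2, score 0.4) is ranked 2nd behind 0.5 -> 1/2
print(reciprocal_rank([0.1, 0.5, 0.4], target_index=2))  # 0.5
```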

## Regression Metrics

| Class | Description |
| --- | --- |
| `MeanSquaredError` | Compute Mean Squared Error, which is the mean of squared error of input and target. |
| `R2Score` | Compute R-squared score, which is the proportion of variance in the dependent variable that can be explained by the independent variable. |
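Both definitions above can be written out directly. Here is a minimal pure-Python sketch (function names are ours; this is the textbook formula, not the library implementation):

```python
def mean_squared_error(preds, targets):
    """Mean of squared differences between predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def r2_score(preds, targets):
    """R^2 = 1 - SS_res / SS_tot; assumes targets are not all identical."""
    mean_t = sum(targets) / len(targets)
    ss_res = sum((t - p) ** 2 for p, t in zip(preds, targets))  # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in targets)            # total sum of squares
    return 1.0 - ss_res / ss_tot

preds, targets = [2.5, 0.0, 2.0, 8.0], [3.0, -0.5, 2.0, 7.0]
print(mean_squared_error(preds, targets))  # 0.375
print(r2_score(preds, targets))            # ~0.9486
```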

## Text Metrics

| Class | Description |
| --- | --- |
| `BLEUScore` | Compute BLEU score (https://en.wikipedia.org/wiki/BLEU) given translations and references. |
| `Perplexity` | Perplexity measures how well a model predicts sample data. |
| `WordErrorRate` | Compute the word error rate of the predicted word sequence(s) with the reference word sequence(s). |
| `WordInformationLost` | Word Information Lost (WIL) is a metric of the performance of an automatic speech recognition system. |
| `WordInformationPreserved` | Compute the word information preserved of the predicted word sequence(s) with the reference word sequence(s). |
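Word error rate is conventionally the word-level edit distance (substitutions + insertions + deletions) divided by the reference length. A self-contained sketch of that definition, not the library implementation:

```python
def word_error_rate(prediction, reference):
    """Word-level Levenshtein distance divided by the reference word count."""
    pred, ref = prediction.split(), reference.split()
    # d[i][j] = edits to turn the first i predicted words into the first j reference words
    d = [[0] * (len(ref) + 1) for _ in range(len(pred) + 1)]
    for i in range(len(pred) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(pred) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if pred[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[-1][-1] / len(ref)

# 1 edit against a 3-word reference -> 1/3
print(word_error_rate("hello world", "hello there world"))
```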

## Windowed Metrics

| Class | Description |
| --- | --- |
| `WindowedBinaryAUROC` | The windowed version of BinaryAUROC. |
| `WindowedBinaryNormalizedEntropy` | The windowed version of BinaryNormalizedEntropy that provides both windowed and lifetime values. |
| `WindowedClickThroughRate` | The windowed version of ClickThroughRate that provides both windowed and lifetime values. |
| `WindowedMeanSquaredError` | The windowed version of Mean Squared Error that provides both windowed and lifetime values. |
| `WindowedWeightedCalibration` | Compute weighted calibration metric. |
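To illustrate the windowed-vs-lifetime distinction these classes describe, here is a minimal sketch of the pattern using a mean metric. The class name and interface are ours, chosen only to show the idea, not the library's implementation:

```python
from collections import deque

class WindowedMean:
    """Track a lifetime mean alongside a mean over the last `window` updates."""

    def __init__(self, window):
        self.window = deque(maxlen=window)  # oldest values fall off automatically
        self.total = 0.0
        self.count = 0

    def update(self, value):
        self.window.append(value)
        self.total += value
        self.count += 1

    def compute(self):
        """Return (windowed mean, lifetime mean)."""
        windowed = sum(self.window) / len(self.window)
        lifetime = self.total / self.count
        return windowed, lifetime

m = WindowedMean(window=2)
for v in (1.0, 2.0, 3.0):
    m.update(v)
print(m.compute())  # (2.5, 2.0): window holds [2.0, 3.0], lifetime covers all three
```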
