# Evaluation Metrics II

This lesson discusses the AUC-ROC curve and provides an example.


## 3. AUC-ROC curve

The AUC (area under the curve) - ROC (receiver operating characteristic) curve is a performance measurement for a classification model across various classification threshold settings. The ROC curve shows how well the model is capable of distinguishing between classes, and the AUC summarizes it in a single number: the higher the AUC, the better the model is at predicting zeros as zeros and ones as ones.

What do we mean by various threshold settings?
Say we set the threshold to 0.9. This means that if, for a given sample, our trained model predicts a value higher than 0.9, that sample is assigned to the positive class; otherwise, it is assigned to the negative class.
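The thresholding step above can be sketched in a few lines. This is a minimal illustration with made-up probabilities, not output from any model in the lesson:

```python
import numpy as np

# Hypothetical predicted probabilities from a trained classifier
probs = np.array([0.95, 0.40, 0.92, 0.65, 0.10])

threshold = 0.9
# Samples above the threshold go to the positive class (1), the rest to 0
predictions = (probs > threshold).astype(int)
print(predictions)  # [1 0 1 0 0]
```

Sweeping this threshold from 0 to 1 produces the different (TPR, FPR) points that trace out the ROC curve.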

The ROC curve plots the true positive rate (TPR, also called recall or sensitivity) on the y-axis against the false positive rate (FPR, equal to 1 - specificity) on the x-axis, where:

• Sensitivity, Recall, Hit Rate, or True Positive Rate

$\mathrm{Sensitivity} = \mathrm{TPR} = \frac{\mathrm{TP}}{\mathrm{P}} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}$
• Fall-out (1 - Specificity), or False Positive Rate

$1 - \mathrm{Specificity} = \mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{N}} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}$
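The two rates can be computed directly from the confusion-matrix counts. The labels and predictions below are illustrative, not taken from the lesson:

```python
# Hypothetical ground-truth labels and thresholded predictions
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]

# Confusion-matrix counts
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

tpr = tp / (tp + fn)  # Sensitivity / Recall
fpr = fp / (fp + tn)  # 1 - Specificity
print(tpr, fpr)  # 0.75 0.25
```

Each classification threshold yields one such (FPR, TPR) pair; plotting the pairs for all thresholds gives the ROC curve.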

A great model has an AUC close to one, indicating it has an excellent measure of separability. On the other hand, a poor model has an AUC near zero, meaning it is predicting zeros as ones and ones as zeros. When AUC is 0.5, it means the model has no class separation capacity whatsoever, and it’s essentially making random predictions.
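One way to see AUC as a "measure of separability" is that it equals the probability that a randomly chosen positive sample is scored higher than a randomly chosen negative one. A minimal sketch of that rank-based computation, using illustrative scores (in practice a library routine such as scikit-learn's `roc_auc_score` would be used):

```python
# Hypothetical labels and model scores
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]

pos = [s for s, t in zip(scores, y_true) if t == 1]
neg = [s for s, t in zip(scores, y_true) if t == 0]

# Count positive/negative pairs where the positive is ranked higher;
# ties count as half a win
wins = sum(1 for p in pos for n in neg if p > n) \
     + sum(0.5 for p in pos for n in neg if p == n)
auc = wins / (len(pos) * len(neg))
print(auc)  # about 0.89: most positives outrank most negatives
```

With perfectly separated scores every pair is a win and the AUC is 1; with randomly shuffled scores about half the pairs are wins, giving an AUC near 0.5.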

Let’s understand this better via an example analysis taken from a medical research journal:
