ROC and AUC Curves

The ROC curve and AUC are performance measures for classification problems, evaluated across a range of threshold values. Precision and sensitivity (recall), the classification metrics covered in our article "Performance Metrics", are essential background for understanding this topic. If you are not familiar with these terms, I recommend reading that article before continuing with this one.

Threshold Value

First, let's briefly discuss what a threshold value is. In classification problems, the threshold is the cutoff that determines how predicted probabilities are converted into class labels. For example, if our threshold is 0.5, predictions of 0.5 and above are assigned to class 1, and predictions below 0.5 are assigned to class 0. If our threshold is 0.3, values of 0.3 and above are classified as class 1 and values below it as class 0. What does the threshold give us? By changing the threshold value, we can trade off metrics such as precision and sensitivity according to our own goals.
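As a minimal sketch (the probability values below are made up purely for illustration), converting predicted probabilities into class labels with a threshold can look like this:

import numpy as np

# Hypothetical predicted probabilities from a classifier
probabilities = np.array([0.10, 0.35, 0.48, 0.52, 0.75, 0.90])

def apply_threshold(probs, threshold):
    # Probabilities at or above the threshold become class 1, the rest class 0
    return (probs >= threshold).astype(int)

print(apply_threshold(probabilities, 0.5))  # [0 0 0 1 1 1]
print(apply_threshold(probabilities, 0.3))  # [0 1 1 1 1 1]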

Relationship Between Precision and Sensitivity

Precision and sensitivity generally have an inverse relationship: as the threshold changes, one increases while the other decreases. As a reminder, the image below shows the formulas for precision and sensitivity.

[Image: Precision and Sensitivity formulas]
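As a reminder of the definitions behind that image, precision = TP / (TP + FP) and sensitivity (recall) = TP / (TP + FN). A minimal sketch with made-up counts:

# Made-up confusion-matrix counts for illustration
tp, fp, fn = 8, 2, 4

precision = tp / (tp + fp)      # how many predicted positives are truly positive
sensitivity = tp / (tp + fn)    # how many actual positives we managed to catch

print(precision)    # 0.8
print(sensitivity)  # ~0.667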

Let's work through an example to better understand this relationship. Our problem has two classes: the patient class is positive (1) and the healthy class is negative (0). Suppose our data is distributed as follows and we train a logistic regression model.

[Image: Data]

First, let's examine the case where our threshold is 0.5. Based on our model's predictions, the confusion matrix and the precision-sensitivity values are as shown in the image below.

[Image: Threshold Value = 0.5]

Now let's raise our threshold to 0.75 and observe what changes. The confusion matrix we obtain is as follows.

[Image: Threshold Value = 0.75]

As you can see, precision increases while sensitivity stays the same. By raising the threshold we become more selective when labeling someone as a patient, but people who are actually sick may slip through.

Now let's lower our threshold to 0.20 and examine the change. As shown below, this time precision decreases and sensitivity increases. In this case we do not miss any sick person, but we make mistakes by diagnosing people who are not actually sick as patients.

[Image: Threshold Value = 0.2]
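To make the effect of the threshold concrete, here is a small sketch (the labels and probabilities are made up, not the article's data) that computes precision and sensitivity for several thresholds using scikit-learn:

from sklearn.metrics import precision_score, recall_score

# Hypothetical true labels (1 = patient, 0 = healthy) and predicted probabilities
y_true  = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]
y_proba = [0.05, 0.15, 0.30, 0.40, 0.45, 0.55, 0.70, 0.72, 0.85, 0.95]

for threshold in (0.20, 0.50, 0.75):
    # Classify with the current threshold, then measure precision and sensitivity
    y_pred = [1 if p >= threshold else 0 for p in y_proba]
    print(threshold,
          "precision:", round(precision_score(y_true, y_pred), 2),
          "sensitivity:", round(recall_score(y_true, y_pred), 2))

On this toy data, raising the threshold from 0.20 to 0.75 pushes precision up and sensitivity down, mirroring the behavior described above.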

So how do we decide on the best threshold value? If we tried every threshold one by one, the number of confusion matrices we would have to compute would quickly become unmanageable. This is where the ROC curve comes in.

ROC

There is one more metric we need to learn before moving on to the ROC curve: the false positive rate. As the formula below shows, it is the proportion of false positives among the actual negative values, i.e. FP / (FP + TN).

[Image: False Positive Rate]
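To make that concrete, a tiny sketch with made-up counts:

# Made-up confusion-matrix counts for illustration
fp, tn = 3, 7

false_positive_rate = fp / (fp + tn)  # fraction of actual negatives wrongly flagged as positive
print(false_positive_rate)  # 0.3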

Now we can fill in our chart below for various threshold values and get our ROC curve.

[Image: ROC Chart]

Let's start by choosing a threshold of 0. If the threshold is 0, every prediction is classified as 1, so every instance in our sample is assigned to the patient class. This gives a sensitivity of 1 and a false positive rate of 1.

[Image: ROC Threshold Value 1]
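Each of these points can be computed directly: for a given threshold, the point on the ROC chart is (false positive rate, sensitivity). A small sketch with made-up labels and probabilities, not the article's data:

def roc_point(y_true, y_proba, threshold):
    # Classify with the given threshold, then compute (false positive rate, sensitivity)
    y_pred = [1 if p >= threshold else 0 for p in y_proba]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn), tp / (tp + fn)

# With threshold 0, everything is predicted positive, landing at the top-right corner
print(roc_point([0, 0, 1, 1, 1], [0.2, 0.6, 0.4, 0.7, 0.9], 0.0))  # (1.0, 1.0)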

Now let's increase our threshold by one step and classify only the instance with the smallest x value as healthy. We get the result below. Since the false positive rate of the new point is lower than that of the previous one, the current threshold gives a better result than the previous one.

[Image: ROC Threshold Value 2]

Let's increase the threshold by one more step and classify the two instances with the smallest x values as healthy. In the result below, our new point lies to the left of the previous one, which indicates that the new threshold is better than the previous one.

[Image: ROC Threshold Value 3]

Let's keep increasing our threshold values gradually. We get the following images, respectively.

[Image: ROC Threshold Value 4]
[Image: ROC Threshold Value 5]
[Image: ROC Threshold Value 6]
[Image: ROC Threshold Value 7]
[Image: ROC Threshold Value 8]

When the threshold reaches its maximum, we classify every prediction as 0. As a result, sensitivity drops to 0% and the false positive rate drops to 0%.

Finally, we obtain our ROC curve by connecting all of these points.

[Image: ROC Curve]

We examine ROC curves in order to select an appropriate threshold value. For example, the best threshold values on this curve are points 3 and 5. The choice between them is up to us: depending on the problem, we can prefer higher sensitivity or a lower false positive rate.
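If you want to pick a threshold programmatically rather than by eye, one common heuristic (not the only one, and not something this article prescribes) is Youden's J statistic: choose the threshold that maximizes sensitivity minus the false positive rate. A sketch, again with made-up data:

import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical true labels and predicted probabilities
y_true  = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]
y_proba = [0.05, 0.15, 0.30, 0.40, 0.45, 0.55, 0.70, 0.72, 0.85, 0.95]

fpr, tpr, thresholds = roc_curve(y_true, y_proba)

# Youden's J = sensitivity - false positive rate; pick the threshold that maximizes it
best_index = np.argmax(tpr - fpr)
print("best threshold:", thresholds[best_index])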

AUC

AUC refers to the area under the ROC curve. It takes values between 0 and 1, and the closer it is to 1, the more successful the model. The AUC of the ROC curve we have created is 0.9.

[Image: AUC]

The AUC value makes it easy to compare the success of two models trained on the same data set. In the example below we compare different models on the same data: you can see that the red model performs much better than the blue one.

[Image: AUC Values Comparison]

Of course, we do not always compute these ROC curves and AUC values by hand. You can see how to plot the ROC curve and obtain AUC values by examining the code below.

from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

# True labels and the predicted probabilities of two different models
gercek_degerler = [1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
tahminler = [0.98, 0.43, 0.30, 0.55, 0.35, 0.20, 0.10, 0.60, 0.03, 0.85, 0.58]
tahminler_2 = [0.63, 0.54, 0.40, 0.70, 0.40, 0.00, 0.51, 0.40, 0.30, 0.80, 0.40]

# roc_curve returns the false positive rates, sensitivities and threshold values
ypo, duyarlilik, esik_degerleri = roc_curve(gercek_degerler, tahminler)
auc_1 = auc(ypo, duyarlilik)

ypo_2, duyarlilik_2, esik_degerleri_2 = roc_curve(gercek_degerler, tahminler_2)
auc_2 = auc(ypo_2, duyarlilik_2)

plt.plot(ypo, duyarlilik, label="Model 1 AUC = %0.3f" % auc_1)
plt.plot(ypo_2, duyarlilik_2, label="Model 2 AUC = %0.3f" % auc_2)

plt.xlabel("False Positive Rate")
plt.ylabel("Sensitivity")

plt.legend(loc='lower right')

plt.show()

Although we used the false positive rate when constructing the ROC curves in this article, precision can be used in place of the false positive rate for imbalanced data sets, giving a precision-recall curve.
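As a sketch of that alternative, scikit-learn's precision_recall_curve can be used in much the same way (the labels and probabilities here simply reuse the example values above):

from sklearn.metrics import precision_recall_curve, auc
import matplotlib.pyplot as plt

# Same true labels and Model 1 probabilities as in the example above
gercek_degerler = [1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
tahminler = [0.98, 0.43, 0.30, 0.55, 0.35, 0.20, 0.10, 0.60, 0.03, 0.85, 0.58]

# precision_recall_curve returns precisions, sensitivities (recalls) and thresholds
kesinlik, duyarlilik, esik_degerleri = precision_recall_curve(gercek_degerler, tahminler)

plt.plot(duyarlilik, kesinlik, label="PR AUC = %0.3f" % auc(duyarlilik, kesinlik))
plt.xlabel("Sensitivity (Recall)")
plt.ylabel("Precision")
plt.legend(loc='lower left')
plt.show()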