Performance measurement is essential for machine learning. The ROC curve and the AUC help us address the problems we face during classification. When checking or visualizing how well a model classifies, we use these metrics and curves to evaluate the outcome. ROC is short for Receiver Operating Characteristic, and AUC is the Area Under the Curve. The combination is also written as AUROC, or Area Under the Receiver Operating Characteristic.

Area Under Curve (AUC)

AUC helps in comparing different classifiers: it summarizes how each classifier performs in a single measure. The standard way to find the AUC is to calculate the area under the ROC curve, the AUROC. This area equals the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative instance. If a classifier has a lower AUC than another classifier, the one with the higher AUC is normally the better performer. Because of this, AUC works well as a general measure of predictive accuracy.
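A minimal sketch of this in Python, assuming scikit-learn is available; the labels and scores below are made-up illustration data:

from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]                 # actual classes
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]   # classifier scores

print(roc_auc_score(y_true, y_score))       # ~0.889

# Equivalent pairwise view: the fraction of (positive, negative) pairs
# in which the positive instance gets the higher score.
pairs = [(p, n) for p, yp in zip(y_score, y_true) if yp == 1
                for n, yn in zip(y_score, y_true) if yn == 0]
print(sum(p > n for p, n in pairs) / len(pairs))   # ~0.889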

Some Important Terminology

The Confusion Matrix

You can use a confusion matrix to evaluate machine learning classification problems with two or more output classes. For two classes, the confusion matrix is a table of four different combinations of actual and predicted values. From it you can measure specificity, precision, recall, accuracy, and the topic we discuss today, the AUC and ROC curve. Let’s understand the terms that the confusion matrix contains using the example of pregnancy:

True Positive

The interpretation of true positive is that you predict the positive, and it is a true statement. For instance, a woman is pregnant, and you predict the same.

True Negative

The interpretation of true negative is that you predict the negative, and it is a true statement. For instance, a man is not pregnant, and you predict the same.

False Positive

The interpretation of false positive is that you predict the positive, and it is not a true statement. For instance, a man is not pregnant, but you predict that he is pregnant. This prediction is a Type 1 error.

False Negative

The interpretation of false negative is that you predict the negative, and it is not a true statement. For instance, a woman is pregnant, but you predict that she is not. This prediction is a Type 2 error.

You should remember that “positive” and “negative” describe the prediction you make, while “true” and “false” describe whether that prediction matches the actual value.
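A minimal sketch of counting these four values in Python, assuming scikit-learn is available; the label arrays are made-up illustration data:

from sklearn.metrics import confusion_matrix

y_actual = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = pregnant, 0 = not pregnant
y_pred   = [1, 0, 0, 1, 1, 0, 1, 0]   # the model's predictions

# scikit-learn lays out the binary matrix as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_actual, y_pred).ravel()
print(f"TP={tp}  TN={tn}  FP={fp}  FN={fn}")   # TP=3  TN=3  FP=1  FN=1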

Sensitivity and Specificity

Various domains report sensitivity and specificity together. However, these are separate measures. You can use sensitivity and specificity to evaluate the performance of a model’s classification. Furthermore, you can also use these measures to evaluate a diagnostic test.

For instance, if we want to measure how effective a diagnostic test is in a medical condition:

Sensitivity will measure the proportion of people who have the disease that the test correctly identifies as positive.

Specificity will measure the proportion of people who do not have the disease that the test correctly identifies as negative.
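A minimal sketch of both measures, assuming the counts come from a confusion matrix like the one above:

tp, tn, fp, fn = 3, 3, 1, 1   # counts taken from the earlier sketch

sensitivity = tp / (tp + fn)      # diseased people the test catches
specificity = tn / (tn + fp)      # healthy people the test clears
print(sensitivity, specificity)   # 0.75 0.75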

Logistic Regression

Logistic regression is an algorithm that you can use in machine learning for classification problems. It is a predictive-analysis technique built on the concept of probability. The algorithm resembles a linear regression model, but instead of outputting the linear combination of the inputs directly, it passes that value through the sigmoid function, also called the logistic function, which serves as the hypothesis of logistic regression.

The hypothesis in this algorithm limits the output to values between zero and one. A linear function, by contrast, can produce values greater than one or less than zero, which is impossible under logistic regression’s hypothesis, because probabilities must stay in that range.
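A minimal sketch of the sigmoid function, assuming NumPy is available:

import numpy as np

def sigmoid(z):
    # squashes any real-valued linear score z = w.x + b into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-4.0, 0.0, 4.0])))   # ~[0.018, 0.5, 0.982]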

Understanding the Concept

Creating a ROC Curve

You can construct a ROC curve by plotting the TPR, or true positive rate, against the FPR, or false positive rate. The true positive rate is the proportion of all positive observations that you correctly predict as positive. The mathematical representation is:

TPR = TP/(TP + FN)

Similarly, the false positive rate is the proportion of all negative observations that you incorrectly predict as positive. The mathematical representation is:

FPR = FP/(TN + FP)

For instance, while performing a medical test for a disease, the rate at which you correctly identify people who have the disease as positive is the true positive rate.

You only get a single point in the ROC space if the classifier returns nothing but a predicted class. However, when the classifier is probabilistic and assigns each observation a score or probability of belonging to one class rather than the other, you can trace a full curve by varying the score threshold. You can also convert various discrete classifiers into scoring classifiers through their internal statistics. For instance, in a decision tree you can score an observation by the fraction of positive training instances in the leaf node it reaches.
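A minimal sketch of tracing the curve from scores, assuming scikit-learn is available and reusing the illustrative data from the AUC sketch above:

from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]

# Each threshold yields one (FPR, TPR) point on the curve.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")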

Interpreting the ROC curve

The ROC curve shows the trade-off between the FPR (which equals one minus the specificity) and the TPR (the sensitivity). A classifier whose curve pushes toward the top-left corner performs better. As a baseline, a random classifier produces points along the diagonal:

FPR = TPR

You can say that the test is less accurate the closer its curve lies to this 45-degree diagonal of the ROC space.
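A minimal sketch of plotting the curve against that diagonal, assuming matplotlib is available and reusing fpr and tpr from the sketch above:

import matplotlib.pyplot as plt

plt.plot(fpr, tpr, label="classifier")
plt.plot([0, 1], [0, 1], linestyle="--", label="random (FPR = TPR)")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()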

The ROC curve does not depend on the class distribution. That is why you can use it to evaluate predictive classifiers for rare events such as a disaster or a disease. By contrast, accuracy, given by the following equation, rewards classifiers that mostly predict negative results for a rare condition or event:

Accuracy = (TP + TN)/(TP + TN + FN + FP)
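A minimal sketch of why that happens, with made-up illustration data: a classifier that always predicts negative scores 99% accuracy on data that is 99% negative, yet its AUC is no better than chance.

from sklearn.metrics import roc_auc_score

y_true = [1] * 1 + [0] * 99   # one rare positive among 100 cases
y_pred = [0] * 100            # always predict negative

accuracy = sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                             # 0.99, despite missing the positive
print(roc_auc_score(y_true, [0.0] * 100))   # 0.5: no better than chance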

Conclusion

You can use the ROC curve in machine learning and in other sectors and industries to evaluate classifiers for rare conditions that simple accuracy measures poorly. You plot a classifier’s curve and check how far it rises above the 45-degree diagonal of the ROC space. Medical sectors, in particular, have found it effective for detecting rare diseases.