Would you believe someone who claimed to create a model entirely in their head to identify terrorists trying to board flights with greater than 99% accuracy? Well, here is the model: simply label every single person flying from a US airport as not a terrorist. Given the 800 million average passengers on US flights per year and the 19 (confirmed) terrorists who boarded US flights from 2000–2017, this model achieves an astounding accuracy of 99.9999999%! That might sound impressive, but I have a suspicion the US Department of Homeland Security won't be calling anytime soon to buy this model. While this solution has nearly perfect accuracy, this problem is one in which accuracy is clearly not an adequate metric!
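If you want to double-check that figure, a quick back-of-the-envelope calculation in Python gets us there (this assumes roughly 18 years of data at 800 million passengers per year; the exact totals do not change the point):

# Rough sanity check of the accuracy claim.
# Assumes ~18 years (2000-2017) at ~800 million passengers per year.
passengers = 800_000_000 * 18
terrorists = 19  # the only points the "label everyone as not a terrorist" model gets wrong

accuracy = (passengers - terrorists) / passengers
print(f"{accuracy:.9%}")  # about 99.9999999%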

The terrorist detection task is an imbalanced classification problem: we have two classes we need to identify (terrorists and not terrorists), with one category representing the overwhelming majority of the data points. Another imbalanced classification problem occurs in disease detection when the rate of the disease in the public is very low. In both these cases the positive class (disease or terrorist) is greatly outnumbered by the negative class. These types of problems are examples of the fairly common case in data science where accuracy is not a good measure for assessing model performance.

Intuitively, we know that proclaiming all data points as negative in the terrorist detection problem is not helpful and, instead, we should focus on identifying the positive cases. The metric our intuition tells us we should maximize is known in statistics as recall, or the ability of a model to find all the relevant cases within a dataset. The precise definition of recall is the number of true positives divided by the number of true positives plus the number of false negatives. True positives are data points classified as positive by the model that actually are positive (meaning they are correct), and false negatives are data points the model identifies as negative that actually are positive (incorrect). In the terrorism case, true positives are correctly identified terrorists, and false negatives would be individuals the model labels as not terrorists that actually were terrorists. Recall can be thought of as a model's ability to find all the data points of interest in a dataset.

https://miro.medium.com/max/1533/1*gscG4JdjnyU5QkqNDqBg_w.png

You might notice something about this equation: if we label all individuals as terrorists, then our recall goes to 1.0! We have a perfect classifier, right? Well, not exactly. As with most concepts in data science, there is a trade-off in the metrics we choose to maximize. In the case of recall, when we increase the recall, we decrease the precision. Again, we intuitively know that a model that labels 100% of passengers as terrorists is probably not useful because we would then have to ban every single person from flying. Statistics provides us with the vocabulary to express our intuition: this new model would suffer from low precision, or the ability of a classification model to identify only the relevant data points.

Precision is defined as the number of true positives divided by the number of true positives plus the number of false positives. False positives are cases the model incorrectly labels as positive that are actually negative, or in our example, individuals the model classifies as terrorists that are not. While recall expresses the ability to find all relevant instances in a dataset, precision expresses the proportion of the data points our model says are relevant that actually are relevant.

https://miro.medium.com/max/1577/1*FKXzF6DYSP2mV4HUBftRgg.png

Now we can see that our first model, which labeled all individuals as not terrorists, was not very useful. Although it had near-perfect accuracy, it had 0 precision and 0 recall because there were no true positives! Say we modify the model slightly and identify one individual correctly as a terrorist. Now, our precision will be 1.0 (no false positives) but our recall will be very low because we will still have many false negatives. If we go to the other extreme and classify all passengers as terrorists, we will have a recall of 1.0 (we will catch every terrorist) but our precision will be very low and we will detain many innocent individuals. In other words, as we increase precision we decrease recall and vice versa.
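Here is a minimal sketch of those two extremes using scikit-learn's precision_score and recall_score; the labels are invented purely for illustration (2 terrorists hidden among 10 passengers):

from sklearn.metrics import precision_score, recall_score

# Toy ground truth: 1 = terrorist, 0 = not a terrorist (invented data).
y_true = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]

# Model A: flag a single passenger and happen to get one terrorist right.
y_model_a = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(precision_score(y_true, y_model_a))  # 1.0 -> no false positives
print(recall_score(y_true, y_model_a))     # 0.5 -> one terrorist missed

# Model B: flag every passenger as a terrorist.
y_model_b = [1] * 10
print(precision_score(y_true, y_model_b))  # 0.2 -> many false positives
print(recall_score(y_true, y_model_b))     # 1.0 -> every terrorist caught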

https://miro.medium.com/max/810/0*XEO3pwAee7tBT_D1.png

Combining Precision and Recall

In some situations, we might know that we want to maximize either recall or precision at the expense of the other metric. For example, in preliminary disease screening of patients for follow-up examinations, we would probably want a recall near 1.0 (we want to find all patients who actually have the disease), and we can accept a low precision if the cost of the follow-up examination is not significant. However, in cases where we want to find an optimal blend of precision and recall, we can combine the two metrics using what is called the F1 score.

The F1 score is the harmonic mean of precision and recall, taking both metrics into account in the following equation:

https://miro.medium.com/max/338/1*UJxVqLnbSj42eRhasKeLOA.png

We use the harmonic mean instead of a simple average because it punishes extreme values. A classifier with a precision of 1.0 and a recall of 0.0 has a simple average of 0.5 but an F1 score of 0. The F1 score gives equal weight to both measures and is a specific example of the general Fβ metric, where β can be adjusted to give more weight to either recall or precision. (There are other metrics for combining precision and recall, such as the geometric mean of precision and recall, but the F1 score is the most commonly used.) If we want to create a balanced classification model with the optimal balance of recall and precision, then we try to maximize the F1 score.
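To see why the harmonic mean matters, here is a small sketch comparing it with the simple average for a few made-up precision/recall pairs:

def f1(precision, recall):
    """Harmonic mean of precision and recall (defined as 0 when both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

for p, r in [(1.0, 0.0), (0.8, 0.4), (0.6, 0.6)]:
    simple = (p + r) / 2
    print(f"precision={p}, recall={r}: simple average={simple:.2f}, F1={f1(p, r):.2f}")
# The harmonic mean drags the score toward the weaker of the two metrics.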

Visualizing Precision and Recall

I've thrown a few new terms at you, and we'll walk through an example to show how they are used in practice. Before we get there, though, we need to briefly mention two concepts used for showing precision and recall.

First up is the confusion matrix, which is useful for quickly calculating precision and recall given the predicted labels from a model. A confusion matrix for binary classification shows the four different outcomes: true positive, false positive, true negative, and false negative. The actual values form the columns, and the predicted values (labels) form the rows. The intersection of the rows and columns shows one of the four outcomes. For example, if we predict a data point is positive, but it actually is negative, this is a false positive.

https://miro.medium.com/max/717/1*CPnO_bcdbE8FXTejQiV2dg.png

Going from the confusion matrix to the recall and precision requires finding the respective values in the matrix and applying the equations:

https://miro.medium.com/max/1219/1*6NkN_LINs2erxgVJ9rkpUA.png
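Here is a sketch of the same calculation in code, using scikit-learn's confusion_matrix on some made-up labels. Note that scikit-learn's convention puts the actual values in the rows and the predictions in the columns, the transpose of the matrix drawn above:

from sklearn.metrics import confusion_matrix

# Invented labels, purely for illustration.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary problems, ravel() unpacks the matrix into the four outcomes.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

recall = tp / (tp + fn)     # 3 / (3 + 1) = 0.75
precision = tp / (tp + fp)  # 3 / (3 + 1) = 0.75
print(f"recall={recall:.2f}, precision={precision:.2f}")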

The other main visualization technique for showing the performance of a classification model is the Receiver Operating Characteristic (ROC) curve. Don't let the complicated name scare you off! The idea is relatively simple: the ROC curve shows how the recall vs precision relationship changes as we vary the threshold for identifying a positive in our model. The threshold represents the value above which a data point is considered in the positive class. If we have a model for identifying a disease, our model might output a score for each patient between 0 and 1, and we can set a threshold in this range for labeling a patient as having the disease (a positive label). By altering the threshold, we can try to achieve the right precision vs recall balance.
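As a tiny illustration of thresholding, suppose our model produced the (invented) scores below; any model that outputs probabilities, such as the output of scikit-learn's predict_proba, works the same way:

import numpy as np

# Hypothetical scores for six patients (invented for illustration).
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2])

for threshold in (0.3, 0.5, 0.7):
    labels = (scores >= threshold).astype(int)  # 1 = "has the disease"
    print(threshold, labels)
# Lowering the threshold labels more patients as positive,
# trading precision for recall.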

An ROC curve plots the true positive rate on the y-axis versus the false positive rate on the x-axis. The true positive rate (TPR) is the recall, and the false positive rate (FPR) is the probability of a false alarm. Both of these can be calculated from the confusion matrix:

https://miro.medium.com/max/1082/1*Uh9YUp632ktSd75bZDeB0Q.png
https://miro.medium.com/max/443/0*2iHR8dFXev5GWo_f.png

The black diagonal line indicates a random classifier, and the red and blue curves show two different classification models. For a given model, we can only stay on one curve, but we can move along the curve by adjusting our threshold for classifying a positive case. Generally, as we decrease the threshold, we move to the right and upwards along the curve. With a threshold of 1.0, we would be in the lower left of the graph because we identify no data points as positives, leading to no true positives and no false positives (TPR = FPR = 0). As we decrease the threshold, we identify more data points as positive, leading to more true positives, but also more false positives (the TPR and FPR increase). Eventually, at a threshold of 0.0 we identify all data points as positive and find ourselves in the upper right corner of the ROC curve (TPR = FPR = 1.0).

Finally, we can quantify a model's ROC curve by calculating the total Area Under the Curve (AUC), a metric which falls between 0 and 1 with a higher number indicating better classification performance. In the graph above, the AUC for the blue curve will be greater than that for the red curve, meaning the blue model is better at achieving a blend of precision and recall. A random classifier (the black line) achieves an AUC of 0.5.
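In practice we rarely trace the curve by hand; here is a short sketch using scikit-learn's roc_curve and roc_auc_score on invented labels and scores (swap in your own model's predicted probabilities):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Invented ground truth and model scores, purely for illustration.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])

# roc_curve sweeps the threshold for us and returns matched FPR/TPR arrays.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(np.c_[thresholds, fpr, tpr])

# AUC summarizes the whole curve as a single number between 0 and 1.
print("AUC:", roc_auc_score(y_true, y_score))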

Recap

We've covered a few terms, none of which are difficult on their own, but which combined can be a little overwhelming! Let's do a quick recap and then walk through an example to solidify the new ideas we learned.

Four Outcomes of Binary Classification

True positives: data points labeled as positive that are actually positive

False positives: data points labeled as positive that are actually negative

True negatives: data points labeled as negative that are actually negative

False negatives: data points labeled as negative that are actually positive

Recall and Precision Metrics

Recall: ability of a classification model to identify all relevant instances

Precision: ability of a classification model to return only relevant instances

F1 score: single metric that combines recall and precision using the harmonic mean

Visualizing Recall and Precision

Confusion matrix: shows the actual and predicted labels from a classification problem

Receiver operating characteristic (ROC) curve: plots the true positive rate (TPR) versus the false positive rate (FPR) as a function of the model's threshold for classifying a positive

Area under the curve (AUC): metric to calculate the overall performance of a classification model based on the area under the ROC curve

Example Application

https://miro.medium.com/max/1016/1*3SjX3LaLUfJ3Yf7xU1QhmA.png

Our task will be to diagnose 100 patients with a disease present in 50% of the general population. We will assume a black box model, where we put in information about patients and receive a score between 0 and 1. We can alter the threshold for labeling a patient as positive (has the disease) to maximize the classifier performance. We will evaluate thresholds from 0.0 to 1.0 in increments of 0.1, at each step calculating the precision, recall, F1, and location on the ROC curve. Following are the classification outcomes at each threshold:

We'll do one sample calculation of the recall, precision, true positive rate, and false positive rate at a threshold of 0.5. First we make the confusion matrix:

https://miro.medium.com/max/809/1*gdLd_wAhNfJKH49NSfq72Q.png

We can use the numbers in the matrix to calculate the recall, precision, and F1 score:

https://miro.medium.com/max/1441/1*XQOYd2mheHyVVR9H-ENtIQ.png

Then we calculate the true positive and false positive rate to find the y and x coordinates for the ROC curve.

https://miro.medium.com/max/1360/1*HzWxvbikCtiB-QtFb48WoQ.png

To make the whole ROC curve, we carry out this process at each threshold. As you might think, this is pretty tedious, so instead of doing it by hand, we use a language like Python to do it for us! The Jupyter Notebook with the calculations is on GitHub for anyone to see the implementation. The final ROC curve is shown below with the thresholds above the points.
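Just to give a rough idea of the shape of that per-threshold loop, here is a minimal sketch; the patient labels and scores are randomly generated stand-ins, not the data used for the figures in this article:

import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score

rng = np.random.default_rng(42)

# Stand-in data: 100 patients, disease in roughly half of them, and a
# hypothetical score per patient (NOT the article's actual data).
y_true = rng.integers(0, 2, size=100)
scores = np.clip(rng.normal(loc=0.35 + 0.3 * y_true, scale=0.2), 0, 1)

results = []
for threshold in np.arange(0.0, 1.01, 0.1):
    y_pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    results.append({
        "threshold": round(float(threshold), 1),
        "recall": recall_score(y_true, y_pred, zero_division=0),
        "precision": precision_score(y_true, y_pred, zero_division=0),
        "f1": f1_score(y_true, y_pred, zero_division=0),
        "tpr": tp / (tp + fn),
        "fpr": fp / (fp + tn),
    })

best = max(results, key=lambda row: row["f1"])
print("Best threshold by F1:", best["threshold"])

Plugging in the real patient scores instead of these stand-ins reproduces the per-threshold statistics listed further below.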

https://miro.medium.com/max/555/1*ZAH33g5FD9xYZRadgmqWVw.png

Here we can see all the concepts come together! At a threshold of 1.0, we classify no patients as having the disease and hence have a recall and precision of 0.0. As the threshold decreases, the recall increases because we identify more patients that have the disease. However, as our recall increases, our precision decreases because in addition to increasing the true positives, we increase the false positives. At a threshold of 0.0, our recall is perfect (we find all patients with the disease) but our precision is low because we have many false positives. We can move along the curve for a given model by changing the threshold, and we can select the threshold that maximizes the F1 score. To shift the whole curve, we would need to build a different model.

Final model statistics at each threshold are below:

https://miro.medium.com/max/529/1*TESjAFBurN7RVXyb5KDOxg.png

Based on the F1 score, the overall best model occurs at a threshold of 0.5. If we wanted to emphasize precision or recall to a greater extent, we could choose the corresponding model that performs best on those measures.