While Precision focuses on the accuracy of the positive predictions made by the model, Recall, also known as Sensitivity or the True Positive Rate (TPR), tackles a different question: Out of all the actual positive instances that exist, how many did the model correctly identify?
Think about it like this: if there's something important you need to find (like identifying patients with a specific condition, or detecting faulty items on an assembly line), Recall measures how successful your model is at finding all of them. It measures the model's "completeness" in identifying positive cases.
Recall is calculated using the True Positives (TP) and False Negatives (FN) from the confusion matrix. Remember:

- True Positives (TP): actual positive instances that the model correctly predicted as positive.
- False Negatives (FN): actual positive instances that the model incorrectly predicted as negative (the "misses").
The formula for Recall is:
Recall = TP / (TP + FN)

The denominator, TP + FN, represents the total number of actual positive instances in the dataset (those correctly identified plus those missed). Recall, therefore, gives you the proportion of these actual positives that your model successfully "recalled" or identified.
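The formula translates directly into code. Here is a minimal sketch that counts TP and FN from binary labels and applies the formula; the example labels are illustrative, not from a real model:

```python
def recall(y_true, y_pred):
    """Recall = TP / (TP + FN): the fraction of actual positives found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # 4 actual positive instances
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]   # the model found 3 of them, missed 1

print(recall(y_true, y_pred))  # 3 / (3 + 1) = 0.75
```

Note that the False Positive at index 4 has no effect on Recall; only the actual positives (and whether they were found) matter.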
Recall values range from 0 to 1 (or 0% to 100%). A Recall of 1 means the model identified every actual positive instance; a Recall of 0 means it missed them all.
Prioritizing Recall is significant in situations where failing to identify a positive case (a False Negative) has serious consequences. Consider these examples:

- Medical screening: a False Negative means a patient with the condition goes undetected and untreated.
- Manufacturing quality control: a faulty item that slips past the detector (a False Negative) reaches the customer.
It's important to understand that Recall focuses on finding all the actual positives, whereas Precision focuses on ensuring that the instances the model predicts as positive are indeed positive. Often, increasing Recall can lead to a decrease in Precision, and vice versa. If you try to catch every possible positive instance (high Recall), you might end up incorrectly labeling more negative instances as positive (lowering Precision). This relationship is known as the Precision-Recall trade-off, which we'll discuss next.
In summary, Recall measures how effectively a model identifies all relevant instances within a dataset. It answers the question: "Of all the things we should have found, how many did we actually find?" It's a critical metric when the cost of missing a positive instance is high.
© 2025 ApX Machine Learning