📚 Understanding Classification Outcomes
What do these values mean?
- True Positives (TP): Correctly predicted positive cases
- True Negatives (TN): Correctly predicted negative cases
- False Positives (FP): Negative cases incorrectly predicted as positive (Type I error)
- False Negatives (FN): Positive cases incorrectly predicted as negative (Type II error)
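The four outcomes above can be counted directly by comparing predictions to ground-truth labels. A minimal sketch (the function name `confusion_counts` and the 0/1 label encoding are assumptions for illustration):

```python
def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels encoded as 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correct positive
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct negative
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I error
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II error
    return tp, tn, fp, fn

# Example: 5 samples, 3 correct predictions
tp, tn, fp, fn = confusion_counts([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# → tp=2, tn=1, fp=1, fn=1
```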
Key Metrics and Formulas
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Overall correctness of the model
Precision = TP / (TP + FP)
How many predicted positives were actually positive
Recall = TP / (TP + FN)
How many actual positives were correctly identified
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
Harmonic mean of precision and recall
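The four formulas above translate directly into code. A minimal sketch computing each metric from the raw counts (the function names are assumptions for illustration; the guards against division by zero are a common convention, not part of the formulas themselves):

```python
def accuracy(tp, tn, fp, fn):
    # Overall correctness: correct predictions over all predictions
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # Fraction of predicted positives that were actually positive
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

def recall(tp, fn):
    # Fraction of actual positives that were correctly identified
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

def f1_score(p, r):
    # Harmonic mean of precision and recall
    return 2 * p * r / (p + r) if (p + r) > 0 else 0.0

# Example: TP=2, TN=1, FP=1, FN=1
p = precision(2, 1)          # 2/3
r = recall(2, 1)             # 2/3
acc = accuracy(2, 1, 1, 1)   # 3/5 = 0.6
f1 = f1_score(p, r)          # 2/3
```

Note that the F1 score uses the harmonic mean rather than the arithmetic mean, so it is pulled toward the lower of precision and recall; a model cannot earn a high F1 by excelling at only one of the two.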