🎯 ML Classification Outcomes Analyzer

Interactive Confusion Matrix Pie Chart

About this tool: Visualize the four fundamental outcomes of binary classification. Adjust the values to see how different confusion matrix configurations affect your model's performance metrics.

Example metrics: Accuracy 87.5% · Precision 89.5% · Recall 85.0% · F1 Score 87.2%

📚 Understanding Classification Outcomes

What do these values mean?

  • True Positives (TP): Correctly predicted positive cases
  • True Negatives (TN): Correctly predicted negative cases
  • False Positives (FP): Negative cases incorrectly predicted as positive (Type I error)
  • False Negatives (FN): Positive cases incorrectly predicted as negative (Type II error)
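The four outcomes above can be tallied directly from paired labels and predictions. A minimal sketch (the `y_true`/`y_pred` arrays are hypothetical examples, not data from the tool):

```python
# Hypothetical labels (1 = positive, 0 = negative) and model predictions.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Tally each of the four confusion-matrix outcomes.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correct positive
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct negative
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I error
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II error

print(tp, tn, fp, fn)  # → 3 3 1 1
```

Note that the four counts always sum to the number of samples, since every prediction falls into exactly one cell of the confusion matrix.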

Key Metrics Formulas

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Overall correctness of the model
Precision = TP / (TP + FP)
How many predicted positives were actually positive
Recall = TP / (TP + FN)
How many actual positives were correctly identified
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)
Harmonic mean of precision and recall
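The formulas above translate directly into code. A sketch using one assumed count configuration (TP=17, TN=18, FP=2, FN=3 — chosen here because it reproduces the example percentages the tool displays):

```python
# Assumed counts; one configuration consistent with the example metrics.
tp, tn, fp, fn = 17, 18, 2, 3

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall correctness
precision = tp / (tp + fp)                   # predicted positives that were correct
recall = tp / (tp + fn)                      # actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"Accuracy {accuracy:.1%}, Precision {precision:.1%}, "
      f"Recall {recall:.1%}, F1 {f1:.1%}")
# → Accuracy 87.5%, Precision 89.5%, Recall 85.0%, F1 87.2%
```

Because F1 is a harmonic mean, it is pulled toward the smaller of precision and recall, which is why it penalizes lopsided models more than a simple average would.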