Evaluation metrics in binary models

Binary classification models use a confusion matrix to calculate precision and recall rates, which help determine the model's effectiveness and accuracy.

Written by Ori Sagi

In a binary classification model, the objective is to classify entities into two groups. For example, who will “churn” (1) vs. who will “not churn” (0).

When comparing the model’s predictions to the actual outcomes, we can divide the entities into four groups:

  1. True positive (TP) - predicted as “1”, and is really “1”.

  2. True negative (TN) - predicted as “0”, and is really “0”.

  3. False positive (FP) - predicted as “1”, but is actually “0”.

  4. False negative (FN) - predicted as “0”, but is actually “1”.

You can see these groups under “Advanced Performance details” in your model’s dashboard.

These four groups are known as the “confusion matrix” of the model.

We use the confusion matrix to calculate metrics such as Precision and Recall, which can help in determining the effectiveness of the model.
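As an illustration only (not the product’s own code), here is a minimal Python sketch that counts the four groups from lists of actual and predicted labels; the function name and the 0/1 encoding are assumptions:

    def confusion_matrix(actual, predicted):
        # Count each of the four groups by comparing predictions to outcomes.
        tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
        tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives
        fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
        fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives
        return tp, tn, fp, fn

    actual    = [1, 0, 1, 1, 0, 0]
    predicted = [1, 0, 0, 1, 1, 0]
    print(confusion_matrix(actual, predicted))  # (2, 2, 1, 1)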

Precision Rate

The precision rate indicates how accurate a model was in predicting the positive class. In other words, it calculates the percentage of positive predictions that turned out to be correct:

How Precision is calculated:

Precision = TP / (TP + FP)

For example, if a model predicts that 1,000 people will become Churn Customers, but only 800 of them actually do so, then the model’s precision is 800 / 1,000 = 80%.
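To make the arithmetic concrete, here is a hypothetical Python snippet reproducing the example above (the variable values follow the example; the names are illustrative):

    tp = 800                    # predicted "1" and actually churned
    fp = 200                    # predicted "1" but did not churn (1,000 - 800)
    precision = tp / (tp + fp)  # 800 / 1,000
    print(f"{precision:.0%}")   # 80%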

Recall Rate (detection)

The recall rate, also known as the detection rate, indicates a model’s ability to detect the positive class. In other words, it calculates the percentage of all positive entities that were correctly predicted.

How Recall is calculated:

Recall = TP / (TP + FN)

For example, if 1,600 people became Churn Customers, but the model predicted only 800 of them to do so, then the model’s recall rate is 800 / 1,600 = 50%.
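The same example, as a hypothetical Python snippet (variable names are illustrative):

    tp = 800                  # churners the model correctly predicted
    fn = 800                  # churners the model missed (1,600 - 800)
    recall = tp / (tp + fn)   # 800 / 1,600
    print(f"{recall:.0%}")    # 50%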
