Optimization metrics for binary models

Written by Ori Sagi

For an overview of model optimization metrics in Pecan, including general information about why and when to use them, see Introduction to selecting an optimization metric.

This article goes into greater detail about how to select and configure them, which metrics are available, and when you would want to use each one.

How to select an optimization metric for binary models

  1. Log into Pecan and open your predictive flow's Nutbook.

  2. Click the arrow on the Train Model button; this will open the “Model Settings” window.

  3. Under “Advanced Settings”, click Select optimization metric.

  4. Under “Model type”, select Binary model.

    • If you've already defined the label in your core table (via two distinct values in the “Label” column), Pecan will recognize that you're training a binary model and automatically select “Binary”.

    • If you haven’t defined the label yet, leave this set to “Automatic” (the default); Pecan will automatically determine the model type once the target behavior is defined in your core table query.

  5. Select your desired optimization metric. For help making this selection, refer to this feature overview and, more specifically, the metric descriptions below.

  6. If you select “Precision Rate” or “Detection Rate”, you’ll also need to specify a group of entities to optimize the model for, on the basis of the chosen metric (see “How to choose which ‘entities to optimize for’” below).

  7. Click "Train model" to start training the model with the current settings.

Which optimization metrics to use – and when

When you select an optimization metric, you’re telling Pecan to select the best-performing model (from among all trained models) on the basis of that metric.

Here are the optimization metrics you can choose from for a binary model, and an explanation of why and when you may want to choose each one:

AUC (Area Under the Curve)

AUC is the default optimization metric in Pecan and will be used if you don’t select a different one.

This metric scores how well a model is able to classify a population into two groups. This makes it particularly useful for imbalanced datasets, where there is a low base rate of target behavior (a.k.a. low “label rate”).

Selecting AUC will favor a model that’s best able to distinguish positive instances from negative ones, regardless of the model’s threshold, resulting in a model that performs well for the entire population.
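For intuition only, here’s a minimal sketch (using scikit-learn with made-up labels and scores, not anything Pecan runs internally) showing that AUC depends on how the model ranks entities rather than on any particular cutoff:

```python
# Illustrative only: AUC measures how well positives are ranked above negatives,
# independent of any chosen threshold.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 0, 0, 1, 0, 1, 1]                     # made-up actual labels
y_score = [0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.8, 0.9]    # made-up predicted scores

print(roc_auc_score(y_true, y_score))  # ~0.93 here: one positive is out-ranked by one negative
```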

Precision Rate

Selecting this metric will favor a model that maximizes True Positives and minimizes False Positives. In other words, it favors making correct positive predictions, even if it means failing to detect all instances of target behavior.

In general, Precision Rate should be chosen when the cost of False Positive predictions is high (when taking action has a significant cost or risk), and you want to make sure you’re acting only on the right population.

When choosing Precision Rate, you'll indicate what percentage, quantity, or segment of the population to optimize the model for. Note: this metric may not be appropriate if you have a low label rate (see the note on precision and low label rates below).
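For intuition only, here’s a minimal sketch (scikit-learn, with made-up labels and thresholded predictions; not Pecan’s internal calculation) of how precision rewards correct positive predictions:

```python
# Illustrative only: precision rewards acting only on entities that truly
# show the target behavior.
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # made-up actual labels
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]   # made-up thresholded predictions

# precision = True Positives / (True Positives + False Positives) = 3 / 5 = 0.6
print(precision_score(y_true, y_pred))
```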

Detection Rate

Selecting this metric will favor a model that maximizes True Positives and minimizes False Negatives. In other words, it favors detecting all possible instances of target behavior, even if it means incurring more instances of False Positives.

In general, the Detection Rate should be chosen when the cost of False Negative predictions is high (when not taking action has a significant cost or risk), and you want to miss as few positive instances as possible.

When choosing Detection Rate, you'll indicate what percentage, quantity, or segment of the population to optimize the model for.
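Detection Rate as described here corresponds to what is commonly called recall. A minimal sketch with the same made-up data as in the precision example above (again, not Pecan’s internal calculation):

```python
# Illustrative only: Detection Rate corresponds to recall, i.e. the share of
# actual positives that the model manages to flag.
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # made-up actual labels
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]   # made-up thresholded predictions

# recall = True Positives / (True Positives + False Negatives) = 3 / 4 = 0.75
print(recall_score(y_true, y_pred))
```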

Accuracy Rate

Selecting this metric will favor a model that maximizes True Positives and True Negatives. In other words, it favors making more correct predictions overall (whether they are positive or negative), even if it means incurring more False Positives and/or Negatives.

In general, Accuracy Rate is appropriate when the cost of False Positive predictions is low (when the cost of taking action is low) and you simply want to make as many correct predictions as possible overall. However, it should only be used for balanced datasets.

Here’s an example of how it would be problematic for an imbalanced dataset. Suppose you’re building a model that predicts high-value players when they only comprise 2% of the population (a very low base rate a.k.a. label rate). A model that predicts all players are “Not High-Value” will have an Accuracy Rate of 98% (because it is correct 98% of the time), even though it fails to detect any high-value players and is clearly not a good model. Detection Rate would be more appropriate.
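The high-value-players example above translates directly into a few lines of code. This is an illustrative sketch with made-up data, not anything Pecan runs internally:

```python
# Illustrative only: why accuracy misleads on an imbalanced dataset.
# Predicting "not high-value" for everyone still scores 98% accuracy.
from sklearn.metrics import accuracy_score, recall_score

y_true = [1] * 2 + [0] * 98      # 2% label rate: 2 high-value players out of 100
y_pred = [0] * 100               # a useless model that never predicts "high-value"

print(accuracy_score(y_true, y_pred))  # 0.98 -- looks great
print(recall_score(y_true, y_pred))    # 0.0  -- detects no high-value players at all
```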

LogLoss (Logarithmic Loss)

Selecting this metric will favor a model that best predicts the probability of a particular outcome, rather than simply predicting the most likely class. This is because LogLoss penalizes incorrect predictions more heavily when the predicted probability is high. (Note: the lower the score, the better the model and the greater the confidence of its predictions.)

In general, LogLoss is useful when the cost of False Positive and False Negative predictions is not equal, since it accounts for the predicted probability of the correct class.

For example, a business that wants to call customers who have a probability of >50% of upselling may opt for this optimization metric. However, a business that simply wants to call the 500 customers most likely to upsell, regardless of their exact probabilities, might opt for the AUC optimization metric.
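For intuition only, here’s a minimal sketch (scikit-learn, made-up probabilities; not Pecan code) showing how LogLoss penalizes a confident wrong prediction much more than a hesitant one:

```python
# Illustrative only: LogLoss punishes confident mistakes far more than hesitant ones.
from sklearn.metrics import log_loss

y_true = [1, 1, 0, 0]   # made-up actual labels

confident_wrong = [0.05, 0.9, 0.1, 0.95]   # two very confident wrong predictions
hesitant_wrong  = [0.45, 0.9, 0.1, 0.55]   # the same two mistakes, made with low confidence

print(log_loss(y_true, confident_wrong))   # higher (worse) score, ~1.55
print(log_loss(y_true, hesitant_wrong))    # lower (better) score, ~0.45
```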

How to choose which “entities to optimize for”

When choosing Precision Rate or Detection Rate as your optimization metric, you’ll need to define the population for which the metric will be calculated. This allows you to optimize the model for the group whose predictions you care most about, as determined by your business objective.

You may define this group by specifying a percentage, quantity, or segment of entities; a rough sketch of what each option means follows the list below.

  • Choose “Top percentage” if you want to optimize the model for a defined percentage of entities that are most likely to express the target behavior (e.g. top 10%).

  • Choose “Top quantity” if you want to optimize the model for a defined number of entities that are most likely to express the target behavior (e.g. top 1,000).

  • Choose “Specific segment” if you want to optimize your model for entities with a particular column value (e.g. “current_membership” = “VIP”).
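As a rough sketch of what these three options correspond to, here is how such groups could be sliced out of a hypothetical table of scored entities. This is illustrative pandas code with made-up column names such as score and current_membership, not Pecan’s internal logic:

```python
# Illustrative only: three ways to define the group of entities to optimize for,
# given a hypothetical table of entities and their predicted scores.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
preds = pd.DataFrame({
    "entity_id": np.arange(10_000),
    "score": rng.random(10_000),                                 # hypothetical predicted scores
    "current_membership": rng.choice(["Basic", "VIP"], 10_000),  # hypothetical segment column
})

top_percentage = preds.nlargest(int(len(preds) * 0.10), "score")  # "Top percentage": top 10%
top_quantity   = preds.nlargest(1_000, "score")                   # "Top quantity": top 1,000
segment        = preds[preds["current_membership"] == "VIP"]      # "Specific segment": VIP only
```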

Example: “Precision @ 1,000”

Let’s say you’re planning to implement a costly ad campaign for the 1,000 players who are likely to generate the most revenue. Since the cost of False Positives (taking action when it’s not necessary) is high, you choose Precision Rate as your optimization metric.

Considering that you’ll only be able to target 1,000 players, you’ll want to ensure a high level of correctness for your high-revenue predictions (a.k.a. maximize True Positives).

As a result, you choose to optimize the model’s Precision Rate for the 1,000 entities that have the highest predicted scores (closest to 1). This is done by selecting “Precision Rate” as your optimization metric, with a “Top quantity” of 1,000.
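For illustration only, “Precision @ 1,000” could be computed along these lines, using a hypothetical precision_at_k helper over made-up data (this is not Pecan’s implementation):

```python
# Illustrative only: of the 1,000 highest-scoring players, what share truly
# turned out to be high-revenue?
import numpy as np

def precision_at_k(y_true, y_score, k):
    """Share of actual positives among the k entities with the highest scores."""
    top_k = np.argsort(y_score)[::-1][:k]
    return np.asarray(y_true)[top_k].mean()

# made-up data: 50,000 players, ~4% of whom are truly high-revenue
rng = np.random.default_rng(1)
y_true = rng.random(50_000) < 0.04
y_score = rng.random(50_000) + 0.3 * y_true   # noisy but informative model scores

print(precision_at_k(y_true, y_score, k=1_000))
```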

Example: “Detection @ 20%”

Now let’s say you’re planning to offer a small discount to the 20% of users who are most likely to churn. Since the cost of False Negatives (failing to take action when it is necessary) is high, you choose Detection Rate as your optimization metric.

Considering, for example, that the cost of a customer churning is 200x greater than the cost of the discount, you’ll want to ensure a high level of coverage for churn predictions (a.k.a. minimize False Negatives).

As a result, you choose to optimize the model’s Detection Rate for the 20% of entities that have the highest predicted scores (closest to 1). This is done by selecting “Detection Rate” as your optimization metric, with a “Top percentage” of 20%.
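Similarly, “Detection @ 20%” could be sketched as follows, using a hypothetical detection_at_fraction helper over made-up data (again, not Pecan’s implementation):

```python
# Illustrative only: of all users who actually churned, what share falls inside
# the top-scoring 20%?
import numpy as np

def detection_at_fraction(y_true, y_score, fraction):
    """Share of all actual positives captured in the top `fraction` of scores."""
    y_true = np.asarray(y_true)
    k = int(len(y_true) * fraction)
    top_k = np.argsort(y_score)[::-1][:k]
    return y_true[top_k].sum() / y_true.sum()

# made-up data: 10,000 users, ~8% of whom actually churn
rng = np.random.default_rng(2)
y_true = rng.random(10_000) < 0.08
y_score = rng.random(10_000) + 0.4 * y_true   # noisy but informative model scores

print(detection_at_fraction(y_true, y_score, fraction=0.20))
```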

Note: precision and low label rates

If your dataset has a low rate of target behavior (a.k.a. label rate), you may want to avoid using Precision Rate as your optimization metric. Here’s why…

Say you optimize your model for “Precision at 10%”, but the label rate is lower than 10%. Since the desired percentage of entities is higher than the label rate, the model won’t be able to “fill” all of its top predictions with True Positives (because there simply aren’t enough of them). As such, only a limited share of those predictions can be correct, and you won’t be able to achieve a high precision rate.

Therefore, in cases where you plan to take action for a percentage of the population that’s greater than the label rate, it’s better to optimize and evaluate the model on the basis of the detection rate. This metric will be more representative of the quality of the model, and you'll be able to achieve a detection rate close to 100%.
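As a rough illustration of the arithmetic (with made-up numbers, not figures from Pecan): if only 4% of entities show the target behavior but you optimize “Precision at 10%”, even a perfect model is capped well below 100% precision, while its detection rate can still reach 100%:

```python
# Illustrative arithmetic only: with a low label rate, precision over a wide top
# slice has a hard ceiling, while detection can still reach 100%.
label_rate   = 0.04   # 4% of entities actually show the target behavior
top_fraction = 0.10   # you plan to act on the top 10%
population   = 100_000

positives = population * label_rate     # 4,000 actual positives
top_slice = population * top_fraction   # 10,000 entities you will act on

# Even a perfect model can fill at most 4,000 of those 10,000 slots correctly:
max_precision = min(positives, top_slice) / top_slice   # 0.40
max_detection = min(positives, top_slice) / positives   # 1.00

print(max_precision, max_detection)
```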

If you have any questions about selecting an optimization metric for your model, please reach out to a Pecan expert or your customer success manager.
