How do you know if your model is good? (legacy dashboard)
Written by Ori Sagi

When training or using a predictive model, you may find yourself asking: “How do I know if my model is good – or at least, good enough?”

Every trained Pecan model has a metrics dashboard that communicates the model’s performance. However, this does not necessarily tell you whether your model is good – that is, whether it can be used to make decisions that drive actual business value.

Therefore, let’s phrase the question a bit more precisely: “How do you know when having a model is better than having none at all?”

This is a challenging question if you have no business logic or models to compare your results to. However, it can be answered by comparing multiple approaches on the basis of model performance and business outcomes.

The answer

Pecan’s goal is to help you achieve real business value by using predictive ML models. This value is realized when a Pecan model delivers better results than what you would achieve through traditional BI tools and methods (and possibly no model at all).

Data analysts generally know how to split populations into groups, select independent variables, and conduct experiments. This means they’re capable of building simple models (a.k.a. naive models) that can predict customer behavior. For example, an analyst could identify two or three key variables in a dataset (e.g. gender, zip code, last purchase date) and use them to predict which customers are likely to carry out a target behavior. Such a model could be a linear regression model based on a single feature, or a simple one-dimensional tree (in contrast to Pecan models, which are complex tree-based models).
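To make this concrete, here is a minimal sketch of such a naive model in Python with scikit-learn. The dataset, column names, and target are hypothetical placeholders for your own data; and since the target behavior here is binary, the sketch uses a logistic regression – the classification analogue of the simple linear model described above:

```python
# A minimal sketch of a "naive" benchmark model: a simple regression
# fit on a handful of hand-picked features. All names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")  # hypothetical dataset

# An analyst picks two or three variables believed to drive the behavior.
features = ["gender", "zip_code", "days_since_last_purchase"]
X = pd.get_dummies(df[features], drop_first=True)  # encode categoricals
y = df["upgraded_to_vip"]  # 1 if the customer performed the target behavior

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

naive_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = naive_model.predict_proba(X_test)[:, 1]  # predicted likelihood per customer
```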

The results of this model can then serve as a benchmark for comparison against your Pecan model. How, exactly?

  1. The predictions generated by each model can be evaluated and compared on a statistical basis (i.e. using accuracy-based metrics).

  2. Each model’s predictions can be used to conduct a business treatment. Once each treatment has been carried out, you can compare the final business impact of each model (e.g. how successful it was in maximizing revenue). Let's dive into an example of how this might look…

A case study

One way to assess the value of your Pecan model is by conducting an A/B test. Here, you would make predictions for one population based on a Pecan model, and make predictions for an equivalent population based on a naive model (as explained above).

You would have two primary ways to compare these models – on the basis of statistical performance and on the basis of business outcomes.

1. Comparing based on statistical performance

Say you want to predict the likelihood of customers to upgrade to VIP status. You can make predictions for one population using a Pecan model, and make predictions for an equivalent population using a naive model.

Once the prediction window – the period for which you made predictions – has passed, you can compare each model’s predictions to the observed results for the corresponding population. This enables you to compare the statistical performance of each model.

Of course, it is important to choose an appropriate evaluation metric – you can do this based on whether it’s a binary or regression problem, and based on your particular business need or goal. For example: if you want to improve the efficiency of your call center, which is able to handle 1,000 calls per month, you could evaluate the model by “Precision @ 1,000”. In Pecan, precision refers to a model’s ability to correctly predict instances of the target behavior – in this case, among the top 1,000 entities who are predicted to perform the behavior.
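As an illustration, here is a minimal sketch of how “precision @ k” could be computed by hand, assuming you have an array of predicted scores and an array of observed outcomes (both hypothetical):

```python
# A minimal sketch of "precision @ k": of the k entities with the highest
# predicted scores, what fraction actually performed the target behavior?
import numpy as np

def precision_at_k(scores: np.ndarray, actuals: np.ndarray, k: int) -> float:
    """Fraction of true positives among the top-k scored entities."""
    top_k = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    return actuals[top_k].mean()

# e.g. for a call center that can handle 1,000 calls per month:
# precision_at_k(scores, actuals, k=1000)
```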

Once you have done a statistical comparison, you’re able to answer the question: which model was able to predict customer behavior or outcomes more accurately?

2. Comparing based on business outcomes

Now that you know how accurate each model is, you can assess its usefulness in the real world. Say you were to carry out a business treatment by offering customers a trial VIP subscription. Which population would respond better to this? Would you see greater conversion among those predicted to behave a certain way based on the Pecan model, or based on the naive model? Do the results align with the assumptions made by those models?

By carrying out and comparing business treatments for each model/population, you may discover that you're able to generate a higher conversion rate (or upsells, retention, revenue, etc.) by using a Pecan model than by relying on other methods. This quantitative difference is where Pecan’s true value resides. And when dealing with large numbers of customers and/or dollars, the impact can be massive.
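To judge whether the difference between the two populations’ conversion rates reflects more than chance, a standard significance test can help. Below is a minimal sketch using a two-proportion z-test from statsmodels; the counts are made-up numbers, not Pecan output:

```python
# A minimal sketch of comparing conversion rates from the two treatment
# groups with a two-proportion z-test. All counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [312, 248]    # conversions in the Pecan group, naive group
group_sizes = [5000, 5000]  # customers treated in each group

z_stat, p_value = proportions_ztest(conversions, group_sizes)
print(f"Pecan group: {conversions[0] / group_sizes[0]:.1%} conversion")
print(f"Naive group: {conversions[1] / group_sizes[1]:.1%} conversion")
print(f"p-value: {p_value:.4f}")  # a small p-value suggests the difference is unlikely to be chance
```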

Pecan helps you establish benchmarks

Your Pecan model is only as good as its ability to drive real-world business value. This is why we see benchmarking as an important step in demonstrating the value of your Pecan model.

Pecan can provide you with a benchmark model that you can compare your model against. To create this benchmark, we take the single strongest feature in your model (as determined by machine learning), and then create a simple linear regression model based on it.

For example: we can create a benchmark model that uses the single feature of “spend per month” in order to predict the likelihood of customers to upgrade to VIP status. Using the methods described in the above case study, you can then compare the results of this model to those of a Pecan model, which leverages machine learning and uses up to hundreds of features.
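As a rough sketch of how this comparison could be scored, the helper below computes each model’s precision among the top 1%, 2%, etc. of entities, matching the chart that follows. The score arrays are hypothetical, and the logic reuses the precision-at-k idea from the earlier sketch:

```python
# A minimal sketch of a precision curve: precision among the top 1%, 2%,
# etc. of scored entities, for comparing two models side by side.
import numpy as np

def precision_curve(scores: np.ndarray, actuals: np.ndarray,
                    percents=(1, 2, 5, 10)) -> dict:
    results = {}
    for pct in percents:
        k = max(1, int(len(scores) * pct / 100))
        top_k = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
        results[f"top {pct}%"] = actuals[top_k].mean()
    return results

# e.g. compare the two models on the same population:
# precision_curve(pecan_scores, actuals)
# precision_curve(benchmark_scores, actuals)
```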

Below is an example of an actual comparison between a Pecan customer’s model and a benchmark model. It shows how each model performs, on the basis of precision, for the top 1% (2%, etc.) of entities who are predicted to perform the target behavior:

[Chart: precision @ top N% of scored entities – Pecan model vs. benchmark model]

If a naive model appears to perform as well as, or even better than, your Pecan model, and there were no underlying issues with your datasets, this may indicate that you don’t need a tree-based machine-learning model for your business needs.

If you would like to receive a benchmark model to measure your Pecan model against, or would like guidance on how to compare your Pecan model against other data-based approaches, get in touch with your customer success manager and we’ll be happy to help.
