Dashboard overview for binary models - legacy dashboard

Pecan’s legacy dashboard is still available if you wish to use it.

Written by Ori Sagi
Updated over a week ago

This legacy dashboard is still available for the time being.

To toggle between this dashboard and the new one, click the ... button next to the "Use model" button at the top right corner:

Once your Pecan model is fully trained, you can view its performance in an interactive dashboard.

Pecan’s dashboard provides statistical information and tools so you can understand the accuracy of your model, tailor it to your business needs, monitor predictions over time, and understand the importance of different features in your data.

This article provides an overview of how to navigate and interpret the metrics in a dashboard for a binary model. For a more extensive explanation of each metric, see Model performance metrics for binary models.

To view the dashboard for any model you have created: log into Pecan, click the “Models” tab at the top of the screen, and then click “Models” in the left-side navigation.

Note that the metrics displayed in your dashboard are for your Test Set, which is the final 10% of training data that Pecan automatically sets aside during the training process. This set serves as fresh data against which your model’s predictions are evaluated, so you can gauge its predictive performance.
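The idea of a chronological hold-out split can be sketched as follows. This is a hypothetical illustration of the concept, not Pecan’s internal implementation; the function name and record fields are assumptions:

```python
def split_train_test(rows, test_fraction=0.10):
    """Chronologically split rows so the final fraction becomes the Test Set.

    The Test Set is the most recent slice of the data, which keeps it
    "fresh" relative to the data the model was trained on.
    """
    rows = sorted(rows, key=lambda r: r["date"])  # oldest first
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

# Hypothetical data: 100 rows with an integer "date"
data = [{"date": d} for d in range(100)]
train, test = split_train_test(data)
print(len(train), len(test))  # -> 90 10
```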

Below is a breakdown of each dashboard component for a binary model, listed in order of appearance:

“Technical details” button

Click the “Technical details” button to view basic information about your model and when it will be run next. When you do so for a binary model, the below box will pop up:

Two bar graphs will display the distribution of results in both your Train Set and Test Set (the set that’s used to test the model once it’s been trained and validated). If you notice a discrepancy, this may indicate that there is an issue with your model. For example, differing performance between them may indicate that the Train Set is not representative of your entire dataset.

“Code” button

Click here to see the code behind your model. The box that pops up will show:

  • the queries that were created for it (“input_query” panel)

  • how these queries were parsed by Pecan to make them machine-readable (“Level_one_query” panel)

  • statistics from the columns that exist in your dataset (“Analyzer_json” panel)

“Model health check” button

Clicking here will help determine whether there are any obvious red flags that may be affecting the performance of your model. Pecan will check that…

  1. the model’s Area Under the Curve (AUC) is greater than 0.6 and less than 0.9.

  2. none of the features has more than a 30% contribution towards predictions (see “Feature Importance” widget).

  3. the top 3 features, combined, do not have more than a 70% contribution towards predictions.

If any of these three criteria are not met, you’ll be provided with a short explanation and links to relevant help articles.

If everything checks out, you’ll see the following message:

Note: even if the above criteria are met, you’ll want to check that the model is producing results that are aligned with known business metrics. For example: do the model’s results approximate what you know to be true about your monthly churn levels?
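The three health checks listed above can be expressed as a short function. This is a sketch using the thresholds stated in this article; the function name and the feature-importance input format are assumptions, not Pecan’s actual implementation:

```python
def model_health_check(auc, feature_importances):
    """Return a list of red flags, applying the three criteria above.

    feature_importances: dict mapping feature name -> fractional
    contribution to predictions (values sum to ~1.0). Hypothetical format.
    """
    issues = []
    # 1. AUC should be greater than 0.6 and less than 0.9
    if not (0.6 < auc < 0.9):
        issues.append("AUC is outside the 0.6-0.9 range")
    top = sorted(feature_importances.values(), reverse=True)
    # 2. No single feature should contribute more than 30%
    if top and top[0] > 0.30:
        issues.append("a single feature contributes more than 30%")
    # 3. The top 3 features combined should not exceed 70%
    if sum(top[:3]) > 0.70:
        issues.append("top 3 features combined contribute more than 70%")
    return issues

# A model dominated by one feature, with a suspiciously high AUC:
print(model_health_check(0.95, {"a": 0.5, "b": 0.3, "c": 0.2}))
```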

“Use model” button

Click here once you are satisfied with the model and are ready to start generating predictions. You will then select how often you want the model to run.

Date filter

Next to the threshold buttons is a date filter. Clicking it will open a pop-up box, where you can define which part of the dataset will be represented in the dashboard.

Naturally, the prediction results and performance metrics will also be affected by the date range you choose.

  • All to date: include all data up until the End Date (defined as either a “fixed” date or a “moving” date [e.g. 2 weeks ago])

  • All from now: include all data beyond the Start Date (defined as either a “fixed” date or a “moving” date [e.g. 2 days from now])

  • Test Set period: include data from the Test Set (the final 10% of training data, which is set aside and then used to test the model against fresh data)

  • Custom: include data from a custom date range

If you want the selected range to always be displayed by default, select “Save as default” and click Set range.

Threshold settings

This is where you can configure your model’s sensitivity threshold (see Understanding threshold logic).

You can choose from three preset options:

  1. Liberal: this sets a lower threshold, meaning that more instances of the target behavior will be detected, but that precision will be lower (i.e. more instances of “incorrect detection”).

  2. Neutral: this sets a balanced trade-off between precision and detection.

  3. Conservative: this sets a higher threshold, meaning that fewer instances of the target behavior will be detected (more will be “incorrectly ignored”), but that precision will be higher (i.e. fewer instances of “incorrect detection”).

You can then customize these presets even further. To do so, click the pencil icon, and then drag the sliders to adjust the threshold. You can see how this looks below:

As you adjust the threshold, changes will instantly be reflected in the prediction results, Venn diagram, and precision and detection metrics in your dashboard.

Once you achieve a balance between precision and detection that best suits your business needs, click Set.
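The effect of the threshold can be illustrated with a few lines of code. This is a conceptual sketch, not Pecan’s implementation; the scores and function name are hypothetical:

```python
def apply_threshold(probabilities, threshold):
    """Convert predicted probabilities into binary detections.

    A lower ("liberal") threshold detects more instances of the target
    behavior; a higher ("conservative") one detects fewer, but those it
    does detect tend to be more precise.
    """
    return [1 if p >= threshold else 0 for p in probabilities]

scores = [0.15, 0.40, 0.55, 0.80, 0.92]  # hypothetical model outputs
print(apply_threshold(scores, 0.3))  # liberal      -> [0, 1, 1, 1, 1]
print(apply_threshold(scores, 0.7))  # conservative -> [0, 0, 0, 1, 1]
```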

Venn diagram and performance metrics

Next to the title of your model (e.g. “Upsell demo”) you’ll see a Venn diagram, which represents the results of your predictions.

To interpret it and understand what each color means, see Venn diagram (predicted vs. actual performance).

Note that the Venn diagram reflects a tradeoff between two key metrics – precision and detection – which are displayed alongside it:

  • Precision Rate: indicates how precise the model was when predicting instances of the target behavior.

    • Calculated by: Correctly detected / (Correctly detected + Incorrectly detected)

  • Detection Rate: indicates the percentage of target behavior that was correctly detected by the model.

    • Calculated by: Correctly detected / (Correctly detected + Incorrectly ignored)

For a deeper dive into these metrics, see Precision Rate and Detection Rate.
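The two formulas above translate directly into code. This sketch uses the article’s own terminology for the confusion-matrix counts; the function names are assumptions:

```python
def precision_rate(correctly_detected, incorrectly_detected):
    """Of everything the model flagged, how much was right?"""
    return correctly_detected / (correctly_detected + incorrectly_detected)

def detection_rate(correctly_detected, incorrectly_ignored):
    """Of all actual target behavior, how much did the model catch?"""
    return correctly_detected / (correctly_detected + incorrectly_ignored)

# Hypothetical counts: 80 correct detections, 20 false alarms, 40 misses
print(precision_rate(80, 20))  # -> 0.8
print(detection_rate(80, 40))  # -> 0.666...
```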

“Predictions over time” graph

Whereas the Venn diagram illustrates your model’s performance based on its Test Set, this graph uses the same metrics to illustrate your model’s performance over time. (To see the meaning of each color, see this article.)

The x-axis is determined by the prediction frequency of your model (e.g. daily, weekly, monthly). In other words, the graph is updated once each prediction window has passed (once actual results have been fed back into Pecan and can be compared to the predictions).

Some notes to help you use and understand this graph:

  • You can zoom in by dragging your cursor over a portion of it.

  • Each curve is color-coded to match the prediction results of the Venn diagram (above).

  • When you hover over a point on the graph, the corresponding performance metrics will appear on the left-hand side.

  • Click the icon at the top-right corner to toggle between percentage of the sample population vs. number of instances. This will be reflected in the Y-axis and graph itself.

Also, whenever your sensitivity threshold is adjusted, this graph will be updated accordingly (along with the Venn diagram and precision/detection metrics in the dashboard).
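The per-window computation behind such a graph can be sketched as follows. This is a hypothetical illustration (field names and record format are assumptions), not Pecan’s implementation:

```python
from collections import defaultdict

def metrics_over_time(records):
    """Group (window, predicted, actual) records by prediction window and
    compute the precision and detection rates plotted for each window."""
    by_window = defaultdict(lambda: {"cd": 0, "id": 0, "ii": 0})
    for window, predicted, actual in records:
        counts = by_window[window]
        if predicted and actual:
            counts["cd"] += 1   # correctly detected (true positive)
        elif predicted and not actual:
            counts["id"] += 1   # incorrectly detected (false positive)
        elif actual and not predicted:
            counts["ii"] += 1   # incorrectly ignored (false negative)
    out = {}
    for window, c in sorted(by_window.items()):
        flagged = c["cd"] + c["id"]
        actual_pos = c["cd"] + c["ii"]
        out[window] = {
            "precision": c["cd"] / flagged if flagged else None,
            "detection": c["cd"] / actual_pos if actual_pos else None,
        }
    return out

# Hypothetical monthly records: (window, predicted, actual)
recs = [("2024-01", 1, 1), ("2024-01", 1, 0), ("2024-02", 1, 1), ("2024-02", 0, 1)]
print(metrics_over_time(recs))
```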

Detecting business impact

If you’ll be carrying out a business treatment based on a model’s predictions, the “Predictions over time” graph provides an opportunity to observe the impact of that activity.

Let’s say you’ve built a model to predict which customers are most likely to churn and, based on those predictions, you provide those customers with a special one-time offer.

If the campaign is effective in reducing churn, you’ll be able to witness this in the “Predictions over time” graph. In this particular case, you’d see a higher rate of “incorrectly detected” churn (false positives) after the business treatment occurs. In other words, your model would increasingly predict churn in cases where none, in fact, occurred.

This means you’d want to see your actions negatively impact the performance of the model – but in a way that favors the desired target behavior (e.g. seeing less churn than was predicted).

Note, however, that when the performance of your model worsens, you will want to retrain it so as to improve the accuracy of future predictions.

Identifying data issues

In some cases, the “Predictions over time” graph can reveal a problem in your data. This may be the case when there is significant inconsistency in the proportion of correct and incorrect predictions, and it can’t be explained by a business treatment that was carried out (see above).

Let’s look at an example from an actual Pecan customer. In the below graph, you can see a sudden drop-off in the gray-colored population. This population represents the total number of entities who were correctly predicted not to perform the target behavior (a.k.a. “Correctly Ignored”/true negatives).

Their sudden disappearance (from approximately 26,000 to 0) reveals that a portion of the real-world data was missing when fed back into Pecan. We know that only a portion was missing because the graph still shows entities that were “incorrectly detected” to churn (a.k.a. false positives, in turquoise), as well as populations in purple and red.

To resolve this issue, data for the relevant dates can be re-imported into the model. Another option is to exclude entities whose prediction window overlaps with the problematic time frame.

Feature Importance Widget

This widget communicates the importance of the features that exist in your data, or, in other words, how strongly they contribute to your model’s predictions.

Listed on the left side of the widget are the top 20 features that contribute the most to your predictions, as determined during initial model training.

Clicking on each feature will load a Feature Importance Graph (a.k.a. Partial Dependency Plot) on the right side of the widget. This graph shows the effect of each feature on your model’s predictions.

For a comprehensive explanation of the widget and how to interpret everything in it, see The Feature Importance Widget.

Preview Table

Located at the bottom of the dashboard, this table displays the first 100 predictions in your dataset, including:

  • the customer ID

  • the prediction itself (where 1 = the target behavior is predicted to occur, and 0 = it is not)

  • the top 10 contributing features, according to their SHAP values.

The round icons are designed to help you identify, quickly and easily, which features contribute most strongly to each prediction, and in what direction:

  • An “up” arrow indicates that a feature contributes to a prediction of the target behavior occurring, while a “down” arrow predicts the opposite.

  • The darker the purple of the icon, the stronger the effect of that feature.
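The ranking behind those icons can be sketched from raw SHAP values. This is a conceptual illustration (the feature names and function are hypothetical, not Pecan’s implementation): features are ordered by the magnitude of their SHAP value, and the sign determines the arrow direction:

```python
def top_contributing_features(shap_values, n=10):
    """Rank features by |SHAP value|, strongest contribution first.

    Positive values push toward the target behavior ("up" arrow);
    negative values push away from it ("down" arrow).
    """
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [(name, "up" if v > 0 else "down", abs(v)) for name, v in ranked[:n]]

# Hypothetical SHAP values for one prediction
shap = {"days_since_login": -0.42, "total_purchases": 0.31, "support_tickets": 0.05}
print(top_contributing_features(shap, n=2))
```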

You can download the full output table to a spreadsheet by clicking Save as CSV.
