Dashboard overview for binary models

Pecan's dashboard offers accuracy insights & customization, empowering you to optimize model performance & surpass benchmarks.

Written by Ori Sagi

Once your Pecan model is fully trained, you can view its performance in an interactive dashboard.

Pecan's dashboard provides statistical information and tools to help you understand how accurate your model is, tailor it to your business needs, monitor predictions over time, and discover the importance of different columns in your data.

It is important to remember that the metrics displayed in your dashboard are calculated on your Test Set, which is the latest 10% of the training data that Pecan automatically sets aside during the training process. Pecan treats this data as "fresh data": it asks your model to create predictions for it and then compares those predictions against the actual outcomes to evaluate your model's predictive performance.
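For intuition, here is a minimal sketch (in Python, with a hypothetical file name and column names) of how a "latest 10%" temporal holdout works:

```python
import pandas as pd

# Hypothetical training data with a date marker per sample
df = pd.read_csv("training_data.csv", parse_dates=["marker"])

# Sort chronologically and hold out the most recent 10% as the test set,
# so the model is evaluated on data that is "fresh" relative to training
df = df.sort_values("marker")
cutoff = int(len(df) * 0.9)
train, test = df.iloc[:cutoff], df.iloc[cutoff:]
```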

Below is a breakdown of each dashboard component for a binary model, listed in order of appearance.

Model Evaluation Tab

Head metric & Quality comparison test

The first widget displays the Precision of the model and compares it to a benchmark model and a random guess:

  • The benchmark is a simple rule-based model created using a single column from the data that strongly correlates with the Target column. It serves as a reference point to ensure that the model has sufficient predictive power and can outperform the benchmark.

  • The random guess, on the other hand, assumes no logic and predicts the positive class at its base rate in the data. For example, if the Churn rate is 10%, randomly selecting 100 entities would yield about 10 churned entities, resulting in 10% Precision (see the sketch below). The random guess is also important because it shows how much better the model is than a situation where no logic at all is used to detect the positive class.
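To make these baselines concrete, here is a minimal sketch (not Pecan's actual implementation; the column and the rule are hypothetical) comparing a single-column benchmark rule against a random guess at the base rate:

```python
import numpy as np
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
y_true = rng.random(10_000) < 0.10           # ~10% churn rate
usage = rng.random(10_000) - 0.5 * y_true    # hypothetical column correlated with churn

benchmark_pred = usage < 0.2                 # simple rule on a single correlated column
random_pred = rng.random(10_000) < 0.10      # guess positives at the base rate

print(precision_score(y_true, benchmark_pred))  # noticeably above the base rate
print(precision_score(y_true, random_pred))     # ~0.10, i.e., the churn rate itself
```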

Advanced Performance Details

Precision & Recall

This widget presents both the Precision and Recall metrics of the model, along with the number of test-set entities used to evaluate it. You can also change the head metric to either Precision or Recall within this section.
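As a quick refresher, the two metrics can be computed like this (a toy sketch using scikit-learn, not Pecan's code):

```python
from sklearn.metrics import precision_score, recall_score

y_test = [1, 0, 1, 1, 0, 0, 1, 0]   # actual test-set labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions at the chosen threshold

print(precision_score(y_test, y_pred))  # of all predicted '1's, how many were actual '1's
print(recall_score(y_test, y_pred))     # of all actual '1's, how many were predicted '1'
```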

Threshold Selection

The threshold is a parameter in machine learning that determines the cutoff point for classifying predicted outcomes as positive or negative. It can be adjusted after the model is trained to meet specific business needs.
By changing the threshold, you alter the proportions of predicted positive and negative outcomes, which impacts overall model performance.

It's important to note that adjusting the threshold doesn't change the model itself or the probability scores it produces; it only changes the cutoff used to classify those scores into two classes. Pecan sets a default threshold based on the optimal balance of precision and recall to provide the best results.

The graph illustrates the distribution of probability scores for entities in the negative and positive classes. A clear separation between the classes indicates that the model can effectively distinguish between them, assigning low probability scores to the negative class and high probability scores to the positive class.
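The mechanics are simple; the sketch below (with made-up probability scores) shows how moving the cutoff changes the number of predicted positives without touching the model:

```python
import numpy as np

scores = np.array([0.05, 0.20, 0.45, 0.55, 0.80, 0.95])  # made-up probability scores

for threshold in (0.3, 0.5, 0.7):
    predicted_positive = int((scores >= threshold).sum())
    print(f"threshold={threshold}: {predicted_positive} predicted positives")
# A lower threshold predicts more positives (typically higher recall, lower precision);
# a higher threshold predicts fewer (typically higher precision, lower recall).
```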

Confusion Matrix

A confusion matrix is like a report card for your model: it tells you how well the model did in predicting the two classes. The confusion matrix has two main sections, Predicted as Negative and Predicted as Positive, and the threshold defines the split between them.

Within these two sections are four elements, determined by the scores the model assigned to the entities:

  • True '1': This is when the model correctly predicts that something is positive.

  • False '1': This is when the model incorrectly predicts that something is positive.

  • True '0': This is when the model correctly predicts that something is negative.

  • False '0': This is when the model incorrectly predicts that something is negative.

The confusion matrix shows how many times your model made each of these types of predictions. By looking at it, you can see how well your model is doing and whether it makes more mistakes with false positives or false negatives.
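If you want to reproduce the same four counts for your own labels and predictions, scikit-learn offers a one-liner (a generic sketch, not Pecan's code):

```python
from sklearn.metrics import confusion_matrix

y_test = [1, 0, 1, 1, 0, 0, 1, 0]   # actual labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # predictions at the chosen threshold

# Rows are actual classes, columns are predicted classes:
# [[true '0',  false '1'],
#  [false '0', true '1']]
print(confusion_matrix(y_test, y_pred))
```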

Column Importance

When your model is trained, it uses the columns from the Attribute tables to find common patterns and similarities of the Target population. The model assigns different weights to the columns according to the impact they had on predicting the Target.

The importance of each column is calculated by summing the importance of all the AI aggregations (also known as features) that were extracted from the column.
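In other words, the calculation is a simple group-and-sum; here is a sketch with hypothetical column and feature names:

```python
import pandas as pd

# Hypothetical per-feature importances; each feature (AI aggregation)
# originates from exactly one source column
features = pd.DataFrame({
    "source_column": ["total_spend", "total_spend", "last_login", "country"],
    "feature": ["sum_spend_30d", "avg_spend_7d", "days_since_login", "country_top_value"],
    "importance": [0.25, 0.15, 0.40, 0.20],
})

# Column importance = the sum of importances of all features extracted from it
print(features.groupby("source_column")["importance"].sum().sort_values(ascending=False))
```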

For a comprehensive explanation of the widget and how to interpret it, see Understanding Column importance.

Model Output Tab

Located in a separate tab, this table displays a sample of 1,000 predictions from your dataset, including:

  • EntityID & Marker

  • Actual value

  • Predicted value

  • Error (predicted value - actual value)

  • The 10 features contributing most to the prediction (shown when clicking a row).

You can download the full output table to a spreadsheet by clicking Save as CSV.
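If you prefer to work with the exported file programmatically, the table's structure looks roughly like this (hypothetical values; the error column is simply predicted minus actual):

```python
import pandas as pd

# Hypothetical sample of the output table
output = pd.DataFrame({
    "entity_id": [101, 102, 103],
    "marker": ["2024-01-01", "2024-01-01", "2024-01-01"],
    "actual": [1, 0, 1],
    "predicted": [0.85, 0.30, 0.40],   # probability scores
})
output["error"] = output["predicted"] - output["actual"]

output.to_csv("predictions.csv", index=False)  # the same idea as "Save as CSV"
```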

For more details, see this article: Understanding Explainability & Prediction Details.

Training Overview Tab

This tab allows you to dive deep into the model's logic and see how each value in your data affects the probability score.

Entities Over Time

Allows you to easily spot possible issues with the number of samples used in your model over time: overall quantity, trends and patterns, gaps, or drops.

For example, confirm that you haven't accidentally skipped any months, and that the entire period you intended to include is covered.
The graph also shows the split between the train and test sets, allowing you to see the number of samples in each set.
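A quick way to picture what this graph computes (a sketch with hypothetical file and column names):

```python
import pandas as pd

df = pd.read_csv("training_data.csv", parse_dates=["marker"])

# Count samples per month; gaps, drops, or missing months stand out immediately
counts = df.groupby(df["marker"].dt.to_period("M")).size()
print(counts)
```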

Target Over Time

Get insights into the stability and behavior of the label column in your prediction model over time to help you ensure it aligns with the data you're already familiar with.

For example, if your conversion rate is usually around 27%, confirm that the graph shows a similar rate throughout, and identify periods when it's higher or lower.
The graph also shows the split between the train and test sets, allowing you to ensure that the trends stay consistent between the two sets.
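The underlying computation is the monthly average of the binary label, which equals the positive-class rate; a sketch with hypothetical names:

```python
import pandas as pd

df = pd.read_csv("training_data.csv", parse_dates=["marker"])

# The mean of a 0/1 label per month is the positive-class rate over time
rate = df.groupby(df["marker"].dt.to_period("M"))["label"].mean()
print(rate)  # e.g., hovering near 0.27, with any unusual spikes or dips visible
```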

Top Features

Clicking on each feature loads a Feature Effect Graph (a.k.a. Partial Dependence Plot, or PDP) on the right side of the widget, displaying a graph based on SHAP values. This graph shows the effect of each feature and its values on your model’s predictions.

☝️ Remember:
ML models are very complex, and you cannot attribute an impact to a single feature or value in isolation, as each one works together with numerous other features to arrive at the final probability score.

[Figures: a PDP graph for a continuous variable, and a PDP graph for a categorical variable]

The graph shows the top 10 categories (or a histogram of values), along with their average, maximum, and minimum impact on the probability score.
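Pecan builds this graph from SHAP values; for intuition, scikit-learn's partial-dependence tooling produces a similar "feature effect" view on a toy model (a generic sketch, not Pecan's implementation):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Toy model standing in for a trained binary classifier
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier().fit(X, y)

# Average effect of feature 0's values on the predicted probability
PartialDependenceDisplay.from_estimator(model, X, features=[0])
```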

Attribute Tables

Get an insightful, tabular view of the analysis conducted on your data attributes, providing you with a deeper understanding of how your data is structured and utilized in model training.

Before diving into the details, it's crucial to remember that the analysis presented in this widget is based on your train dataset, which is about 80% of your entire dataset. This means the figures might appear smaller than anticipated, as they don't represent the full dataset.

The widget provides a comprehensive overview of each table used in your model’s training. Here's what you can discover at a glance:

  • Row and Column Count: Understand the size and complexity of your table with the total number of rows and columns.

  • Column Types: Get insights into the composition of your table with a count of date, category, and numeric columns.

  • Dropped Columns: See how many columns were not used in model training, along with the reasoning behind their exclusion.

  • Entity Row Distribution: Discover the range of rows per entity, revealing the relationship type (1:1 or 1:many) within your data, in the structure of [min]-[max].

For an in-depth understanding, you can expand each table to view specific details about its columns:

  • Column Name: The actual name of the column as it appears in your schema.

  • Original Type: The data type assigned to the column in your data warehouse (DWH), providing a glimpse into its original format.

  • Pecan Transformation: How Pecan interprets and utilizes each column for its feature engineering process. If a column is marked as "dropped," you’ll also see why it wasn’t used for training the model.

  • Unique Values: The count of distinct values within a column, reflecting its diversity.

  • Missing Values: The number of NULL or missing entries, crucial for understanding data completeness.
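You can reproduce a similar per-column profile for any table with a few lines of pandas (a generic sketch; the file name is hypothetical):

```python
import pandas as pd

df = pd.read_csv("attribute_table.csv")  # a hypothetical attribute table

# A per-column profile similar to the widget's expanded view
summary = pd.DataFrame({
    "original_type": df.dtypes.astype(str),
    "unique_values": df.nunique(),
    "missing_values": df.isna().sum(),
})
print(df.shape)   # (row count, column count)
print(summary)
```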
