The one-label training challenge

Ensuring balanced data sets is key to successful model training and accurate predictions

Written by Ori Sagi

When training a classification model, such as predicting customer churn, it's crucial to have a diverse set of data for testing. Imagine you're trying to identify patterns in customer behavior - specifically, who will 'churn' (leave your service) and who won’t. Your model needs examples of both outcomes to learn effectively.

But what happens if your data set contains examples of only one outcome - only customers who churned, or only customers who didn't? This scenario is like trying to understand the full story of a book by reading only half the pages.

The importance of two labels in test sets

Sometimes the single-label issue occurs only in the latest data you have. For example, if you're predicting whether someone will churn in the upcoming month, not enough time has passed for your most recent samples to show who actually churned - so they will all be labeled "not churned." This can affect the test set and prevent the model from completing its training.
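A quick way to diagnose this is to count labels per period in your core table and check whether the most recent periods contain only one label value. Here's a minimal SQL sketch; the table name (core_table) and column names (marker_date, churned) are placeholder assumptions - substitute your own schema:

```sql
-- Minimal sketch: label counts per month, newest period first.
-- core_table, marker_date, and churned are placeholder names.
-- DATE_TRUNC is PostgreSQL/Snowflake syntax; MySQL uses DATE_FORMAT.
SELECT
    DATE_TRUNC('month', marker_date) AS period,
    churned,
    COUNT(*) AS samples
FROM core_table
GROUP BY 1, 2
ORDER BY period DESC, churned;
```

If the newest periods show rows for only one churned value, the test set drawn from that window will be single-label.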

What is a test set?

Once a model is sent to train, Pecan automatically splits your data into train, validation, and test sets. The most recent period of your data is used as the test set, so your model is evaluated on the freshest events and you get a realistic sense of how well it performs. Read more about the split here.

For a robust and accurate model, your test set must include a mix of both outcomes: customers who churned and those who didn't. Without both labels, there is no way to measure how well the model distinguishes between the two groups.

Solutions for a one-label data set

If you find yourself with a test set that only contains one type of label, here are three strategies to consider:

  1. Adjust Your Predictive Question: Sometimes, the formulation of the predictive question can skew the data toward one label. For example, if you're predicting churn based on a very short time horizon (one day after the moment of prediction), you might not capture enough variation. Extending this horizon can bring more balance to your test set.

  2. Data Audit: Take a closer look at your data. Ensure that both outcomes - churned and didn't churn - are adequately represented; a per-period label count like the query shown earlier can reveal where one label is missing. A balanced view gives your model a fair chance to learn effectively.

  3. Randomize Your Core Table: If your use case has no time element, you can add an ORDER BY RAND() clause to your core_table SQL to shuffle the row order, so that the last 10% of the rows (which are used as the test set) contains both labels - see the sketch after this list.
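
For illustration, a shuffled core table query might look like the sketch below. The table and column names here are placeholder assumptions, not part of your actual schema, and RAND() is the MySQL spelling - other dialects use RANDOM() or an equivalent:

```sql
-- Hypothetical core table query with a randomized row order.
-- customers, customer_id, and churned are placeholder names;
-- substitute the tables and columns from your own schema.
SELECT
    customer_id,  -- entity identifier
    churned       -- binary label (1 = churned, 0 = didn't churn)
FROM customers
ORDER BY RAND();  -- shuffle rows; use RANDOM() in PostgreSQL
```

With the rows shuffled, the chronological tail that becomes the test set is effectively a random sample, so both labels should appear in it.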

Conclusion: A balanced approach for better predictions

A well-balanced data set is critical for training effective classification models. By ensuring that both possible outcomes are represented, you're setting the stage for more accurate and reliable predictions. Remember, it's about teaching your model to understand the full spectrum of customer behaviors.


At Pecan, we're committed to empowering you with the tools and knowledge to unlock the full potential of your data. If you have any questions or need assistance, our team is always available on chat to help. Happy modeling!

