For machine learning models that continuously generate predictions on new data, the stability of the input data over time is critical.
As highlighted in our previous article, "Splitting the data into training, validation, and test sets," Pecan's platform allocates 80% of the data for training, 10% for validation, and 10% for testing. The training set forms the foundation from which the model learns patterns and relationships between variables in order to predict the outcome of interest (the "label").
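As a rough illustration of how such a split works (a minimal sketch, not Pecan's implementation; the DataFrame and the `event_date` column name are assumptions):

```python
import pandas as pd

def chronological_split(df: pd.DataFrame, date_col: str = "event_date"):
    """Split a DataFrame 80/10/10 into train/validation/test sets by time.

    Splitting chronologically (rather than randomly) mirrors production,
    where the model learns from the past and predicts the future.
    """
    df = df.sort_values(date_col).reset_index(drop=True)
    n = len(df)
    train = df.iloc[: int(n * 0.8)]
    validation = df.iloc[int(n * 0.8) : int(n * 0.9)]
    test = df.iloc[int(n * 0.9) :]
    return train, validation, test
```

Splitting by time rather than at random matters here, because it exposes exactly the problem discussed next: if the label's behavior changes between the training window and newer data, performance drops.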
When data changes over time, the model may struggle to apply the insights it learned from the training set to new, unseen data. This is why it is essential that the data, and particularly the label, remain stable over time. When the label's distribution changes, the phenomenon is known as "label drift," and it calls for specific remedial actions.
Label drift can appear in different forms (illustrated in the sketch after this list):
Gradual Drift: A slow, progressive shift over time.
Sudden Drift: An abrupt, immediate change.
Recurrent Drift: Periodic alterations that recur at specific intervals.
Blips: Temporary, brief changes that revert quickly.
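To make these patterns concrete, the purely illustrative sketch below generates a synthetic weekly label mean for each form of drift (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
weeks = np.arange(104)                        # two years of weekly label means
noise = rng.normal(0, 0.02, size=weeks.size)

# Gradual drift: the label mean creeps upward a little every week.
gradual = 0.30 + 0.002 * weeks + noise

# Sudden drift: the label mean jumps at week 52 and stays there.
sudden = np.where(weeks < 52, 0.30, 0.45) + noise

# Recurrent drift: the label mean oscillates with a yearly season.
recurrent = 0.30 + 0.10 * np.sin(2 * np.pi * weeks / 52) + noise

# Blip: a short spike around week 50 that quickly reverts.
blip = 0.30 + 0.15 * ((weeks >= 50) & (weeks < 53)) + noise
```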
Label drift often degrades model performance: a model trained on older data may fail to predict accurately when applied to new data. It typically occurs in dynamic environments or in situations where trends shift frequently.
How to Detect and Fix Label Drift?
At Pecan, we employ robust methods to detect and address label drift. Our primary approach is to look for a correlation between time and the label; if one exists, it indicates that the label's behavior varies across different periods.
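As a sketch of the general idea (not Pecan's actual detection code; the column names are assumptions), one simple check is a Spearman rank correlation between a period index and the label's mean per period:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def label_time_correlation(df: pd.DataFrame,
                           date_col: str = "event_date",
                           label_col: str = "label"):
    """Spearman correlation between time and the label's weekly mean.

    A value near 0 suggests a stable label; a strong positive or
    negative value suggests the label's behavior shifts over time.
    """
    weekly = (
        df.assign(_dt=pd.to_datetime(df[date_col]))
          .set_index("_dt")[label_col]
          .resample("W")
          .mean()
          .dropna()
    )
    corr, p_value = spearmanr(np.arange(len(weekly)), weekly.to_numpy())
    return corr, p_value
```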
If a correlation between the label and time is found, we will automatically provide a recommended starting date for training the model. This date is carefully chosen as the point where the label begins to stabilize.
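Pecan picks that date automatically; as a purely hypothetical stand-in for the idea, the sketch below returns the earliest week after which the label's rolling mean stays within a tolerance band of its most recent level (the window and tolerance are made-up parameters):

```python
import pandas as pd

def suggested_training_start(weekly_label_mean: pd.Series,
                             window: int = 8,
                             tolerance: float = 0.05):
    """Earliest timestamp after which the label mean looks stable.

    'Stable' here means the rolling mean stays within `tolerance`
    (relative) of the most recent rolling mean from that point on.
    """
    rolling = weekly_label_mean.rolling(window).mean().dropna()
    recent = rolling.iloc[-1]
    stable = (rolling - recent).abs() <= tolerance * abs(recent)
    unstable_points = stable.index[~stable.to_numpy()]
    if len(unstable_points) == 0:
        return rolling.index[0]              # stable throughout
    last_unstable = unstable_points[-1]
    later = stable.index[stable.index > last_unstable]
    return later[0] if len(later) else None  # None: never stabilizes
```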
Through proactive monitoring and robust strategies, it's possible to manage label drift effectively, ensuring your model remains accurate and reliable in its predictions.
Here are some suggested actions:
Sometimes Less Is More: At times, it may be better to train the model on less data, using only the period from the point where the label starts to stabilize. Pecan suggests this cutoff point automatically (the sketch above illustrates the idea).
Adapting to Time-Based Changes: If your data is time-based and undergoes regular changes, consider updating your model more frequently to adapt to these shifts.
Regular Monitoring: Keep a close eye on your model's performance and the data it's trained on; early detection of drift allows timely intervention and preserves performance (see the sketch after this list).
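As one illustrative way to automate that monitoring (again a sketch, not Pecan's internal tooling), you could periodically compare the recent label distribution to the training-period distribution with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

def label_drift_alert(train_labels: np.ndarray,
                      recent_labels: np.ndarray,
                      alpha: float = 0.01) -> bool:
    """Flag drift when recent labels no longer match the training labels.

    Uses a two-sample KS test; `alpha` is an illustrative threshold,
    not a universal recommendation.
    """
    _statistic, p_value = ks_2samp(train_labels, recent_labels)
    return p_value < alpha
```

For a binary label, simply tracking the positive rate per period is often just as informative.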
However, how one addresses label drift can vary, and not all strategies may apply to every model.
If you suspect your model may be experiencing label drift, don't hesitate to contact the Pecan team. We're always here to help with further investigations and explorations, ensuring your model performs at its best 💪