Underfitting is the inverse of overfitting: your model is unable to make accurate predictions because it was trained on an insufficient amount of data. It's like trying to solve a puzzle without all the pieces; you won't be able to see the full picture.
Because the training dataset contains too little data, the model cannot detect meaningful patterns in it or learn how those patterns affect the outcome it is trying to predict.
How to identify underfitting in Pecan
Underfitting shows up as low metrics (Precision & Recall) that don't provide a significant lift over a benchmark model or over a random guess:
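To make the check concrete, here is a minimal sketch of comparing a model's Precision and Recall against a random-guess benchmark. The labels and predictions below are synthetic examples, not real Pecan output:

```python
# Hypothetical example: spotting underfitting by comparing model metrics
# to a random-guess benchmark. Labels/predictions are made up for illustration.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 20% of rows are positive, so a random guess scores ~0.20 precision.
y_true  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0] * 10
y_model = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0] * 10  # an underfit model's predictions

precision, recall = precision_recall(y_true, y_model)
baseline = sum(y_true) / len(y_true)  # random-guess precision

# A lift close to 1x over the random baseline suggests underfitting.
lift = precision / baseline
print(f"precision={precision:.2f}, recall={recall:.2f}, lift={lift:.2f}x")
```

Here the model's precision (0.25) barely beats guessing at random (0.20), and recall is only 0.50, which is exactly the pattern the screenshot above illustrates.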
In Pecan, underfitting can be resolved by adding more training data to your model:
Add more rows to the data you trained your model on (for example, by covering a longer time period)
Add more attributes to enrich your existing rows
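The effect of adding more rows can be seen in a toy sketch (this is not Pecan's internals, just an illustration). The model below learns a single cutoff on one feature; the true rule is "label 1 when x > 0.6". Trained on only five rows, one of which is mislabeled, it lands far from the true cutoff; trained on many more rows, it recovers it:

```python
# Toy illustration (not Pecan's internals): a one-feature threshold model.
# With too few rows, one noisy label drags the learned cutoff far from the
# true rule (label 1 when x > 0.6); with more rows the cutoff is recovered.

def accuracy(rows, t):
    return sum(1 for x, y in rows if (x > t) == (y == 1)) / len(rows)

def fit_threshold(rows):
    """Pick the candidate cutoff that best separates the training rows."""
    candidates = [i / 100 for i in range(101)]
    return max(candidates, key=lambda t: accuracy(rows, t))

# Five rows; (0.3, 1) is a noisy label that pulls the cutoff down to 0.2.
few_rows = [(0.1, 0), (0.2, 0), (0.3, 1), (0.7, 1), (0.9, 1)]

# Five hundred clean rows drawn evenly from the same rule.
many_rows = [(i / 500, 1 if i / 500 > 0.6 else 0) for i in range(500)]

# Holdout data for scoring both learned cutoffs.
test = [(i / 1000, 1 if i / 1000 > 0.6 else 0) for i in range(1000)]

t_few = fit_threshold(few_rows)    # underfit: far from the true cutoff
t_many = fit_threshold(many_rows)  # recovers the true cutoff of 0.6
print(accuracy(test, t_few), accuracy(test, t_many))  # the many-row model scores higher
```

The same intuition applies to adding attributes: richer rows give the training process more signal to separate the classes with.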
Please note: since Pecan already trains several different models and chooses the best-performing one, simply re-training your model without changing its inputs won't be effective.