Evaluating a Model
Understand your model's performance and metrics
Dashboard Overview For Binary Models
Pecan's dashboard offers accuracy insights & customization, empowering you to optimize model performance & surpass benchmarks.
Evaluation metrics in binary models
Binary classification models use a confusion matrix to calculate precision & recall rates, determining model effectiveness & accuracy.
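As a quick illustration of the idea behind that article, here is a minimal sketch of how precision and recall fall out of confusion-matrix counts. The counts are made up for the example; this is the standard formula, not Pecan's code.

```python
# Precision and recall from illustrative confusion-matrix counts.
def precision_recall(tp, fp, fn):
    """Compute precision and recall from true/false positive and false negative counts."""
    precision = tp / (tp + fp)  # of all positive predictions, how many were correct
    recall = tp / (tp + fn)     # of all actual positives, how many were caught
    return precision, recall

p, r = precision_recall(tp=80, fp=20, fn=40)
print(round(p, 2), round(r, 2))  # 0.8 0.67
```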
Understanding Probability Score
Binary classification in ML: models predict entity classes with probability scores, and thresholds customize predictions for business needs.
Dashboard Overview For Regression Models
Explore your Pecan AI model's insights with a dashboard: track performance, compare predictions, analyze feature importance, and more!
Model performance metrics for regression models
Master regression model evaluation with Pecan's diverse metrics: MdAPE, MAPE, WMAPE, WMPE, R², and more. Precision in every prediction!
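To make two of the listed metrics concrete, here are illustrative pure-Python versions of MAPE and WMAPE using their standard definitions. The data is invented for the example; this is not Pecan's implementation.

```python
# Standard definitions of two regression error metrics, on toy data.
def mape(actual, predicted):
    """Mean Absolute Percentage Error: average of |error| / |actual|."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def wmape(actual, predicted):
    """Weighted MAPE: total |error| over total |actual| (robust to tiny actuals)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / sum(abs(a) for a in actual)

actual, predicted = [100, 50, 10], [110, 45, 14]
print(round(mape(actual, predicted), 3))   # 0.2
print(round(wmape(actual, predicted), 3))  # 0.119
```

Note how the small actual value (10) inflates MAPE far more than WMAPE, which is why weighted variants are often preferred for skewed targets.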
Understanding Pecan’s Benchmarks
Benchmarks evaluate ML models by comparing them to rule-based models to understand their performance and communicate their value to stakeholders.
What is data leakage and how can you prevent it?
Avoid data leakage in ML models: use only pre-prediction data to prevent "future peeking" and ensure valid, accurate outcomes.
What is overfitting?
Overfitting is when a model memorizes training data instead of learning patterns. Resolve it by reducing attributes and adding more data.
What is underfitting?
Underfitting occurs when a model fails to capture the underlying patterns in data, leading to poor performance on both training and new data.
How do you know if your model is good?
To determine your model's performance, compare its lift against random-guess and benchmark models. You can also run A/B tests and compare outcomes.
How do you determine if your model is healthy?
Think of Pecan's health checks as a model's doctor visit. From data diets to overfitting sniffles, we ensure your model is in tip-top shape!
Understanding threshold logic
Thresholds in binary classification models balance precision & recall, affecting model performance. Adjust based on business needs & costs.
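A toy sketch of the threshold idea: the same probability scores yield different positive/negative labels as the threshold moves, trading recall for precision. The scores below are invented for illustration.

```python
# Classify probability scores at different thresholds.
scores = [0.15, 0.40, 0.55, 0.70, 0.92]

def classify(scores, threshold):
    """Label each score 1 (positive) if it meets the threshold, else 0."""
    return [1 if s >= threshold else 0 for s in scores]

print(classify(scores, 0.5))   # [0, 0, 1, 1, 1]
print(classify(scores, 0.75))  # [0, 0, 0, 0, 1] - stricter: fewer positives, higher precision
```

Raising the threshold typically suits use cases where false positives are costly; lowering it suits cases where missing a positive is worse.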
Understanding Column importance
A peek into the "black box" of a model: the key to unlocking model insights and optimizing predictions by weighting features' impact on outcomes.
What is Label stability?
Tackle label drift in ML models with Pecan: detect shifts, adapt training, and monitor for precision in ever-evolving data landscapes.
Maintaining Feature Balance in Machine Learning Models
Feature (column) importance in ML models gauges each predictor's significance; keeping it balanced prevents over-reliance on any single feature.
Understanding Explainability & Prediction Details
Entity-level explainability is a great tool for understanding and interpreting ML models, improving them, and even helping find errors.
SHAP values
SHAP values quantify feature impact in ML models, revealing key drivers in predictions and aiding in data-driven decision-making.
Model performance metrics for binary models
Learn about binary model metrics: Base Rate, Precision, Detection, AUC, and LogLoss guide accurate, balanced predictions for distinct classes.
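Of the metrics listed, log loss is the least intuitive, so here is an illustrative standard-library implementation of its textbook definition on toy labels and probabilities; this is not Pecan's code.

```python
import math

# Log loss (binary cross-entropy): average negative log-likelihood of true labels.
def log_loss(y_true, y_prob, eps=1e-15):
    """Penalizes confident wrong predictions far more than unconfident ones."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Confident, correct predictions score near 0; confident misses blow up.
print(round(log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 3))  # 0.145
```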