Understanding Explainability & Prediction Details

Entity-level explainability is a great tool for understanding and interpreting ML models, improving them, and even finding errors.

Written by Linor Ben-El

In the world of machine learning, transparency and interpretability are crucial.
Understanding how decisions are made, and which factors contribute to them, is a key component in building trust in your model.

In this article, we will explore why explainability is valuable, explain how Pecan calculates explainability (using SHAP values), and offer guidance on interpreting this valuable feature.

Interpreting & Leveraging Entity-Level Explainability

In Pecan, once the model is trained, you can explore a sample of the test set under the "Model Output" tab. In the table, each prediction record includes an explanation of its top 10 contributing factors. These factors are based on SHAP values (read more about SHAP values here).
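
To illustrate the idea, here is a minimal sketch of how per-record contributing factors can be derived with the open-source shap library. This is an illustration on a synthetic model, not Pecan's internal implementation, and all names (the model, the data, the feature columns) are hypothetical.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Train a simple stand-in model on synthetic data (hypothetical, for illustration).
X, y = make_classification(n_samples=500, n_features=15, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(15)])
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP values: one row per prediction record, one column per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For a single record, rank features by the magnitude of their SHAP value
# and keep the 10 most contributing factors.
record = pd.Series(shap_values[0], index=X.columns)
top_10 = record.loc[record.abs().sort_values(ascending=False).index[:10]]
print(top_10)
```

A positive SHAP value pushes the record's prediction higher, a negative one pushes it lower, so ranking by absolute value surfaces the most influential factors in either direction.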

To effectively interpret entity-level explainability, compare the features and values across multiple entities to identify patterns and consistencies. This analysis helps you understand which features have a consistent impact across entities and which are entity-specific.
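
One way to make that comparison concrete is to aggregate SHAP values across entities. The sketch below fabricates a random SHAP matrix purely for illustration; in practice you would use the `shap_values` computed in the previous example.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for a real SHAP matrix: one row per entity,
# one column per feature.
rng = np.random.default_rng(seed=0)
features = [f"feature_{i}" for i in range(8)]
shap_values = rng.normal(scale=[2, 1.8, 1.5, 1, 0.5, 0.3, 0.2, 0.1],
                         size=(200, 8))
shap_df = pd.DataFrame(shap_values, columns=features)

# A high mean |SHAP| means the feature drives predictions for most entities;
# a high standard deviation relative to that mean suggests its impact is
# entity-specific rather than consistent.
summary = pd.DataFrame({
    "mean_abs_shap": shap_df.abs().mean(),
    "std_shap": shap_df.std(),
}).sort_values("mean_abs_shap", ascending=False)
print(summary)
```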

Entity explainability can also be used as a feedback loop: utilize entity-level explainability as a feedback mechanism to iterate on model development. By identifying influential features and assessing whether they align with domain expertise, you can refine the model's behavior and performance.

The Value of Explainability

Explainability serves as a gateway to understanding the inner workings of machine learning models. By providing insights into the decision-making process, explainability offers numerous benefits:

Transparency

Explainability helps you understand how a model arrived at a particular decision or prediction. This transparency builds trust and confidence, allowing you to validate the model's outputs.

Error Analysis

Understanding the factors influencing model predictions aids error analysis. By identifying cases where the model performs poorly, you can focus on improving its performance or addressing potential biases.
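
As a hedged sketch of what SHAP-based error analysis can look like: find the records the model got most wrong, then inspect the features that pushed each prediction astray. All names here (the model, the split, the features) are hypothetical stand-ins, not Pecan's workflow.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical model and data for illustration.
X, y = make_classification(n_samples=600, n_features=10, random_state=1)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(10)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

# Find the records with the largest gap between predicted probability
# and the true label.
proba = model.predict_proba(X_test)[:, 1]
errors = np.abs(proba - y_test)
worst = np.argsort(errors)[-5:]  # the 5 worst predictions

# Explain those records and list the features that contributed most.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
for i in worst:
    contributions = pd.Series(shap_values[i], index=X_test.columns)
    top = contributions.loc[contributions.abs()
                            .sort_values(ascending=False).index[:3]]
    print(f"record {i}: error={errors[i]:.2f}")
    print(top, end="\n\n")
```

If the same feature keeps appearing among the top contributors of poorly predicted records, that is a natural place to check for data quality issues or bias.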

Model Improvement

Explainability highlights the strengths and weaknesses of a model.
By analyzing the contributing features, you can identify areas for improvement, refine the model's architecture, or adjust the training data to enhance overall performance.

Conclusion

Entity-level explainability is a powerful tool for understanding and interpreting machine learning models. It helps you see how the model works, find errors, and improve it.

With SHAP values, you can assess feature importance fairly and consistently. By understanding these values, you can make better decisions, validate model outputs, and improve the model, while also building trust and accountability in its predictions.

