Interpreting Explanations for Classification Models
"Why Should I Trust You?": Explaining the Predictions of Any Classifier, Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, 2016Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16) (ACM)DOI: 10.1145/2939672.2939778 - The foundational paper introducing LIME, outlining its theoretical basis and algorithm for local, model-agnostic explanations.
A Unified Approach to Interpreting Model Predictions, Scott M. Lundberg and Su-In Lee, 2017. Advances in Neural Information Processing Systems 30 (NIPS 2017), Curran Associates, Inc. The original paper that introduced SHAP values, unifying several existing explanation methods and grounding them in Shapley values from game theory.
SHAP Documentation, Scott Lundberg and SHAP contributors, 2024. Official documentation for the SHAP Python library, offering practical implementation details, examples, and descriptions of its various visualization types; a short usage sketch follows below.
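As a quick orientation before turning to the documentation itself, here is a minimal sketch of explaining a classifier with the SHAP library. The dataset (shap.datasets.adult), the XGBoost model, and the choice of plots are illustrative assumptions, not steps prescribed by the sources above.

```python
# A minimal sketch of using the SHAP library; dataset, model, and plot
# choices here are illustrative assumptions, not prescriptions.
import shap
import xgboost

# Load a bundled tabular dataset and fit a simple gradient-boosted classifier.
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Global view: which features drive predictions across the dataset.
shap.plots.beeswarm(shap_values)

# Local view: how each feature pushed one prediction up or down.
shap.plots.waterfall(shap_values[0])
```

TreeExplainer is used here because it is efficient for tree models; for arbitrary black-box classifiers, shap.KernelExplainer plays the model-agnostic role analogous to LIME in the Ribeiro et al. paper above.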