Monitoring Explainability and Interpretability Over Time
A Unified Approach to Interpreting Model Predictions, Scott M. Lundberg and Su-In Lee, 2017. Advances in Neural Information Processing Systems, Vol. 30 (Curran Associates, Inc.) - Presents the foundational Shapley Additive Explanations (SHAP) method, a widely used technique for interpreting model predictions by attributing each prediction to the contributions of individual features; a usage sketch follows this list.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier, Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin, 2016Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM)DOI: 10.1145/2939672.2939778 - Introduces Local Interpretable Model-agnostic Explanations (LIME), a method for explaining individual predictions of any classifier in an interpretable and faithful manner.