
Glassbox machine learning for high-stakes decision making and responsible AI.
InterpretML is an open-source Python library developed by Microsoft Research that provides deep transparency into machine learning models. As of 2026, it remains a cornerstone of the Responsible AI ecosystem, bridging the gap between high-performance predictive modeling and regulatory compliance. The framework's centerpiece is the Explainable Boosting Machine (EBM), a 'glassbox' model built on generalized additive models (GAMs) with interaction terms. Unlike black-box models such as Random Forests or XGBoost, EBMs let users see the exact contribution of every feature and interaction to a prediction, with little to no loss in accuracy. Beyond EBMs, InterpretML serves as a unified interface to popular interpretability techniques such as SHAP, LIME, and Morris Sensitivity Analysis. Its architecture integrates seamlessly with the scikit-learn ecosystem, making it an essential tool for high-accountability sectors such as fintech, healthcare, and insurance. By providing both global and local explanations, it helps data scientists detect bias, debug model behavior, and build trust with non-technical stakeholders in complex production environments.
A tree-based generalized additive model with automatic interaction detection (GA2M).
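A minimal sketch of the standard EBM workflow, assuming a hypothetical loans.csv file with an 'approved' label; the path and column names are illustrative:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Hypothetical tabular dataset; replace with your own features and label.
df = pd.read_csv("loans.csv")
X = df.drop(columns=["approved"])
y = df["approved"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# EBMs follow the scikit-learn fit/predict convention.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global view: the learned shape function and importance of every term.
show(ebm.explain_global())

# Local view: the exact additive contribution of each term to individual predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

Because the model is additive, the per-term scores in a local explanation sum (together with the intercept) to the model's raw score for that row, which is what makes the prediction fully auditable.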
An integrated Plotly-based UI for exploring global and local explanations across multiple models.
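As a sketch of multi-model comparison, the same show() entry point can be given a list of explanations; this continues from the EBM snippet above, adds interpret's glassbox LogisticRegression as a second model, and assumes numeric features (confirm list support against the installed release):

```python
from interpret import show
from interpret.glassbox import LogisticRegression

# A second glassbox model trained on the same split as the EBM above.
lr = LogisticRegression()
lr.fit(X_train, y_train)

# Passing a list of explanations opens one dashboard with a selector,
# making it easy to compare models side by side.
show([
    ebm.explain_global(name="EBM"),
    lr.explain_global(name="Logistic Regression"),
    ebm.explain_local(X_test[:5], y_test[:5], name="EBM (local)"),
])
```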
Uses the FAST algorithm to detect and include pairwise interaction terms f(x_i, x_j) in the model.
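The number of detected pairs is controlled by the `interactions` constructor argument; a sketch, with the `term_names_` attribute assumed from recent releases:

```python
from interpret.glassbox import ExplainableBoostingClassifier

# interactions=10 keeps the 10 strongest pairs found by FAST;
# interactions=0 disables pairwise terms, leaving a pure univariate GAM.
ebm_pairs = ExplainableBoostingClassifier(interactions=10, random_state=0)
ebm_pairs.fit(X_train, y_train)

# Interaction terms appear alongside main effects in the fitted model.
print(ebm_pairs.term_names_)   # e.g. ['age', 'income', ..., 'age & income']
```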
Allows developers to manually edit learned feature contribution curves that encode bias or incorrect logic.
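A rough sketch of one such edit, assuming the `term_names_` and `term_scores_` attributes exposed by recent releases; zeroing a term is only one possible intervention, and the feature name is hypothetical:

```python
import numpy as np
from interpret import show

# Locate the shape function for a feature whose learned curve was judged problematic.
idx = ebm.term_names_.index("age")

# Remove that term's learned contribution; predictions no longer depend on it.
ebm.term_scores_[idx] = np.zeros_like(ebm.term_scores_[idx])

# Re-inspect the global explanation to confirm the edit took effect.
show(ebm.explain_global(name="EBM (edited)"))
```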
A standardized wrapper for any scikit-learn compatible model, enabling black-box explainers such as LIME and SHAP's KernelExplainer.
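A sketch of the wrapper pattern with a scikit-learn Random Forest; constructor arguments have shifted slightly across releases, so treat the positional form below as an assumption to check against your version:

```python
from sklearn.ensemble import RandomForestClassifier
from interpret.blackbox import LimeTabular, ShapKernel
from interpret import show

# Any scikit-learn compatible estimator can be explained post hoc.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

# Both explainers expose the same explain_local() interface.
lime = LimeTabular(rf, X_train)
shap = ShapKernel(rf, X_train.sample(100, random_state=0))  # background sample keeps SHAP tractable

show(lime.explain_local(X_test[:5], y_test[:5]))
show(shap.explain_local(X_test[:5], y_test[:5]))
```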
Implementation of Morris Sensitivity Analysis to determine which inputs most significantly affect output variance.
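A sketch reusing the Random Forest above; as with the other black-box wrappers, the constructor form is an assumption based on recent documentation:

```python
from interpret.blackbox import MorrisSensitivity
from interpret import show

# Morris screening ranks inputs by how strongly perturbing them moves the output.
msa = MorrisSensitivity(rf, X_train)
show(msa.explain_global(name="Morris Sensitivity"))
```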
Integration with DP-EBM for training models with differential privacy guarantees.
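A sketch of the differentially private variant; the epsilon/delta budget shown is illustrative, not a recommendation:

```python
from interpret.privacy import DPExplainableBoostingClassifier
from interpret import show

# Noise calibrated to the (epsilon, delta) budget is added during training,
# so the released model and explanations derived from it carry the DP guarantee.
dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-6)
dp_ebm.fit(X_train, y_train)

show(dp_ebm.explain_global(name="DP-EBM"))
```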
Banks need to prove that loan denials are not based on protected classes like race or gender.
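One way to support that kind of audit is to rank term importances and flag anything touching a protected attribute; this sketch assumes the fitted ebm from earlier, hypothetical column names, and the term_importances() helper from recent releases:

```python
# Rank every term by its overall importance and flag potential protected attributes.
protected = ["gender", "race"]          # hypothetical column names
importances = dict(zip(ebm.term_names_, ebm.term_importances()))

for name, score in sorted(importances.items(), key=lambda kv: -kv[1]):
    flag = "  <-- review" if any(p in name.lower() for p in protected) else ""
    print(f"{name:30s} {score:.4f}{flag}")
```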
Doctors need to understand why an AI system has flagged a patient as high-risk for a condition.
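For a single flagged patient, the local explanation can be reduced to the handful of terms driving the score; this sketch assumes a hypothetical one-row DataFrame `patient_row` and the dictionary layout returned by recent releases:

```python
# Explain one prediction and list the strongest signed contributions.
local_exp = ebm.explain_local(patient_row)
contrib = local_exp.data(0)   # per-term names and additive scores for that row

top = sorted(zip(contrib["names"], contrib["scores"]),
             key=lambda kv: -abs(kv[1]))[:5]
for name, score in top:
    print(f"{name:25s} {score:+.3f}")
```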
Data scientists need to find out why a model is performing poorly on a specific segment of data.
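A sketch of segment-level debugging: compare a metric on the full test set against the weak slice, then drill into local explanations for that slice (the 'region' filter is hypothetical):

```python
from sklearn.metrics import roc_auc_score
from interpret import show

mask = X_test["region"] == "EU"   # hypothetical segment definition

print("overall AUC:", roc_auc_score(y_test, ebm.predict_proba(X_test)[:, 1]))
print("segment AUC:", roc_auc_score(y_test[mask], ebm.predict_proba(X_test[mask])[:, 1]))

# Inspect how the model reasons about the weak segment row by row.
show(ebm.explain_local(X_test[mask], y_test[mask]))
```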