
Explainable AI: The Power of Interpreting ML Models

In this project, we will learn how to explain machine learning models globally and locally and learn different methods to determine the most important predictors of an outcome. These methods can explain why a model made its predictions and guide us in how to change undesirable outcomes.

You will learn to:

Explain machine learning models using global methods

Explain machine learning models using a local explanation method, SHAP (SHapley Additive exPlanations)

Explain a logistic regression model using coefficients

Explain a tree-based model using feature importances

Explain a neural network using permutation importances


Machine Learning

Data Science

Explainable AI


Hands-on experience with Python and Jupyter Notebook

Basic understanding of how to fit scikit-learn models to data

Hands-on experience with pandas







Project Description

Explainable AI, or XAI, aims to interpret machine learning models to uncover the most influential predictors of an outcome, ultimately enhancing transparency in predictive analytics. The primary goal of this project is to employ explainable machine learning techniques to explain the decision-making processes of three distinct models: logistic regression, random forest, and neural network. These three models were chosen to show three distinct ways of explaining a model: intrinsic (coefficient-based) explanations, feature importances, and permutation importances. Working with the UCI Census Income dataset, we aim to predict whether an individual earns more than $50k/year.
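As a rough sketch of the three global approaches, the snippet below fits the three model types on a small synthetic stand-in for the census data (the feature names and dataset here are placeholders for illustration, not the real UCI columns) and reads off each model's global explanation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical feature names standing in for the UCI Census Income columns.
feature_names = ["age", "education_num", "hours_per_week", "capital_gain"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Intrinsic explanation: logistic regression coefficients.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
coefs = dict(zip(feature_names, logreg.coef_[0]))

# Feature importances: impurity-based importances of a random forest.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
importances = dict(zip(feature_names, forest.feature_importances_))

# Permutation importances: model-agnostic, so they work for a neural
# network, which has no directly interpretable parameters.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
perm = permutation_importance(mlp, X_test, y_test,
                              n_repeats=10, random_state=0)
perm_importances = dict(zip(feature_names, perm.importances_mean))

print(coefs)
print(importances)
print(perm_importances)
```

Note that coefficients come for free from the model's parameters, random forest importances are a byproduct of training, and permutation importances require re-scoring the model on held-out data with each feature shuffled in turn.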

This project also shows the difference between global and local explanation methods. Whereas global explanations identify the most important predictors on average across the entire dataset, local methods aim to explain why a model made a particular prediction for an individual. Local methods are valuable in cases such as a client being denied a loan: the company can explain why the client was rejected, and the client then knows specifically what to change to improve their chances of obtaining a loan. For local explanations, we focus on applying SHAP (SHapley Additive exPlanations).
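To make the idea behind SHAP concrete, here is a minimal brute-force computation of exact Shapley values for a tiny hand-specified linear model. The weights, background data, and instance below are made up purely for illustration; the shap library computes the same attributions far more efficiently for real models:

```python
from itertools import combinations
from math import factorial

import numpy as np

# Tiny linear "model": a predicted income score from three features
# (hypothetical weights, standing in for a fitted model).
weights = np.array([0.5, -0.2, 0.8])

def model(X):
    return X @ weights

background = np.array([[0.0, 0.0, 0.0],   # reference data used to
                       [1.0, 1.0, 1.0]])  # simulate "missing" features
x = np.array([2.0, 1.0, 3.0])             # instance to explain
n = len(x)

def value(S):
    """Expected model output when only features in S take x's values."""
    Xs = background.copy()
    Xs[:, list(S)] = x[list(S)]
    return model(Xs).mean()

# Exact Shapley values: for each feature, a weighted average of its
# marginal contribution over all coalitions of the other features.
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi[i] += w * (value(S + (i,)) - value(S))

# The attributions sum to the gap between this instance's prediction
# and the average prediction over the background data.
print(phi, model(x[None, :])[0] - value(()))
```

Because the attributions sum to the difference between the model's prediction for this instance and the average prediction, each value can be read as that feature's contribution to pushing this individual's prediction away from the average, which is exactly the kind of per-client explanation described above.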

Project Tasks



Task 0: Getting Started

Task 1: Import Libraries

Task 2: Prepare the Dataset


Global Explanations

Task 3: Explain a Logistic Regression Model using Coefficients

Task 4: Explain a Random Forest Model using Feature Importances

Task 5: Explain a Neural Network using Permutation Importances


Local Explanations

Task 6: Local Explanations using SHAP