Case Study: Getting Started with Model Explainability

Learn to build regression and decision tree models to predict house prices and explain the results.

Imagine that a machine learning model is like a really smart assistant making predictions for our business, such as forecasting sales or predicting customer behavior. Now, when this assistant explains its predictions while making them, it’s giving us intrinsic explanations. It’s like having a conversation in real time, and the assistant is telling us why it thinks a certain outcome will happen based on the data it’s analyzing.

On the other hand, if our assistant first makes a prediction and then explains why it made that prediction afterward, we call those post hoc explanations. It’s like our assistant is reflecting on its decision and providing insights into it.

Both types of explanations are valuable for our business. Intrinsic explanations help us understand the model's reasoning in the moment, which is crucial for quick decision-making. Post hoc explanations offer a deeper dive, allowing us to review and learn from past predictions to improve strategies and decision-making in the future.

A linear regression or decision tree model is intrinsically explainable, whereas a neural network-based model cannot be explained on its own and requires post hoc techniques.

These models are less complex and easier to understand. However, in many practical scenarios, they struggle to capture the complex patterns in real data.
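To make "intrinsically explainable" concrete, here is a minimal sketch, assuming scikit-learn and a tiny synthetic "house" dataset (the feature names `area` and `bedrooms` are illustrative placeholders, not the lesson's actual data). The fitted model itself is the explanation: a linear regression exposes one coefficient per feature, and a decision tree exposes human-readable if/then rules.

```python
# A minimal sketch (assuming scikit-learn) of intrinsic explainability:
# the fitted model itself serves as the explanation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
# Synthetic data: columns are area (m^2) and number of bedrooms.
X = rng.uniform([50, 1], [250, 5], size=(100, 2))
# Illustrative price: linear in both features, plus noise.
y = 1000 * X[:, 0] + 5000 * X[:, 1] + rng.normal(0, 2000, 100)

linear = LinearRegression().fit(X, y)
# Each coefficient reads directly as "price change per unit of feature".
print(dict(zip(["area", "bedrooms"], linear.coef_)), linear.intercept_)

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
# The tree's splits print as plain if/then rules anyone can follow.
print(export_text(tree, feature_names=["area", "bedrooms"]))
```

Reading the coefficients or the printed rules requires no extra tooling, which is exactly what sets these models apart from a neural network, whose learned weights carry no comparable direct interpretation.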

In this lesson, we focus on building linear regression and decision tree models to predict house prices. We will observe how easily their results can be explained, as well as how they can fall short in overall predictive performance.
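The sketch below previews the workflow this lesson builds toward, under stated assumptions: scikit-learn for modeling and a synthetic, deliberately nonlinear price function standing in for the lesson's real house-price dataset (the features `area`, `bedrooms`, and `age` are hypothetical).

```python
# A hedged sketch of the lesson's workflow: train both models on
# house-price data and compare their fit quality on held-out data.
# The synthetic data generator below is a placeholder, not the
# lesson's actual dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 500
area = rng.uniform(50, 250, n)
bedrooms = rng.integers(1, 6, n)
age = rng.uniform(0, 50, n)
# A nonlinear price surface, so simple models visibly struggle.
price = (800 * area + 4000 * bedrooms - 30 * age**1.5
         + 0.5 * area * bedrooms + rng.normal(0, 5000, n))
X = np.column_stack([area, bedrooms, age])

X_train, X_test, y_train, y_test = train_test_split(
    X, price, random_state=0)

for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4)):
    model.fit(X_train, y_train)
    score = r2_score(y_test, model.predict(X_test))
    print(type(model).__name__, "R^2:", round(score, 3))
```

Because the synthetic price includes an interaction term and a nonlinear age effect, neither simple model can fit it perfectly, which mirrors the explainability-versus-performance trade-off the lesson explores.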
