
LIME

Explore how LIME approximates complex neural network predictions locally using linear regression to generate saliency maps. Understand its model-agnostic approach and see a practical implementation with visual examples. Gain insight into the strengths and limitations of LIME for interpreting deep learning classifiers.

Local interpretable model-agnostic explanations

Local interpretable model-agnostic explanations (LIME) is a technique that can explain the predictions of any classifier or regressor by approximating it locally with an interpretable model. It modifies a single data sample by tweaking the feature values and observes the resulting impact on the output. The output of LIME is a saliency map representing the contribution of each feature to the prediction.
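The snippet below is a minimal from-scratch sketch of this perturb-and-fit idea for tabular data, assuming a black-box `predict_fn` that returns class probabilities and a binary classification task; the Gaussian perturbation scale and kernel width are illustrative choices, not part of LIME's reference implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_tabular_sketch(predict_fn, x, num_samples=1000, kernel_width=0.75):
    """Approximate predict_fn around x with a locally weighted linear surrogate.

    predict_fn: black-box model mapping an (n, d) array to class probabilities.
    x: the single data point of shape (d,) to explain.
    Returns one signed weight per feature (its contribution to the prediction).
    """
    d = x.shape[0]
    # 1. Perturb the instance by tweaking its feature values with Gaussian noise.
    perturbations = x + np.random.normal(scale=0.1, size=(num_samples, d))
    # 2. Query the black-box model on the perturbed samples.
    preds = predict_fn(perturbations)[:, 1]  # probability of the positive class (binary case assumed)
    # 3. Weight samples by proximity to x using an exponential kernel on distance.
    distances = np.linalg.norm(perturbations - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable (linear) model locally; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbations, preds, sample_weight=weights)
    return surrogate.coef_
```

With a scikit-learn classifier, calling `lime_tabular_sketch(model.predict_proba, x)` would return one weight per feature, which is the kind of saliency map described above; the `lime` package follows the same recipe with more careful sampling and feature selection.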

Given a data point $X$, a neural network $f(\cdot)$ can be written as a linear function in the local neighborhood of $X$. In other words, for any point $Z$ in that neighborhood:

$$f(Z) \approx f(X) + \nabla_X f(X)^T (Z - X),$$

where $\nabla_X f(X)$ is the gradient of the function $f(X)$ with respect to $X$, and $(\cdot)^T$ denotes the transpose.
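As a minimal sketch of this first-order view, the code below uses PyTorch autograd to compute $\nabla_X f(X)$ for a toy network and compares the resulting linear approximation against the true output at a nearby point; the layer sizes, input dimension, and perturbation scale are arbitrary assumptions made only to keep the example self-contained.

```python
import torch

# Toy classifier standing in for f(.); the architecture is an illustrative assumption.
f = torch.nn.Sequential(
    torch.nn.Linear(4, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)

X = torch.randn(4, requires_grad=True)   # the data point X to explain
score = f(X)[1]                          # scalar output: logit of class 1
score.backward()                         # populates X.grad with the gradient of f at X
grad = X.grad                            # gradient-based saliency for each feature

# First-order (Taylor) approximation of f at a nearby point Z.
Z = X.detach() + 0.01 * torch.randn(4)
approx = score.detach() + grad @ (Z - X.detach())
exact = f(Z)[1]
print(f"exact f(Z): {exact.item():.4f}  linear approximation: {approx.item():.4f}")
```

Because $Z$ is close to $X$, the printed linear approximation should closely track the exact network output, which is precisely the local linearity that LIME exploits.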