Responsible AI Practices: Interpretability
In addition to being fair, AI should be understandable.
Interpretability
AI systems should be understandable.
We touched on this topic earlier when discussing how to create AI solutions that users trust. Interpretability, or explainability, plays a crucial role in building that trust.
Automated predictions and decisions are becoming more and more integrated into our lives. In some scenarios, it can be acceptable to have AI systems that make decisions we do not fully understand. For example, if a healthcare application classifies an image as cancerous, the patient is unlikely to care exactly how the algorithm arrives at that prediction; whether the deep learning model was trained on biomarker levels or on the patient's skin color would not be their primary concern. However, if a loan application system denies legitimate requests, or a surveillance system shows bias in classifying people, then establishing human-AI trust requires a better understanding of how those decisions are made. In the latter scenarios, we want to be able to explain the model's outputs to stakeholders, and we cannot get away with black-box or opaque models.
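To make the loan example more concrete, here is a minimal sketch (not from the lesson) of one way an interpretable model can explain a decision: with a linear model such as logistic regression, each feature's contribution to the score can be read directly from the learned coefficients. The feature names, synthetic data, and use of scikit-learn are all assumptions made for illustration.

```python
# Hypothetical example: explaining a loan denial with an interpretable model.
# All feature names and data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant features: income (in $1000s), debt-to-income ratio,
# and years of credit history.
feature_names = ["income", "debt_to_income", "credit_history_years"]
X = rng.normal(loc=[60, 0.35, 10], scale=[20, 0.1, 5], size=(500, 3))
# Synthetic approval labels loosely tied to income and debt-to-income ratio.
y = ((X[:, 0] > 55) & (X[:, 1] < 0.4)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# A single applicant whose request is likely to be denied.
applicant = np.array([[40.0, 0.5, 2.0]])
approval_prob = model.predict_proba(applicant)[0, 1]

# For a linear model, coefficient * (feature value - average value) shows how
# much each feature pushed this applicant's score above or below the baseline,
# which is the kind of explanation a stakeholder can actually act on.
contributions = model.coef_[0] * (applicant[0] - X.mean(axis=0))
for name, contrib in zip(feature_names, contributions):
    print(f"{name}: {contrib:+.3f}")
print(f"approval probability: {approval_prob:.2f}")
```

A printout like this lets us tell the applicant which factors weighed against the request, something a black-box model cannot offer without additional explanation tooling.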