
Real-Life Examples

Discover how AI fairness issues appear in real life through examples like skin cancer diagnosis, automated job screening, and the COMPAS recidivism risk model. Understand the effects of biased training data on different groups and the societal importance of addressing these fairness challenges.

Before we introduce new concepts and formalize AI fairness, let’s start with some examples to get some intuition. Our reasoning will be simplified but don’t worry—we will dive deeper during the course.

Skin cancer detection

Skin cancer can be a severe condition. Many people have skin marks that, at some point in life, can evolve into dangerous ones. If diagnosed early, however, it is mostly curable. The catch is that a dermatoscopy performed by a doctor is required, which might not be accessible to everyone. There are self-diagnostic criteria (like ABCDE, as shown in the following image), but in practice, they are difficult for a non-expert to apply. An AI-based system capable of reliable early cancer detection could help a lot.

The ABCDE (A: asymmetry, B: border (irregular), C: color, D: diameter (>6mm), E: evolving) rule helps to identify skin cancer. If it is satisfied, a mole should be examined by a professional.

The user's skin tone can heavily influence detection. If the model was trained primarily on white Europeans, it might be inaccurate for, say, Latin Americans. As a consequence, they could be heavily underdiagnosed (potentially leading to death) or overdiagnosed, introducing unnecessary anxiety (and the cost of a professional examination). If the model's behavior differs significantly for users of various ethnicities, we have a fairness issue. The problem is especially serious if the model is advertised as accurate (based only on European data) but distributed globally, because that creates false confidence in its predictions.
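One way to make this concrete is to measure the model's accuracy separately for each group. Below is a minimal sketch of such an audit; the predictions, labels, and group assignments are entirely made up for illustration.

```python
# A minimal sketch of a per-group accuracy audit.
# All labels, predictions, and group assignments below are synthetic.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each group."""
    result = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        result[g] = accuracy([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    return result

# Hypothetical diagnoses: 1 = malignant, 0 = benign.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["light", "light", "light", "light",
          "dark", "dark", "dark", "dark"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'light': 1.0, 'dark': 0.0} — a large gap like this is the fairness signal.
```

A large gap between groups does not by itself prove discrimination, but it is exactly the kind of disparity a fairness analysis would flag for investigation.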

Job applications

If a company receives a huge number of job applications, it might be tempting to automate the review process. The idea is reasonable but introduces a risk. If the model was built on historical data, it would likely reproduce human bias and, as a result, may discriminate against a specific group. In this situation, the model is unfair because the decision to proceed to the next recruitment round can heavily depend on gender—even if all other features are the same!
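The "all other features are the same" test can be expressed directly in code: flip only the protected attribute and check whether the decision changes. The scoring rule below is a deliberately biased toy, invented purely for illustration.

```python
# A toy counterfactual check: change only the protected attribute and
# see whether the model's decision flips. The scoring rule is
# intentionally biased and entirely made up.

def biased_screen(applicant):
    """Toy screening model that (unfairly) rewards one gender."""
    score = 2 * applicant["years_experience"] + 3 * applicant["skill_level"]
    if applicant["gender"] == "male":   # the unfair part
        score += 5
    return score >= 20                  # True = proceed to next round

applicant = {"years_experience": 4, "skill_level": 3, "gender": "female"}
flipped = dict(applicant, gender="male")  # identical except for gender

print(biased_screen(applicant))  # False: rejected
print(biased_screen(flipped))    # True: accepted with the same qualifications
```

If changing nothing but gender flips the outcome, the model fails even this very basic notion of fairness; later in the course we will see more refined formalizations.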

The effect can be even more visible when it comes to stereotypes. For example, a woman can be preferred for a kindergarten teacher position because of stereotypical associations, even if, in the specific situation, the candidates' qualifications point precisely the other way.

COMPAS

Now, we proceed to the big one. Replacing courts with AI is a controversial topic, but there are already tools that try to support decisions in this area. A notable example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), used in the US. The goal of the system is to predict recidivism risk. Without going into too much detail, concerns have been raised that the model is unfair. An analysis performed by ProPublica claims that the model is twice as likely to incorrectly label Black defendants as future criminals.
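The disparity ProPublica describes concerns false positives: people labeled high risk who did not reoffend. A minimal sketch of that per-group measurement is shown below; the data is synthetic and does not come from the actual COMPAS analysis.

```python
# A minimal sketch of a per-group false positive rate comparison.
# All outcomes below are synthetic, not real COMPAS data.

def false_positive_rate(y_true, y_pred):
    """Among people who did NOT reoffend (y_true == 0),
    the fraction labeled high risk (y_pred == 1)."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    return sum(p for _, p in negatives) / len(negatives)

# y_true = actually reoffended, y_pred = labeled high risk.
group_a_true = [0, 0, 0, 0, 1, 1]
group_a_pred = [1, 1, 0, 0, 1, 0]
group_b_true = [0, 0, 0, 0, 1, 1]
group_b_pred = [1, 0, 0, 0, 1, 1]

print(false_positive_rate(group_a_true, group_a_pred))  # 0.5
print(false_positive_rate(group_b_true, group_b_pred))  # 0.25
```

Here group A's false positive rate is twice group B's: in this toy setting, a harmless person in group A is twice as likely to be wrongly flagged, which mirrors the shape of ProPublica's claim.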

This example is crucial because incorrect predictions can affect not only the defendant but also society as a whole (and trust in the judicial system).

More examples

Can you imagine other situations where a model can be discriminatory?