
What Is AI Fairness and Why Does It Matter?

Explore the concept of AI fairness, understand its vital role in ensuring algorithms treat all groups justly, and recognize real-life impacts of biased models. Learn about the challenges, ethical considerations, and responsibilities involved in measuring and mitigating algorithmic bias to build fairer AI systems.

What is AI fairness?

Algorithms surround us in our daily lives. They decide what we see on social media, which movie we should watch next, and what we buy. But they also make more consequential decisions. When we pay in an online store, an algorithm authenticates the payment by comparing it to our typical transactions. When we buy car insurance, a model sets the premium based on attributes like driving experience, age, and past accidents. We would like these algorithms to be reliable and just, but we all know that models are not perfect and sometimes make mistakes.

But what happens if someone has bad luck and the algorithms are consistently wrong about them? There can be multiple reasons. Maybe they belong to a minority group, and the model has not seen many people like them. Or maybe the data contains a stereotype about them, and the model learned to reproduce it.

AI fairness is a field of research focused on measuring and mitigating systematic algorithmic bias. It is a relatively young area, so we should not expect standardized procedures or well-established best practices yet; much of the current work is still experimental. But the topic is too important to ignore just because mature tooling and material are lacking.

There is one thing that needs to be articulated very clearly: AI fairness is not about making the same number of positive predictions for each group. The goal is to ensure that no group of people is treated unfairly because of sensitive attributes. For example, a hiring model is not expected to hire exactly the same number of young and older people. Instead, it is expected not to say, “You are old, so I won’t hire you.” The course contains a dedicated section about measuring fairness correctly, as the topic is quite complex.
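To make this concrete, here is a minimal sketch with made-up labels and predictions. Both groups receive the same number of positive predictions, yet the model misses qualified candidates from group B far more often. Comparing a per-group error rate (here, the false negative rate) is what reveals the problem, not counting positive predictions.

```python
# Toy illustration with hypothetical data: equal positive-prediction counts
# do not imply fairness; per-group error rates tell the real story.

def false_negative_rate(y_true, y_pred):
    """Fraction of truly positive cases the model missed."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return misses / positives if positives else 0.0

# Labels: 1 = qualified, 0 = not qualified (hypothetical numbers).
group_a = {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 1, 1, 0, 0]}
group_b = {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 0, 0, 1, 1]}

# Both groups get exactly 3 positive predictions...
assert sum(group_a["y_pred"]) == sum(group_b["y_pred"]) == 3

# ...but the error rates differ dramatically.
fnr_a = false_negative_rate(group_a["y_true"], group_a["y_pred"])
fnr_b = false_negative_rate(group_b["y_true"], group_b["y_pred"])
print(f"FNR group A: {fnr_a:.2f}")  # 0.00
print(f"FNR group B: {fnr_b:.2f}")  # 0.67
```

The group names, data, and threshold here are purely illustrative; the dedicated section on fairness metrics covers the proper criteria in depth.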

It is OK to make some errors. Just make sure the error probability does not depend on skin tone.

Motivation

It is not always obvious why we should care about AI fairness, so let’s address the most common questions in detail.

Is a regular evaluation enough?

Unfortunately, no. Imagine the following scenario: a facial recognition system controls entry to an office. The model was built using employees’ faces, and it works great (for now, ignore which metrics are optimized). However, a small group of employees from other countries constantly has issues with authentication. Overall metrics show that the system performs at an acceptable level, but for this specific minority, it does not work. A thorough fairness diagnostic is needed on top of the regular system evaluation to catch such situations.
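A toy calculation with made-up numbers shows how this happens: because the minority group is small, its failures barely move the overall metric.

```python
# Hypothetical counts: a large group where the system works well,
# and a small group where it mostly fails.
majority = {"correct": 95, "total": 100}  # 95% accuracy
minority = {"correct": 2, "total": 10}    # 20% accuracy

overall = (majority["correct"] + minority["correct"]) / (
    majority["total"] + minority["total"]
)
minority_acc = minority["correct"] / minority["total"]

# The aggregate looks acceptable while the minority is consistently failed.
print(f"Overall accuracy:  {overall:.2%}")       # 88.18%
print(f"Minority accuracy: {minority_acc:.2%}")  # 20.00%
```

This is why metrics should be disaggregated by group, not only averaged over the whole dataset.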

Who is responsible for biased data?

It is not about assigning fault to anyone. The data itself can be heavily unfair, and there are situations when we cannot change it (though if we can, we should). Still, while creating a model, we should do our best to remove the issue, or at least measure it and be aware of it.

Can removing bias decrease overall model performance?

Yes, this might be the case. Various mitigation methods can reduce the model’s overall accuracy as measured on the dataset, so we need to watch a variety of metrics closely. This is not a reason to give up: safety systems in cars are expensive, but they are still needed.

Can scores be artificially improved for discriminated groups?

Not at all. We will not apply fixes like increasing the score for a minority group by 0.2. Such adjustments do not solve the underlying problem, and in extreme situations they can reverse it: the group without the artificial boost can become the one discriminated against.
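A toy sketch with hypothetical scores illustrates the reversal. Before the “fix,” group B receives fewer positive decisions than group A; after a flat +0.2 boost, group B is always accepted and group A becomes the disadvantaged one.

```python
# Hypothetical model scores and decision threshold.
THRESHOLD = 0.5
group_a_scores = [0.45, 0.48, 0.52]  # group A: 1 of 3 accepted
group_b_scores = [0.31, 0.36, 0.48]  # group B: 0 of 3 accepted

def positive_rate(scores, boost=0.0):
    """Share of candidates accepted after adding a flat score boost."""
    return sum(1 for s in scores if s + boost >= THRESHOLD) / len(scores)

print(f"Group A, no boost:   {positive_rate(group_a_scores):.2f}")       # 0.33
print(f"Group B, no boost:   {positive_rate(group_b_scores):.2f}")       # 0.00
print(f"Group B, +0.2 boost: {positive_rate(group_b_scores, 0.2):.2f}")  # 1.00
```

The constant boost overshoots: instead of equalizing treatment, it flips which group is disadvantaged, while the model’s underlying bias remains untouched.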

What can be done in case of fairness issues?

It is possible that the unfairness cannot be removed entirely. Depending on the specific situation, we can consider different solutions. Imagine a system that diagnoses eye diseases from retina images. Assume it works well for people with dark eyes but poorly for others. It would be unethical to deploy the solution and advertise it as universally accurate. If a model has such an issue but is still helpful for a group of users, we may consider launching it with a clear message about when it works and when it does not, while improving the results for the second group in the meantime.

Disclaimer: This course may include sensitive or potentially offensive terms and phrases; these are used strictly for educational or illustrative purposes. Educative does not endorse or encourage the expression of such sentiments in any way. Our intent is to shed light on these issues, promoting awareness and understanding rather than to cause harm or discomfort. Viewer discretion is advised.