Responsible AI Practices: Fairness

This lesson discusses how to design and build AI systems that treat all people fairly.

Fairness

AI systems should treat all people fairly.

AI systems are enabling new experiences every day. We have moved beyond simply recommending books and movies: AI systems are increasingly used for sensitive tasks with a large impact on society, such as diagnosing medical conditions, selecting job candidates for interviews, deciding which customers should get a loan, and keeping autonomous vehicles from hitting pedestrians. Any unfairness in such systems can have serious consequences. For example, if a resume-screening algorithm favors male candidates because men predominantly held the role in the historical data, women applicants never even land an interview, and the workplace becomes even less inclusive over time. Examples like this make it clear that as the impact of AI grows, it is critical to work toward systems that are fair and inclusive for all.

Addressing fairness and inclusion in AI is not an easy feat, and it is an active area of research. However, there are some approaches and steps that can be adopted to promote fairness.

Designing smart applications with concrete goals for fairness and inclusion in mind:

  • For example, if your team is working on an automated resume-screening application, it could use a metric that ensures a balanced selection of candidates across genders. However, simply increasing the proportion of women candidates might introduce other unintended biases, so it is important to study the impact of different choices before making business decisions. For instance, you might ask your team to first study how gender affects the system's outcomes. For highly sensitive applications, it is advisable to engage relevant experts, such as social scientists, to understand and account for diverse perspectives. A minimal sketch of a selection-rate check for such a metric appears after this list.
  • Think ahead and consider the future impact of your technology by addressing questions like: Whose views are represented? Who is being left out? What outcomes does this technology enable, and how do they compare across different users and communities? What biases, negative experiences, or discriminatory outcomes might occur? Just as model performance should be monitored throughout the product's lifecycle, the fairness of the AI model should also be monitored so that new data feeding into the system does not introduce biases over time; a sketch of such ongoing monitoring also follows this list.
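
To make the balanced-selection goal concrete, here is a minimal sketch of a selection-rate check. It assumes screening decisions are available as simple records with a gender field and a selected flag; the field names, the helper functions (selection_rates, disparate_impact_ratio), and the sample data are illustrative, not part of any particular library.

```python
from collections import defaultdict

def selection_rates(candidates, group_key="gender", selected_key="selected"):
    """Fraction of candidates selected for interview, per group.

    Each record is assumed to look like {"gender": "female", "selected": True};
    the keys are illustrative.
    """
    totals, chosen = defaultdict(int), defaultdict(int)
    for row in candidates:
        group = row[group_key]
        totals[group] += 1
        chosen[group] += int(row[selected_key])
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 means parity).

    A common rule of thumb flags ratios below 0.8, but the right threshold
    is a business and policy decision, not a fixed constant.
    """
    return min(rates.values()) / max(rates.values())

# Illustrative data only, not real screening results.
screened = [
    {"gender": "female", "selected": True},
    {"gender": "female", "selected": False},
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": True},
]
rates = selection_rates(screened)
print(rates)                          # {'female': 0.5, 'male': 1.0}
print(disparate_impact_ratio(rates))  # 0.5
```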
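
Along the same lines, fairness monitoring over the product lifecycle could be a periodic job that recomputes the gap in selection rates on each new batch of decisions and alerts when it drifts past a tolerance. The batch format, the MAX_RATE_GAP threshold, and the rate_gap and monitor helpers below are assumptions for this sketch, not an established API.

```python
import logging
from statistics import mean

logging.basicConfig(level=logging.INFO)

# Hypothetical tolerance: the largest gap between group selection rates
# the team accepts before investigating. The value is illustrative.
MAX_RATE_GAP = 0.10

def rate_gap(batch):
    """Largest difference in selection rates between groups in one batch.

    Each record is assumed to look like {"group": "a", "selected": True}.
    """
    groups = {}
    for row in batch:
        groups.setdefault(row["group"], []).append(int(row["selected"]))
    rates = [mean(v) for v in groups.values()]
    return max(rates) - min(rates)

def monitor(batches):
    """Recompute the fairness gap on every new batch of decisions."""
    for i, batch in enumerate(batches):
        gap = rate_gap(batch)
        if gap > MAX_RATE_GAP:
            logging.warning("batch %d: selection-rate gap %.2f exceeds %.2f",
                            i, gap, MAX_RATE_GAP)
        else:
            logging.info("batch %d: selection-rate gap %.2f within bounds", i, gap)

# Illustrative batches, not real production data.
monitor([
    [{"group": "a", "selected": True}, {"group": "b", "selected": True}],
    [{"group": "a", "selected": True}, {"group": "b", "selected": False}],
])
```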

Using representative datasets and features:

  • During feature selection, data scientists choose which elements of the data to train the model on. For example, if a screening model is trained on the length of time a candidate has spent in a given job position, it does not account for career interruptions such as maternity leave or military duty. To minimize such issues, measuring model performance on a wider range of metrics and curating features with the larger business goals in mind are steps in the right direction.
  • When creating personalized experiences, don't use features that are inappropriate for personalizing content or that may help propagate undesired biases. For example, anyone with similar financial circumstances should see the same personalized recommendations for financial products.
  • Understand biases that may exist in features that are sourced from editors, algorithmic tools, or users themselves.
  • Check for unintended biases by training and evaluating models on metrics computed separately for different subgroups; a subgroup evaluation sketch appears after this list.
  • Use a pool of trusted, diverse testers to adversarially test the system, and incorporate a variety of adversarial inputs into unit tests (a sketch of such a test also follows this list).
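
As a sketch of subgroup-aware evaluation, the snippet below slices a single accuracy metric by group instead of reporting only an aggregate. The record format and the per_group_accuracy helper are illustrative assumptions; in practice the same slicing can be applied to precision, recall, or any metric the team cares about.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Accuracy computed separately for each subgroup.

    Each record is assumed to be a dict like
    {"group": "a", "label": 1, "prediction": 1}; the keys are illustrative.
    A single aggregate accuracy can hide a model that performs well
    overall but poorly for one subgroup.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["label"] == r["prediction"])
    return {g: correct[g] / total[g] for g in total}

# Illustrative predictions, not real model output.
results = [
    {"group": "a", "label": 1, "prediction": 1},
    {"group": "a", "label": 0, "prediction": 0},
    {"group": "b", "label": 1, "prediction": 0},
    {"group": "b", "label": 0, "prediction": 0},
]
print(per_group_accuracy(results))  # {'a': 1.0, 'b': 0.5}
```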
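
And as one way to fold adversarial inputs into unit tests, the sketch below checks that a resume scorer is insensitive to gendered wording. The score_resume function is only a stub standing in for the real model, and the resume pair and tolerance are hypothetical examples of the kind of inputs a diverse tester pool might contribute.

```python
import unittest

def score_resume(text):
    """Placeholder for the real screening model; returns a score in [0, 1].

    The real system would call the trained model here; this stub only
    makes the sketch runnable.
    """
    return 0.5

class AdversarialInputTests(unittest.TestCase):
    def test_gender_swapped_resumes_score_equally(self):
        # Hypothetical adversarial pair: identical resumes except for
        # gendered wording, of the kind diverse testers might supply.
        original = "Captain of the men's chess club, 5 years as engineer."
        swapped = "Captain of the women's chess club, 5 years as engineer."
        self.assertAlmostEqual(score_resume(original),
                               score_resume(swapped),
                               delta=0.01)

if __name__ == "__main__":
    unittest.main()
```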
