Introduction to Mitigation Methods
Learn about the types of methods for mitigating unwanted bias.
Having recognized potential sources of bias, let's explore possible solutions. There are multiple methods for addressing unfairness in models, applicable at various stages of model creation:
Data collection
If we have the luxury of influencing the data collection process, we can act at the very beginning of the pipeline, which saves us from numerous issues down the road. Careful dataset creation is one of the best ways to increase model fairness. However, there is no single procedure to follow because the right approach depends heavily on the data's characteristics and the problem we are solving. Nevertheless, we can identify a few good practices:
Be aware of sources of bias. Even though we may have good intentions, all humans are prone to various cognitive biases that can affect our way of thinking. Being aware of them is the first step to avoiding them.
Ensure diversity in data sources. If our model relies on individuals' characteristics, do we include people of various ethnicities, socioeconomic statuses, genders, ages, and more? Many of these attributes are considered sensitive, and for each of them we should validate our approach. Do we collect enough samples from each subgroup? Do we reinforce pre-existing biases (e.g., sampling subgroup A only from a low-income group and subgroup B only from a high-income group instead of sampling both diversely)? The sketch after this list shows how such checks can be automated.
Consider language diversity. When creating a speech-to-text system, do we use recordings from native speakers only? Do we account for regional accents? Non-native speakers with different fluency levels? Various types of voices (quiet, loud, fast or slow, high or low pitch, etc.)?
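As a concrete illustration of the subgroup checks above, here is a minimal sketch that audits collection metadata with pandas. The dataframe, the column names (subgroup, income_bracket), and the minimum-share threshold are all hypothetical assumptions for illustration, not part of any specific dataset or library API:

```python
import pandas as pd

# Hypothetical collection metadata; the columns "subgroup" and
# "income_bracket" and their values are illustrative assumptions.
df = pd.DataFrame({
    "subgroup":       ["A", "A", "A", "A", "A", "B", "B", "A"],
    "income_bracket": ["low", "low", "low", "low", "low",
                       "high", "high", "low"],
})

# Check 1: do we collect enough samples from each subgroup?
shares = df["subgroup"].value_counts(normalize=True)
print(shares)

MIN_SHARE = 0.3  # illustrative threshold, chosen per problem
print("Underrepresented:", list(shares[shares < MIN_SHARE].index))

# Check 2: do we reinforce pre-existing biases? A strong association
# between a sensitive attribute and another variable (here, income)
# suggests the sampling itself is skewed.
print(pd.crosstab(df["subgroup"], df["income_bracket"],
                  normalize="index"))
```

The same audit pattern applies to other metadata, such as accent, fluency level, or pitch category for a speech corpus; which attributes to check and what counts as "enough" samples depend on the problem at hand.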