Analysing Fairness in Machine Learning (with Python)
It is no longer enough to build models that make accurate predictions; we also need to ensure that those predictions are fair. Correcting biased predictions reduces the harm they cause and goes a long way towards building trust in your AI systems. To correct bias, we first need to analyse fairness in our data and models. You can see a summary of the approaches we will cover below.

Understanding why a model is unfair is more complicated, so we will start with an exploratory fairness analysis. This will help you identify potential sources of bias before you begin modelling. We will then move on to measuring fairness by applying different definitions of fairness. We will discuss the theory behind these approaches and, along the way, apply them using Python. We will walk through the key pieces of code, and you can find the full project on GitHub. You should still be able to follow the article even if you do not want to use the Python code.
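To give a flavour of what "applying a definition of fairness" looks like in code, here is a minimal sketch of one common definition, demographic parity, which compares positive prediction rates across groups. The function name, example predictions, and sensitive-attribute values are all hypothetical; the article's own analysis is covered later.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive prediction rates between two groups.

    A value of 0 means both groups receive positive predictions
    at the same rate, i.e. demographic parity holds.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return rate_a - rate_b

# Hypothetical binary predictions and a binary sensitive attribute
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

Other definitions, such as equalised odds, compare error rates rather than raw prediction rates, but follow the same pattern of computing a metric per group and comparing the results.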
May-4-2022, 01:40:16 GMT