Eliminating AI Bias


The primary purpose of Artificial Intelligence (AI) is to reduce manual labour: a machine can scan large amounts of data to detect underlying patterns and anomalies, saving time and raising efficiency. However, AI algorithms are not immune to bias. Because they can have long-term impacts on an organisation's reputation and severe consequences for the public, it is important to ensure that they are not biased against a particular subgroup within a population. In layman's terms, algorithmic bias occurs when an algorithm's outcome is unfair to, or favours, one group over another on the basis of a categorical distinction such as ethnicity, age, gender, qualifications, disability, or geographic location.

AI bias takes place when incorrect assumptions are made about the dataset or the model output during the machine learning process, which subsequently leads to unfair results. Bias can be introduced during the design of the project or during data collection, producing output that unfairly represents the population. For example, suppose a survey posted on Facebook about perceptions of the COVID-19 lockdown in Victoria finds that 90% of Victorians are afraid of travelling interstate and overseas due to the pandemic. This finding is flawed: it is based only on individuals who use social media (i.e., Facebook), it could include users who are not located in Victoria, and it may overrepresent particular age groups.

To effectively identify AI bias, we need to look for the presence of bias across the AI lifecycle shown in Figure 1.
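The group-level unfairness described above is often quantified with a simple rate comparison. The sketch below is illustrative only: the function name and the toy data are invented for this example, not taken from the article. It computes the gap in positive-outcome rates between groups, a quantity commonly called the demographic parity difference.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Return the gap between the highest and lowest positive-outcome
    rates across groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy data: model approvals (1) / rejections (0) for two groups, A and B.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved 75% of the time, group B only 25%: a gap of 0.5.
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A large gap does not by itself prove bias, but it flags a subgroup disparity worth investigating at the relevant stage of the lifecycle.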