FATE in AI: Towards Algorithmic Inclusivity and Accessibility

Inuwa-Dutse, Isa

arXiv.org Artificial Intelligence 

Examples of bias and discrimination in AI applications include court decisions [1], job hiring [2], online ads [3], and many other areas prone to bias [4]. These algorithmic decisions carry economic and personal consequences for individuals. Therefore, Fairness, Accountability, Transparency and Ethics (FATE) in AI must be properly regulated to support responsible use [5, 6], particularly in high-stakes domains [1, 7, 8, 9, 10, 11, 12]. Studies have shown that machine learning models can discriminate based on race and gender [13, 14, 15]. FATE in AI is intended to address the social issues caused by digital systems, but the current discourse is largely shaped by more economically developed countries (MEDC), raising concerns about the neglect of local knowledge, cultural pluralism, and global fairness [16]. As AI systems become more integrated into a wide range of products [9, 10, 17, 12, 18, 19], they are emerging as a major driver of the fourth industrial revolution (4IR) and its associated transformation [20]. It is therefore essential to understand the FATE-related needs of different communities, since AI affects a broad and diverse population. Ensuring effective transparency cannot follow a one-size-fits-all approach [21], as such an approach could disproportionately affect different communities [16, 22]. To this end, more contextualised and interdisciplinary research is needed to inform algorithmic fairness and transparency [23, 24, 25].
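To make concrete how discrimination in model outcomes is commonly quantified, the sketch below computes the demographic parity difference, one standard group-fairness metric. This is an illustrative example, not a method from the paper: the data, function names, and the choice of metric are all assumptions for the sake of demonstration.

```python
# Illustrative sketch (not from the paper): quantifying one notion of
# group fairness via the demographic parity difference.
# All decision data below is synthetic.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. 'hire' or 'approve')."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two demographic groups.
    A value near 0 suggests parity; larger values flag potential bias."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical hiring decisions (1 = positive outcome) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 5/8 = 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 2/8 = 0.250

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

Metrics of this kind are only a starting point: which metric is appropriate, and what threshold counts as acceptable, are themselves context-dependent questions of the sort the contextualised research called for above must answer.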
