A Review of the Role of Causality in Developing Trustworthy AI Systems

Niloy Ganguly, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan Sikdar, Johanna Schrader, Jonas Wallat, Koustav Rudra, Manolis Koubarakis, Gourab K. Patro, Wadhah Zai El Amri, Wolfgang Nejdl

arXiv.org Artificial Intelligence 

Current AI systems are often brittle and unable to adapt to new domains, can treat individuals or subgroups unfairly, and have limited ability to explain their actions or recommendations [197, 235], reducing the trust of human users [118]. In response, a new area of research, trustworthy AI, has recently received much attention from policymakers and other regulatory organizations. The resulting guidelines (e.g., [184, 186, 187]), introduced to increase trust in AI systems, make developing trustworthy AI not only a technical (research) and social endeavor but also an organizational and legal obligation. In this paper, we set out to demonstrate, through an extensive survey, that causal modeling and reasoning is an emerging and highly useful tool for enabling current AI systems to become trustworthy. Causality is the science of reasoning about causes and effects. Cause-and-effect relationships are central to how we make sense of the world around us, how we act upon it, and how we respond to changes in our environment. In AI, research on causality was pioneered by Turing Award winner Judea Pearl in his seminal 1995 paper [194]. Since then, many researchers have contributed to the development of a solid mathematical basis for causality; see, for example, the books [79, 196, 201], the survey [90], and the seminal papers [197, 235].
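To make the distinction between observation and intervention concrete, the following is a minimal illustrative sketch of a structural causal model in the spirit of Pearl's framework. It is not code from the surveyed works; the variable names, coefficients, and the `sample` helper are invented for illustration. It shows how a confounder biases the observed association between a treatment X and an outcome Y, while intervening with the do-operator recovers the true causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural causal model (coefficients chosen for illustration):
#   Z := N_Z                (confounder)
#   X := Z + N_X            (treatment, influenced by Z)
#   Y := 2*X + 3*Z + N_Y    (outcome, influenced by X and Z)
def sample(do_x=None):
    z = rng.normal(size=n)
    # do(X = x0) replaces X's structural equation, severing the Z -> X edge
    x = z + rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 2 * x + 3 * z + rng.normal(size=n)
    return x, y

# Observational association overstates X's causal effect on Y,
# because Z drives both X and Y:
x, y = sample()
slope_obs = np.cov(x, y)[0, 1] / np.var(x)   # ~3.5, confounded

# Interventional contrast E[Y | do(X=1)] - E[Y | do(X=0)]
# recovers the true structural coefficient (2):
_, y1 = sample(do_x=1.0)
_, y0 = sample(do_x=0.0)
ate = y1.mean() - y0.mean()                  # ~2.0

print(f"observational slope = {slope_obs:.2f}, causal effect = {ate:.2f}")
```

The gap between the two estimates (about 3.5 versus 2.0) is exactly the kind of failure mode the abstract points to: a purely associational model absorbs the confounder's influence, whereas causal reasoning separates it out.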
