Do explanations for data-based predictions actually increase users' trust in AI?


In recent years, many artificial intelligence (AI) and robotics researchers have been trying to develop systems that can explain their actions or predictions. The idea behind their work is that as AI systems become more widespread, explaining why they act in particular ways or make certain predictions could increase transparency and, consequently, users' trust in them.

Researchers at the Bretagne Atlantique Research Center in Rennes and the French National Center for Scientific Research in Toulouse recently carried out a study that explores and questions this assumption, with the aim of better understanding how AI explainability actually affects users' trust in AI. Their paper, published in Nature Machine Intelligence, argues that an AI system's explanations may not be as truthful or transparent as some users assume.

"This paper originates from our desire to explore an intuitive gap," Erwan Le Merrer and Gilles Trédan, the two researchers who carried out the study, told TechXplore.
