Research Highlights: Using Theory of Mind to improve Human Trust in Artificial Intelligence - insideBIGDATA
Artificial Intelligence (AI) systems are threaded throughout modern society, ranging from low-risk interactions such as movie recommendations and chatbots to high-risk environments like medical diagnosis, self-driving cars, drones, and military operations. But it remains a significant challenge to develop human trust in these systems, particularly because the systems themselves cannot explain, in a way graspable to humans, how a recommendation or decision was reached. This lack of trust can become problematic in critical situations involving finances or healthcare, where AI decisions can have life-altering consequences. To address this issue, eXplainable Artificial Intelligence (XAI) has become an active research area for both scientists and industry. XAI develops models that produce explanations aiming to shed light on the underlying mechanisms of AI systems, thus bringing transparency to the process.
Mar-5-2022, 13:50:10 GMT