Black box problem: Humans can't trust AI, US-based Indian scientist feels lack of transparency is the reason

NEW DELHI: From diagnosing diseases to categorising huskies, Artificial Intelligence has countless uses, but mistrust in the technology and its solutions will persist until people, the "end users", can fully understand its processes, says a US-based Indian scientist.

Overcoming the "lack of transparency" in the way AI processes information, popularly called the "black box problem", is crucial for people to develop trust in the technology, said Sambit Bhattacharya, who teaches computer science at Fayetteville State University.

"Trust is a major issue with Artificial Intelligence because people are the end users, and they can never have full trust in it if they do not know how AI processes information," Bhattacharya said.

The computer scientist, whose work includes using machine learning (ML) and AI to process images, was a keynote speaker at the recent 4th International and 19th National Conference on Machines and Mechanisms (iNaCoMM 2019) at the Indian Institute of Technology in Mandi.

To buttress his point that users don't always trust solutions provided by AI, Bhattacharya cited the case of researchers at Mount Sinai Hospital in the US who applied ML to a large database of patient records containing information such as test results and doctor visits. The 'Deep Patient' software they used was exceptionally accurate at predicting disease, discovering patterns hidden in the hospital data that indicated when patients were on the way to various ailments, including cancer, according to a 2016 study published in the journal Nature.
