Most Health Organizations Can't Ensure Responsible AI Use - InformationWeek

Despite growing interest in artificial intelligence, most healthcare organizations still lack the tools needed to ensure responsible use of the technology, according to a report from Accenture Health. In the report, Digital Health Technology Vision 2018, 81% of healthcare executives said they are not yet prepared to face the societal and liability issues that will require them to explain their AI systems' decisions. Additionally, while 86% of respondents said their organizations use data to drive automated decision-making, the same proportion (86%) reported that they have not invested in the capabilities needed to verify data sources across their most critical systems.

Kaveh Safavi, head of Accenture's health practice, observed that this lack of investment in data verification exposes healthcare organizations to inaccurate, manipulated, and biased data that can lead to corrupted insights and skewed results. "The 86% figure is critical," he stated, "given that 24% of executives also said that they have been the target of adversarial AI behaviors, such as falsified location data or bot fraud, on more than one occasion."

On a positive note, the study found that 73% of respondents plan to develop internal ethical standards for AI to ensure that their systems act responsibly.
