Toward Explainable Deep Learning

Communications of the ACM 

Deep learning (DL) models have enjoyed tremendous success across application domains within the broader umbrella of artificial intelligence (AI) technologies. However, their "black-box" nature, coupled with their extensive use across application sectors, including safety-critical and risk-sensitive ones such as healthcare, finance, aerospace, law enforcement, and governance, has elicited an increasing need for explainability, interpretability, and transparency of decision-making in these models.11,14,18,24 With the recent progression of legal and policy frameworks that mandate explaining decisions made by AI-driven systems (for example, the European Union's GDPR Article 15(1)(h) and the Algorithmic Accountability Act of 2019 in the U.S.), explainability has become a cornerstone of responsible AI use and deployment. In the Indian context, NITI Aayog recently released a two-part strategy document on envisioning and operationalizing Responsible AI in India,15,16 which puts significant emphasis on the explainability and transparency of AI models. Explainability of DL models lies at the human-machine interface, and different users may expect different explanations in different contexts.
