Even experts are too quick to rely on AI explanations, study finds
As AI systems increasingly inform decision-making in health care, finance, law, and criminal justice, they need to provide justifications for their behavior that humans can understand. The field of "explainable AI" has gained momentum as regulators turn a critical eye toward black-box AI systems -- and their creators. But how a person's background can shape perceptions of AI explanations is a question that remains underexplored. A new study coauthored by researchers at Cornell University, IBM, and the Georgia Institute of Technology aims to shed light on the intersection of interpretability and explainable AI.
Aug-26-2021, 00:45:19 GMT