Explainable AI - AI Summary

#artificialintelligence 

More than two dozen artificial intelligence experts from business and academia, including Texas McCombs, explored the importance of understanding how machine learning systems arrive at their conclusions so that humans can trust those results. Although AI is more than 50 years old, "deep learning has been a mini-scientific revolution" since the 2010s, said one keynote speaker, Charles Elkan, a professor of computer science at the University of California, San Diego.

Alice Xiang, a lawyer and senior research scientist for Sony Group, said, "I see explainability as an important part of providing transparency and, in turn, enabling accountability." She noted the challenge of black boxes, citing as examples drug-sniffing dogs, whose abilities are mysterious but highly accurate, and the horse Clever Hans, who appeared to understand math but was really following cues from its owner.

In a panel discussion called "Adopting AI," James Guszcza, a behavioral research affiliate at Stanford University and chief data scientist on leave from Deloitte LLP, said: "I think one of the previous speakers said we need to be interdisciplinary; I take it a little bit further and say we need to be transdisciplinary."
