The quest for explainable AI


Artificial intelligence (AI) is highly effective at parsing enormous volumes of data and making decisions based on information that lies beyond the limits of human comprehension. But it suffers from one serious flaw: it cannot explain how it arrives at its conclusions, at least not in a way that most people can understand.

This "black box" characteristic is starting to throw serious kinks into the applications that AI powers, particularly in medicine, finance and other critical fields, where the "why" of any particular action is often more important than the "what." That concern has given rise to a field of study called explainable AI (XAI), which seeks to infuse AI algorithms with enough transparency that users outside the realm of data scientists and programmers can double-check an AI's logic and confirm it is operating within acceptable bounds of reasoning, bias and other factors.
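One widely used XAI technique of the kind described above is permutation feature importance: shuffle one input feature at a time and measure how much the model's score degrades, revealing which inputs actually drive its decisions. The sketch below is purely illustrative, not drawn from the article; the toy data, the least-squares model and all variable names are assumptions, and the same procedure applies unchanged to any black-box predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
# Toy target: depends strongly on feature 0, weakly on feature 1, not at all on feature 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit an ordinary least-squares model. The "black box" here is deliberately simple;
# the permutation procedure itself never looks inside the model.
Xb = np.column_stack([X, np.ones(n)])            # add an intercept column
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
predict = lambda M: np.column_stack([M, np.ones(len(M))]) @ coef

def r2(y_true, y_pred):
    """Coefficient of determination (R^2) as the model's score."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

baseline = r2(y, predict(X))

# Permutation importance: shuffle one feature at a time and record the score drop.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - r2(y, predict(Xp)))

for j, imp in enumerate(importances):
    print(f"feature {j}: score drop when shuffled = {imp:.3f}")
```

A large drop for feature 0, a small one for feature 1 and roughly none for feature 2 gives exactly the kind of human-readable "why" the article describes, without requiring access to the model's internals.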
