Why companies need artificial intelligence explainability


AI programs also need to be integrated into an organization, and stakeholders -- particularly employees and customers -- need to trust that those programs produce accurate, reliable results. That is the case for building enterprisewide artificial intelligence explainability made in a new research briefing by Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Center for Information Systems Research. The researchers define artificial intelligence explainability as "the ability to manage AI initiatives in ways that ensure models are value-generating, compliant, representative, and reliable." Because artificial intelligence is still relatively new, there isn't an extensive list of proven use cases, and leaders are often uncertain whether and how their companies will see returns from AI programs.