Scaling AI: 3 Reasons Why Explainability Matters

#artificialintelligence 

As artificial intelligence and machine learning-based systems become more ubiquitous in decision-making, should we expect our confidence in their outcomes to match the confidence we place in human decision-makers? When humans make decisions, we can rationalize the outcomes through inquiry and conversation about how expert judgment, experience and use of available information led to the decision. To borrow the words of former Secretary of Defense Ash Carter, speaking at a 2019 SXSW panel about post-analysis of an AI-enabled decision, "'the machine did it' won't fly." As human-machine collaboration evolves, establishing trust, transparency and accountability at the outset of decision-support system and algorithm design is paramount. Without it, people may hesitate to trust AI recommendations because they lack transparency into how the machine reached its outcome.