Tackling Trust in Machine Learning and Neural Networks: See It to Believe It


SAN FRANCISCO – Issues of explainability, interpretability, and regulatory compliance share a common consequence: they contribute to a marked distrust of advanced machine learning and neural networks. While the weights and parameters that determine the outcomes of these predictive artificial intelligence models are not always easy to understand, the actions based on their results usually are. By focusing on those actions, such as the decisions models make about supply chain management, patient care, or product offers, organizations can not only validate the worth of these techniques, but also develop much-needed trust in them. "It's more of a trust issue," admits Geoff Annesley, EVP at One Network. "The people that are using [AI platforms], they may be a little cynical when they start out. But what we find is that they quickly start to trust the decisions, because they see, 'Well, I'm running in parallel here when the decisions are made, and it's beating me.'"
