How the Future of AI Is Impacted by a Horse from the 1800s


Artificial intelligence (AI) researchers are often unable to explain exactly how deep learning algorithms arrive at their conclusions. Deep learning is complex by nature, but that does not excuse abandoning the pursuit of clarity and understanding in black-box decision making. Assessing the quality of a machine learning algorithm requires some level of transparency into how a decision was made, because that transparency bears directly on the generalizability of the algorithm and the reliability of its output. In March 2019, researchers from the Fraunhofer Heinrich Hertz Institute, Technische Universität Berlin, Singapore University of Technology and Design, Korea University, and the Max Planck Institut für Informatik published in Nature Communications a method of validating the behavior of nonlinear machine learning in order to better assess the quality of the learning system. The research team of Klaus-Robert Müller, Wojciech Samek, Grégoire Montavon, Alexander Binder, Stephan Wäldchen, and Sebastian Lapuschkin discovered that various AI systems were engaging in what psychologists would characterize as "Clever Hans"-type behavior: decisions based on spurious correlations in the training data rather than on the features the task actually calls for.
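The Clever Hans effect can be reproduced in a few lines of code. Below is a minimal, entirely synthetic sketch (the data generator, the deliberately naive single-feature learner, and all names are invented for illustration; this is not the validation method from the paper). Each example carries ten weakly informative features plus one "watermark" feature that, in the biased training data, perfectly tracks the label. The learner latches onto the watermark, scores perfectly on similarly biased data, and collapses to chance when the watermark is absent:

```python
import random

random.seed(0)

def make_sample(label, watermark=True):
    # Ten weakly informative features: means differ only slightly by class.
    feats = [random.gauss(0.2 * label, 1.0) for _ in range(10)]
    # One spurious "watermark" feature: equals the label in the biased
    # training data, but is absent (zero) at deployment time.
    feats.append(float(label) if watermark else 0.0)
    return feats, label

train = [make_sample(l, watermark=True) for l in (0, 1) for _ in range(200)]

def fit(data):
    """Naive learner: keep only the single feature whose class means
    are farthest apart, with a midpoint threshold."""
    n_feats = len(data[0][0])
    best_gap = -1.0
    for j in range(n_feats):
        m1 = sum(x[j] for x, y in data if y == 1) / sum(y for _, y in data)
        m0 = sum(x[j] for x, y in data if y == 0) / sum(1 - y for _, y in data)
        if abs(m1 - m0) > best_gap:
            best_gap = abs(m1 - m0)
            best_j, thresh = j, (m1 + m0) / 2
            sign = 1 if m1 > m0 else -1
    return best_j, thresh, sign

def accuracy(model, data):
    j, thresh, sign = model
    hits = sum((1 if sign * (x[j] - thresh) > 0 else 0) == y for x, y in data)
    return hits / len(data)

model = fit(train)
biased = [make_sample(l, watermark=True) for l in (0, 1) for _ in range(200)]
clean = [make_sample(l, watermark=False) for l in (0, 1) for _ in range(200)]

print("chosen feature:", model[0])                 # 10 -- the watermark
print("accuracy, biased data:", accuracy(model, biased))  # 1.0
print("accuracy, clean data:", accuracy(model, clean))    # 0.5 (chance)
```

The point of the toy: accuracy on data drawn from the same biased distribution looks flawless, which is exactly why Clever Hans predictors survive standard evaluation. Only probing what the model actually attends to, as the researchers' validation approach does for real nonlinear models, exposes the failure.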
