If we want AI to explain itself, here's how it should tell us

#artificialintelligence 

Testing the best: There's only one way to figure that out: ask some users. So that's what researchers from Harvard and Google Brain did in a series of studies. Test subjects looked at different combinations of inputs, outputs, and explanations for a machine learning algorithm designed to learn the dietary habits or medical conditions of an alien (yes, seriously: alien life was chosen to keep the test subjects' own biases from creeping in). Users then scored the different combinations.

Keep it short: Longer explanations were found to be harder to parse than shorter ones, though breaking the same amount of text into many short lines worked better than making people read a few longer lines.
