If we want AI to explain itself, here's how it should tell us
Testing the best: There's only one way to figure out which explanations work: ask some users. So that's what researchers from Harvard and Google Brain did in a series of studies. Test subjects looked at different combinations of inputs, outputs, and explanations from a machine learning algorithm designed to learn the dietary habits or medical conditions of aliens (yes, seriously--alien life was chosen so the test subjects' own biases wouldn't creep in). Users then scored the different combinations.

Keep it short: Longer explanations proved harder to parse than shorter ones--though breaking the same amount of text into many short lines was, somewhat surprisingly, easier on readers than a few longer lines.
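For a concrete sense of what "many short lines versus a few long lines" means in practice, here is a minimal Python sketch; the explanation text is invented for illustration (the study's actual stimuli aren't reproduced here), and it simply renders the same content at two column widths:

```python
import textwrap

# Hypothetical explanation text -- invented for illustration only;
# the study's real alien-classification stimuli are not reproduced here.
explanation = (
    "The alien was classified as ill because it ate three glowing berries, "
    "its skin turned blue, and its temperature rose above the species norm."
)

# Few long lines: the same text wrapped at a wide column width.
long_lines = textwrap.fill(explanation, width=90)

# Many short lines: identical content, wrapped at a narrow column width.
short_lines = textwrap.fill(explanation, width=35)

print("--- few long lines ---")
print(long_lines)
print("--- many short lines ---")
print(short_lines)
```

Both renderings carry identical information; only the line breaks differ, which is the presentational variable the study found mattered.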