Why employees are more likely to second-guess interpretable algorithms
More and more, workers are presented with algorithms meant to help them make better decisions. But for that advice to matter, workers must trust the algorithms enough to follow it. How people respond to algorithmic recommendations varies depending on how much they know about how the model works and how it was created, according to a new research paper co-authored by MIT Sloan professor Kate Kellogg.

Prior research has assumed that people are more likely to trust interpretable artificial intelligence models, in which they can see how the model arrives at its recommendations. But Kellogg and co-researchers Tim DeStefano, Michael Menietti, and Luca Vendraminelli, affiliated with the Laboratory for Innovation Science at Harvard, found that this isn't always true.
February 10, 2023