Error-riddled data sets are warping our sense of how good AI really is
Yes, but: In recent years, studies have found that the data sets used to train and benchmark AI can contain serious flaws. ImageNet, for example, contains racist and sexist labels as well as photos of people's faces obtained without consent. The latest study looks at another problem: many of the labels are simply wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%.
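Estimates like this come from confident learning, which compares a model's out-of-sample predicted probabilities against the given labels and flags confident disagreements for human review. Below is a minimal sketch of that idea using the open-source cleanlab library, which implements the technique; the four-example data set and its probabilities are purely illustrative, not drawn from ImageNet.

```python
# Minimal sketch: flagging suspected label errors with confident learning.
# Assumes the open-source cleanlab package (pip install cleanlab); the
# toy data below is hypothetical, not from ImageNet.
import numpy as np
from cleanlab.filter import find_label_issues

# Out-of-sample predicted probabilities from any classifier (one row
# per example, one column per class), plus the given (noisy) labels.
pred_probs = np.array([
    [0.90, 0.05, 0.05],  # confidently class 0, labeled 0 -- consistent
    [0.05, 0.90, 0.05],  # confidently class 1, labeled 1 -- consistent
    [0.95, 0.03, 0.02],  # confidently class 0, but labeled 2 -- suspect
    [0.10, 0.10, 0.80],  # confidently class 2, labeled 2 -- consistent
])
labels = np.array([0, 1, 2, 2])

# Returns indices of examples whose given label confidently disagrees
# with the model's prediction, ranked by how suspicious they look.
issue_indices = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(issue_indices)  # e.g. [2] -- candidates for human re-labeling
```

In the study, candidates flagged this way were then checked by human reviewers before being counted as label errors.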
Apr-1-2021, 14:34:39 GMT
- AI-Alerts:
- 2021 > 2021-04 > AAAI AI-Alert for Apr 6, 2021 (1.00)
- Genre:
- Research Report (0.38)
- Technology: