A new tool helps us understand what an AI is actually thinking
Google researchers have developed a way to peer inside the minds of deep-learning systems, and the results are delightfully weird.

What they did: The team built a tool that combines several interpretability techniques to give people a clearer idea of how neural networks make decisions. Applied to image classification, it lets a person visualize how the network builds up its understanding of what is, for instance, a kitten or a Labrador. The visualizations are ... strange.

Why it matters: Deep learning is powerful, but opaque.
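The article does not spell out the researchers' method, but one standard ingredient of this kind of visualization is feature visualization by activation maximization: optimizing an input image so the network responds strongly to a chosen class. The sketch below illustrates that general idea only; the model (ResNet-18), the class index, and the hyperparameters are illustrative assumptions, not the tool described above.

```python
# Minimal sketch of activation maximization with PyTorch/torchvision.
# Assumptions: pretrained ResNet-18 stands in for the network being studied,
# and ImageNet class 208 ("Labrador retriever") is the target class.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_class = 208  # ImageNet index for "Labrador retriever" (illustrative choice)

# Start from random noise and ascend the gradient of the class logit
# with respect to the input pixels.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logit = model(image)[0, target_class]
    # Maximize the logit (minimize its negative); a mild L2 penalty keeps pixel values bounded.
    loss = -logit + 1e-4 * image.pow(2).sum()
    loss.backward()
    optimizer.step()

# `image` now holds a pattern the network associates with the target class.
# Practical visualization tools add regularizers (jitter, blurring, decorrelated
# parameterizations) to make the resulting images less noisy.
```

In practice it is these regularizers, plus ways of combining many such visualizations, that make the resulting images legible to a human; the raw optimization alone tends to produce the "strange" textures the article alludes to.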
May-11-2019, 05:57:48 GMT