How do we know AI is ready to be in the wild? Maybe a critic is needed
Mischief can happen when AI is let loose in the world, just like with any technology. The examples of AI gone wrong are numerous, the most vivid in recent memory being the disastrously poor performance of Amazon's facial recognition technology, Rekognition, which disproportionately mismatched members of some ethnic groups with criminal mugshots.

Given that risk, how can society know whether a technology has been refined enough to be safe to deploy?

"This is a really good question, and one we are actively working on," Sergey Levine, assistant professor in the University of California, Berkeley's department of electrical engineering and computer science, told ZDNet by email this week. Levine and colleagues have been working on an approach to machine learning in which the decisions of a software program are subjected to a critique by another algorithm within the same program that acts adversarially.
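The general idea of an adversarial critic can be sketched in a few lines. The toy code below is purely illustrative and is not Levine's actual method: a policy proposes an action, and a separate "critic" scores that action by its worst-case (pessimistic) value, vetoing it when the worst case falls below a safety threshold. All function names, state representations, and numbers here are hypothetical.

```python
# Hypothetical sketch of an adversarial critic gating a policy's decisions.
# Each state maps actions to (mean value estimate, uncertainty spread).

def policy(state):
    """Toy policy: propose the action with the highest mean value estimate."""
    return max(state, key=lambda action: state[action][0])

def adversarial_critic(state, action, threshold=0.0):
    """Toy critic: score an action by its pessimistic (worst-case) value,
    rejecting it when that lower bound falls below the safety threshold."""
    mean, spread = state[action]
    worst_case = mean - spread  # adversarial lower bound on value
    return worst_case >= threshold

def safe_decision(state, fallback="no-op"):
    """Act only when the critic cannot find a damaging worst case."""
    action = policy(state)
    return action if adversarial_critic(state, action) else fallback

# Two toy states: same mean values, different uncertainty.
confident = {"grasp": (0.9, 0.2), "push": (0.4, 0.1)}
uncertain = {"grasp": (0.9, 1.5), "push": (0.4, 0.1)}

print(safe_decision(confident))  # critic accepts the high-value action
print(safe_decision(uncertain))  # critic vetoes it; falls back to no-op
```

The design point is the separation of roles: the policy is optimistic by construction, while the critic is deliberately pessimistic, so deployment only proceeds when both agree the decision is safe.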
Nov-27-2021, 16:58:16 GMT
- Country:
- North America > United States > California (0.25)
- Industry:
- Information Technology (0.35)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Reinforcement Learning (0.45)
- Robots (1.00)
- Vision > Face Recognition (0.55)