The limitations of AI safety tools
In 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain "safety constraints." At the time, OpenAI claimed that Safety Gym could be used to compare the safety of algorithms and the extent to which those algorithms avoid making harmful mistakes while learning. Since then, Safety Gym has been used to measure the performance of proposed algorithms from OpenAI as well as from researchers at the University of California, Berkeley and the University of Toronto.

But some experts question whether AI "safety tools" are as effective as their creators purport them to be -- or whether they make AI systems safer in any sense. "OpenAI's Safety Gym doesn't feel like 'ethics washing' so much as maybe wishful thinking," Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email.
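The idea behind tools like Safety Gym is constrained reinforcement learning: alongside its reward, the agent receives a "cost" signal for unsafe behavior and must keep that cost under a budget while learning. The toy sketch below illustrates that trade-off with a two-action bandit and a Lagrange multiplier; the actions, numbers, and environment are hypothetical and not part of Safety Gym itself.

```python
# A minimal sketch of the constrained-learning idea behind Safety Gym:
# maximize reward while holding the average "cost" (a safety signal)
# under a budget. The toy bandit here is illustrative, not Safety Gym's API.

REWARDS = {"safe": 1.0, "risky": 2.0}   # the risky action pays more...
COSTS   = {"safe": 0.0, "risky": 1.0}   # ...but incurs a safety cost
COST_LIMIT = 0.2                        # allowed average cost per step
LR = 0.05                               # step size for the multiplier

def run(steps=2000):
    lam = 0.0                 # Lagrange multiplier on the cost constraint
    total_cost, picks = 0.0, []
    for t in range(1, steps + 1):
        # Pick the action with the best penalized value: reward - lam * cost.
        action = max(REWARDS, key=lambda a: REWARDS[a] - lam * COSTS[a])
        picks.append(action)
        total_cost += COSTS[action]
        # Dual ascent: raise lam when average cost exceeds the budget,
        # lower it (toward zero) when the agent is under budget.
        lam = max(0.0, lam + LR * (total_cost / t - COST_LIMIT))
    return lam, total_cost / steps, picks

lam, avg_cost, picks = run()
```

As the multiplier rises, the risky action stops looking attractive; as behavior becomes conservative, the multiplier relaxes, so the agent oscillates around the cost budget rather than ignoring it. This is the kind of reward/cost bookkeeping the benchmark standardizes, and also the kind of design choice (who sets the cost signal and the budget?) that critics like Cook are questioning.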
Sep-28-2021, 20:20:24 GMT