How Smart Can AI Get? - Future of Life Institute
Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? The 23 Asilomar AI Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, "Of course, it's just a start." The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the weekly discussions about previous principles here.

One of the greatest questions facing AI researchers is: just how smart and capable can artificial intelligence become? In recent years, the development of AI has accelerated in leaps and bounds. DeepMind's AlphaGo surpassed human performance in the challenging, intricate game of Go, and the company has created AI that can quickly learn to play Atari video games with much greater prowess than a person. We've also seen breakthroughs and progress in language translation, self-driving vehicles, and even the creation of new medicinal molecules.

But how much more advanced can AI become? Will it continue to excel only in narrow tasks, or will it develop broader learning skills that allow a single AI to outperform a human in most tasks? How do we prepare for an AI more intelligent than we can imagine? Some experts think human-level or even superhuman AI could be developed within a couple of decades, while others don't think anyone will ever accomplish this feat.
The Capability Caution Principle argues that, until we have concrete evidence of what an AI can someday achieve, it's safer to assume that there are no upper limits — that is, for now, anything is possible and we need to plan accordingly.

The Principle drew both consensus and disagreement from the experts. While everyone I interviewed generally agreed that we shouldn't assume upper limits for AI, their reasoning varied and some raised concerns. Stefano Ermon, an assistant professor at Stanford, and Roman Yampolskiy, an associate professor at the University of Louisville, both took a better-safe-than-sorry approach. Ermon turned to history as a reminder of how difficult future predictions are. He explained, "It's always hard to predict the future."
February 22, 2017