Interview: Max Tegmark on Superintelligent AI, Cosmic Apocalypse, and Life 3.0
IEEE Spectrum: Last Friday you had a discussion about AI with Yann LeCun, one of the most important computer scientists working on AI. LeCun said that since we don't know what form a superintelligent AI would take, it's premature to start researching safety mechanisms to control it.

Max Tegmark: Just because we don't know quite what will go wrong doesn't mean we shouldn't think about it. That's the basic idea of safety engineering: you think hard about what might go wrong in order to prevent it from happening. When the leaders of the Apollo program carefully thought through everything that could go wrong in sending a rocket with astronauts to the moon, they weren't being alarmist. They were doing precisely what ultimately led to the success of the mission.
Sep-14-2017, 21:30:03 GMT