Giving an AI control of nuclear weapons: What could possibly go wrong? - Bulletin of the Atomic Scientists

If artificial intelligences controlled nuclear weapons, all of us could be dead. In 1983, Soviet Air Defense Forces Lieutenant Colonel Stanislav Petrov was monitoring nuclear early warning systems when the computer concluded, with the highest confidence, that the United States had launched a nuclear attack. But Petrov was doubtful: the computer estimated that only a handful of nuclear weapons were incoming, whereas a genuine surprise attack would more plausibly be an overwhelming first strike. He also didn't trust the newly deployed launch detection system, and ground radar offered no corroborating evidence. Petrov decided the warning was a false positive and did nothing. The computer was wrong; Petrov was right.