AAAI Conferences

In order to properly handle a dangerous Artificially Intelligent (AI) system, it is important to understand how the system came to be in such a state. In popular culture (science-fiction movies and books), AIs/robots become self-aware and, as a result, rebel against humanity and decide to destroy it. While this is one possible scenario, it is probably the least likely path to the appearance of dangerous AI. In this work, we survey, classify, and analyze a number of circumstances that might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. Previous relevant work either surveyed specific goals/meta-rules that might lead to malevolent behavior in AIs (Özkural 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of their development (Turchin 2015a, Turchin 2015b).

Artificial Intelligence Safety and Security (Chapman & Hall/CRC Artificial Intelligence and Robotics Series): Roman V. Yampolskiy: 9780815369820: Amazon.com: Books


"Artificial Intelligence: Safety and Security is a timely and ambitious edited volume. It comprises 28 chapters organized under three distinct themes: security, artificial intelligence, and safety. Edited by Roman V. Yampolskiy, the contributions are well integrated and challenge common conceptions. Yampolskiy has assembled a diverse team of leading scholars. In sum, the book provides valuable insight into the cyber ecosystem. It can be read in any order without missing the essence of the subject matter, yet the chapters speak to each other. The chapters provide insight into new research areas and experimental designs. The book is a must-read for computer scientists, security experts, mathematicians, students, and individuals who are interested in learning more about the progress of the artificial intelligence field. It will also be of interest to hackers and the intelligence community."

Possibilities behind artificial intelligence being explored at UofL


University of Louisville computer engineering professor Roman Yampolskiy is studying artificial intelligence. He says most Americans don't understand and aren't prepared for the takeover of many jobs by robots in the very near future. Many repetitive jobs are already being done by computers or robots. "We're starting to see more intellectual jobs being automated and once we get to the human level, everything goes," Yampolskiy says. "The prediction is, something like, 2045 is the likely time when machines will do the same things most humans do."

Artificial stupidity could help save humanity from an AI takeover

New Scientist

Maybe we need artificial stupidity instead. In order to avoid an apocalyptic scenario where machines take over the world, some are suggesting we should limit AI to human-level intelligence. At least then we'll stand a fighting chance, says Roman Yampolskiy at the University of Louisville, USA.

Prof Roman Yampolskiy: Superintelligence is Coming