AI Risk Skepticism

Yampolskiy, Roman V.

arXiv.org Artificial Intelligence 

It has been predicted that if recent advancements in machine learning continue uninterrupted, human-level or even superintelligent Artificially Intelligent (AI) systems will be designed at some point in the near future [1]. Currently available (and near-term predicted) AI software is subhuman in its general intelligence capability, but it is already capable of being hazardous in a number of narrow domains [2], mostly with regard to privacy, discrimination [3, 4], crime automation, or armed conflict [5]. Superintelligent AI, predicted to be developed in the longer term, is widely anticipated [6] to be far more dangerous and is potentially capable of causing great harm, including an existential risk event for humanity as a whole [7, 8]. Together, the short-term and long-term concerns are known as AI Risk [9]. An infinite number of pathways exists to a state of the world in which a dangerous AI is unleashed [10].
