How Do You Explain That Machines Won't Really Think Like People?

#artificialintelligence

You could try a subtle approach, as one Oxford researcher did. Rather than "artificial general intelligence" (machines that think like people), he proposes a different model: "Comprehensive AI Services" (CAIS), building on the work of Eric Drexler, author of Engines of Creation. Instead of relying on some unforeseen breakthrough, the CAIS model simply assumes that specialized, narrow AI will continue to improve at each of its tasks, and that the range of tasks machine learning algorithms can perform will keep widening. Once a sufficient number of tasks have been automated, the services an AI provides will be so comprehensive that they resemble a general intelligence. One could then imagine a "general" intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialized service algorithm that can perform that task.
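
A minimal sketch of what that matching layer could look like, assuming nothing beyond the paragraph above; the service names and routing-by-key scheme are hypothetical, chosen only to make the idea concrete:

```python
# Hypothetical sketch of the CAIS idea: "general" intelligence as a thin
# router that matches each incoming task to the narrow service best suited
# to perform it. Every service name here is an illustrative assumption.

from typing import Callable, Dict

# Registry of specialized, narrow services, each good at exactly one task.
SERVICES: Dict[str, Callable[[str], str]] = {
    "translate": lambda text: f"<translation of {text!r}>",
    "summarize": lambda text: f"<summary of {text!r}>",
    "route_vehicle": lambda text: f"<driving plan for {text!r}>",
}

def comprehensive_ai(task: str, payload: str) -> str:
    """The 'general' layer: nothing but task-to-service matching."""
    service = SERVICES.get(task)
    if service is None:
        raise ValueError(f"No narrow service yet automates {task!r}")
    return service(payload)

# As the registry of automated tasks widens, the aggregate starts to
# resemble a general intelligence without any single general algorithm.
print(comprehensive_ai("summarize", "a long report"))
```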


Why not all forms of artificial intelligence are equally scary

#artificialintelligence

How worried should we be about artificial intelligence? Recently, I asked a number of AI researchers this question. The responses I received varied considerably; it turns out there is not much agreement about the risks or implications. Non-experts are even more confused about AI and its attendant challenges. Part of the problem is that "artificial intelligence" is an ambiguous term: by AI one can mean a Roomba vacuum cleaner, a self-driving truck, or one of those death-dealing Terminator robots.


The Electric Turing Acid Test

#artificialintelligence

Parts of this essay by Andrew Smart are adapted from his book Beyond Zero and One (2015), published by OR Books. Machine intelligence is growing at an increasingly rapid pace. Leading minds at the cutting edge of AI research think that machines with human-level intelligence will likely be realized by the year 2100. Beyond this, the human-level AIs would themselves rapidly create artificial intelligences that far outstrip human intelligence. Such vastly superhuman AI would result from an "intelligence explosion."


AI Dangers

Communications of the ACM

In January 2015, a host of prominent figures in high tech and science, along with experts in artificial intelligence (AI), published a piece called "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter," calling for research on the societal impacts of AI. Unfortunately, the media grossly distorted and hyped the original formulation into doomsday scenarios. Nonetheless, some thinkers do warn of serious dangers posed by AI, tacitly invoking the notion of a Technological Singularity (first suggested by Good [8]) to ground their fears. According to this idea, computational machines will improve in competence at an exponential rate, reaching the point where they correct their own defects and program themselves to produce artificial superintelligent agents that far surpass human capabilities in virtually every cognitive domain.
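
To see why exponential improvement follows from that premise, here is a toy model (my own illustration, not anything from the ACM piece): if each round of self-improvement raises competence in proportion to current competence, growth compounds. The rate k and the starting level are arbitrary assumptions.

```python
# Toy model of the singularity premise described above: competence that
# improves in proportion to itself grows exponentially. Both constants
# below are illustrative assumptions, not figures from the article.

k = 0.5          # assumed fractional self-improvement per generation
competence = 1.0 # assumed starting competence (human level = 1.0)

for generation in range(1, 11):
    # C_{n+1} = (1 + k) * C_n  =>  C_n = (1 + k)**n, i.e. exponential growth
    competence *= 1 + k
    print(f"generation {generation}: competence = {competence:.1f}")
```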