This is when AI's top researchers think artificial general intelligence will be achieved

#artificialintelligence

At the heart of the discipline of artificial intelligence is the idea that one day we'll be able to build a machine as smart as a human. Such a system is often called an artificial general intelligence, or AGI, a name that distinguishes the concept from the broader field of study and makes clear that true AGI would possess intelligence that is both broad and adaptable. To date, we've built countless systems that are superhuman at specific tasks, but none that can match a rat in general brain power. Yet despite the centrality of this idea to the field of AI, there's little agreement among researchers as to when this feat might actually be achieved.


The case for taking AI seriously as a threat to humanity

#artificialintelligence

Stephen Hawking has said, "The development of full artificial intelligence could spell the end of the human race." Elon Musk claims that AI is humanity's "biggest existential threat." That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on Earth. This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don't know. Not everyone in the field agrees, though. Some researchers think advanced AI is so distant that there's no point in thinking about it now. Others worry that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.


When Will We Reach the Singularity? – A Timeline Consensus from AI Researchers (Emerj)

#artificialintelligence

AI is applicable in a wide variety of areas, everything from agriculture to cybersecurity. To date, however, most of our work has focused on the short-term impact of AI in business. Here, we're not talking about next quarter, or even next year, but about the decades to come. As AI becomes more powerful, we expect it to have a larger impact on our world, including your organization. So we decided to do what we do best: a deep analysis of AI applications and implications.


The Doomsday Invention

#artificialintelligence

Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled "Superintelligence: Paths, Dangers, Strategies," it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology, even nuclear weapons, and that if its development is not managed carefully, humanity risks engineering its own extinction. Central to this concern is the prospect of an "intelligence explosion," a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude. Such a system would effectively be a new kind of life, and Bostrom's fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor. He sometimes notes, as a point of comparison, the trajectories ...

