This is when AI's top researchers think artificial general intelligence will be achieved

#artificialintelligence

At the heart of the discipline of artificial intelligence is the idea that one day we'll be able to build a machine that's as smart as a human. Such a system is often referred to as an artificial general intelligence, or AGI: a name that distinguishes the concept from the broader field of study and makes clear that true AGI possesses intelligence that is both broad and adaptable. To date, we've built countless systems that are superhuman at specific tasks, but none that can match a rat in general brain power. Yet despite the centrality of this idea to the field of AI, there is little agreement among researchers as to when this feat might actually be achievable.


The case for taking AI seriously as a threat to humanity

#artificialintelligence

Stephen Hawking has said, "The development of full artificial intelligence could spell the end of the human race." Elon Musk claims that AI is humanity's "biggest existential threat." That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley, as well as many of the researchers working in AI today, believe that advanced AI systems, if deployed carelessly, could end all life on Earth. This concern has been raised since the dawn of computing, but it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don't know. Not everyone in the field shares these worries, though. Some researchers think advanced AI is so distant that there's no point in thinking about it now. Others worry that excessive hype about the power of their field might kill it prematurely. And even among those who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.


Can Futurists Predict the Year of the Singularity?

#artificialintelligence

The end of the world as we know it is near. And that's a good thing, according to many of the futurists predicting the imminent arrival of what's been called the technological singularity. The technological singularity is the idea that technological progress, particularly in artificial intelligence, will reach a tipping point at which machines become exponentially smarter than humans. It has been a hot topic of late. Well-known futurist and Google engineer Ray Kurzweil (co-founder and chancellor of Singularity University) reiterated at Austin's South by Southwest (SXSW) festival this month his bold prediction that machines will match human intelligence by 2029 (and has previously said that the Singularity itself will occur by 2045).


Are We Smart Enough to Control Artificial Intelligence?

#artificialintelligence

Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. "Don't laugh at me," he said, "but I was counting on the singularity." My friend worked in technology; he'd seen the changes that faster microprocessors and networks had wrought.