Are We Smart Enough to Control Artificial Intelligence?

#artificialintelligence

Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. "Don't laugh at me," he said, "but I was counting on the singularity."


Our Fear of Artificial Intelligence

#artificialintelligence

Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. "Don't laugh at me," he said, "but I was counting on the singularity." My friend worked in technology; he'd seen the changes that faster microprocessors and networks had wrought.


The Ethical Questions Behind Artificial Intelligence - Future of Life Institute

#artificialintelligence

What do philosophers and ethicists worry about when they consider the long-term future of artificial intelligence? Well, to start, though most people involved in the field of artificial intelligence are excited about its development, many worry that without proper planning an advanced AI could destroy all of humanity. And no, this does not mean they're worried about Skynet. At a recent NYU conference, the Ethics of Artificial Intelligence, Eliezer Yudkowsky of the Machine Intelligence Research Institute explained that AI run amok is less likely to look like the Terminator and more likely to resemble the overeager broom that Mickey Mouse brings to life in the Sorcerer's Apprentice sequence of Fantasia. The broom has one goal, and not only does it stay focused on that goal regardless of what Mickey does, it multiplies itself and becomes ever more efficient.


Future of Humanity Institute

AITopics Original Links

Led by Founding Director Prof. Nick Bostrom, the Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects. The Future of Humanity Institute's mission is to shed light on crucial considerations for humanity's future, and we seek to focus our work where we can make the greatest positive difference. Prof. Nick Bostrom's New York Times best seller, Superintelligence: Paths, Dangers, Strategies, provides an introduction to our work on the long-term implications of artificial intelligence.
