OpenAI co-founder warns 'superintelligent' AI must be controlled to prevent possible human extinction

FOX News

A co-founder of artificial intelligence leader OpenAI is warning that superintelligence must be controlled in order to prevent the extinction of the human race. "Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction," OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a Tuesday blog post. The pair believe such advancements could arrive as soon as this decade. Managing these risks, they said, would require new institutions for governance and solving the problem of superintelligence alignment: ensuring that AI systems much smarter than humans "follow human intent."


Why I'm Not Worried About A.I. Killing Everyone and Taking Over the World

Slate

This article was co-published with Understanding AI, a newsletter that explores how A.I. works and how it's changing our world. Geoffrey Hinton is a legendary computer scientist whose work laid the foundation for today's artificial intelligence technology. He co-authored two of the most influential A.I. papers: a 1986 paper describing a foundational training technique, called backpropagation, that is still used to train deep neural networks, and a 2012 paper demonstrating that deep neural networks could be shockingly good at recognizing images. The 2012 paper helped spark the deep learning boom of the last decade; Google hired its authors in 2013, and Hinton had been helping develop the company's A.I. technology ever since. But last week Hinton quit Google so he could speak freely about his fear that A.I. systems will soon become smarter than us and gain the power to enslave or kill us. "There are very few examples of a more intelligent thing being controlled by a less intelligent thing," Hinton said in an interview on CNN last week.
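The excerpt mentions backpropagation only in passing. As a rough, illustrative sketch of the idea -- toy data, a one-hidden-layer network, sigmoid activations, and squared-error loss are all assumptions made here for brevity, not details from the 1986 paper -- the chain-rule training loop looks like this in Python/NumPy:

```python
import numpy as np

# Minimal, illustrative backpropagation for a one-hidden-layer network.
# Toy data and architecture are assumptions, not details from the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # toy inputs
y = (X[:, :1] * X[:, 1:] > 0).astype(float)   # toy XOR-like targets

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: push the error gradient back through each layer
    # with the chain rule -- the core of backpropagation.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.1 * h.T @ d_out
    W1 -= 0.1 * X.T @ d_h
```

The same backward sweep, scaled up to billions of parameters, is still how today's deep networks are trained.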


The Real Threat From A.I. Isn't Superintelligence. It's Gullibility.

Slate

The rapid rise of artificial intelligence over the past few decades, from pipe dream to reality, has been staggering. A.I. programs have long been chess and Jeopardy! champions, but they have also conquered poker, crossword puzzles, Go, and even protein folding. They power the social media, video, and search sites we all use daily, and very recently they have leaped into a realm previously thought unimaginable for computers: artistic creativity. Given this meteoric ascent, it's not surprising that there are continued warnings of a bleak Terminator-style future of humanity destroyed by superintelligent A.I.s that we unwittingly unleash upon ourselves.


Driving as Proxy for Human Nature

#artificialintelligence

Is man born free but everywhere in chains, as Rousseau would claim? Or is life nasty, brutish, and short, as Hobbes would have it? These questions come down to views on human nature. Thomas Sowell contends there is a conflict between two visions of human nature: the unconstrained and the constrained. Here, driving serves as a sort of natural experiment for testing human nature.


Will The Next AI Be Superintelligent?

#artificialintelligence

In 2005, Ray Kurzweil declared that "the singularity is near." Now, AI can code in any language, and we're moving toward far more capable AI. GPT-3 got "mind-boggling" results by training on a ton of data: basically the whole Internet. It doesn't need to be trained on your specific use case (zero-shot learning). It can fool 88% of people, and we're still in the baby stage.
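"Zero-shot" is the key technical claim here: the model is given only an instruction at inference time, with no task-specific fine-tuning and no worked examples in the prompt. A minimal sketch of the contrast -- the complete() function below is a hypothetical stand-in for any large-language-model API, not a real endpoint:

```python
# Sketch of zero-shot vs. few-shot prompting. complete() is a hypothetical
# stand-in for a large-language-model completion API, not a real endpoint.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM completion call here")

# Zero-shot: only an instruction, no task-specific examples or fine-tuning.
# This is the property the excerpt is pointing at.
zero_shot_prompt = "Translate to French: The singularity is near."

# Few-shot, for contrast: a handful of worked examples go into the prompt.
few_shot_prompt = (
    "English: cheese -> French: fromage\n"
    "English: good morning -> French: bonjour\n"
    "English: The singularity is near. -> French:"
)
```

In the zero-shot case the model must rely entirely on what it absorbed during pretraining, which is why results like GPT-3's were considered so striking.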


Superintelligent, Amoral, and Out of Control - Issue 84: Outbreak

Nautilus

In the summer of 1956, a small group of mathematicians and computer scientists gathered at Dartmouth College to embark on the grand project of designing intelligent machines. The ultimate goal, as they saw it, was to build machines rivaling human intelligence. As the decades passed and AI became an established field, it lowered its sights. There were great successes in logic, reasoning, and game-playing, but progress was stubbornly slow in areas like vision and fine motor control. This led many AI researchers to abandon their earlier goals of fully general intelligence and to focus instead on solving specific problems with specialized methods.


Opinion: We Shouldn't Be Scared by 'Superintelligent A.I.'

#artificialintelligence

Intelligent machines catastrophically misinterpreting human desires is a frequent trope in science fiction, perhaps used most memorably in Isaac Asimov's stories of robots that misconstrue the famous "three laws of robotics." The idea of artificial intelligence going awry resonates with human fears about technology. But current discussions of superhuman A.I. are plagued by flawed intuitions about the nature of intelligence. We don't need to go back all the way to Isaac Asimov -- there are plenty of recent examples of this kind of fear. Take an Op-Ed in The New York Times and a new book, "Human Compatible," by the computer scientist Stuart Russell.


A.I. Is the Cause Of -- And Solution To -- the End of the World

#artificialintelligence

Asteroids, supervolcanoes, nuclear war, climate change, engineered viruses, artificial intelligence, and even aliens -- the end may be closer than you think. For the next two weeks, OneZero will be featuring essays drawn from editor Bryan Walsh's forthcoming book End Times: A Brief Guide to the End of the World, which hits shelves on August 27 and is available for pre-order now, as well as pieces by other experts in the burgeoning field of existential risk. It's up to us to postpone the apocalypse. There is no easy definition for artificial intelligence, or A.I. Scientists can't agree on what constitutes "true A.I." versus what might simply be a very effective and fast computer program. But here's a shot: intelligence is the ability to perceive one's environment accurately and take actions that maximize the probability of achieving given objectives.
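Walsh's working definition -- perceive the environment accurately, then act to maximize the probability of achieving a given objective -- is essentially the textbook notion of a rational agent. A toy sketch of that definition in code (the states, actions, and probabilities are invented purely for illustration):

```python
# Toy illustration of the definition above: an agent perceives a state and
# chooses the action that maximizes its probability of achieving the
# objective. States, actions, and numbers are invented for this sketch.
SUCCESS_PROB = {
    # (perceived state, action) -> probability the objective is achieved
    ("door_open", "walk_through"): 0.95,
    ("door_open", "wait"): 0.10,
    ("door_closed", "walk_through"): 0.05,
    ("door_closed", "open_door"): 0.80,
}

def act(perceived_state: str) -> str:
    """Pick the action with the highest probability of success."""
    options = {a: p for (s, a), p in SUCCESS_PROB.items() if s == perceived_state}
    return max(options, key=options.get)

print(act("door_closed"))  # -> open_door
```

Nothing in this definition requires human-like understanding, which is exactly why debates about "true A.I." versus very effective programs remain unsettled.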


Less Like Us: An Alternate Theory of Artificial General Intelligence

#artificialintelligence

The question of whether an artificial general intelligence will be developed in the future--and, if so, when it might arrive--is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology. Some futurists point to Moore's Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a "general intelligence." But evolution has produced minds like the human mind at least once.