Welcome, Robot Overlords. Please Don't Fire Us?

#artificialintelligence

This is a story about the future. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce--beachfront property in Malibu, original Rembrandts--but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. Maybe you think I'm pulling your leg here.


The Ultimate AI Glossary

#artificialintelligence

Explain: Many of the fears around AI stem from the possible job losses caused by automation in industries such as manufacturing. However, automation is also at the heart of one of the most exciting and tangible AI products: driverless vehicles. An automated system can run without the help of a human, but that does not make it artificially intelligent. An AI-powered automated system would not only make decisions without a human but would also learn from those decisions and alter its actions as a result.
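To make the glossary's distinction concrete, here is a minimal sketch in Python (my own illustration, not taken from the glossary; the thermostat classes and feedback labels are hypothetical): a merely automated system applies a fixed rule forever, while an "AI-powered" automated system also updates its rule from feedback on its past decisions.

```python
import random


class FixedThermostat:
    """Automated but not intelligent: the decision rule never changes."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def decide(self, temperature: float) -> str:
        return "heat" if temperature < self.setpoint else "idle"


class LearningThermostat(FixedThermostat):
    """Also automated, but it adjusts its own rule from feedback on past decisions."""

    def __init__(self, setpoint: float, step: float = 0.5):
        super().__init__(setpoint)
        self.step = step

    def learn(self, feedback: str) -> None:
        # Feedback on the last decision shifts the setpoint, so future decisions differ.
        if feedback == "too_cold":
            self.setpoint += self.step
        elif feedback == "too_warm":
            self.setpoint -= self.step


if __name__ == "__main__":
    fixed = FixedThermostat(setpoint=20.0)
    learner = LearningThermostat(setpoint=20.0)
    for _ in range(3):
        temp = random.uniform(15.0, 25.0)
        print("fixed:", fixed.decide(temp), "learner:", learner.decide(temp))
        learner.learn("too_cold")  # occupant feedback; only the learner adapts
    print("fixed setpoint:", fixed.setpoint, "learner setpoint:", learner.setpoint)
```

Both controllers run without human help; only the second one changes its future behaviour based on the outcomes of its earlier decisions, which is the distinction the glossary entry draws.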


Rise of the Robots--The Future of Artificial Intelligence

#artificialintelligence

Editor's Note: This article was originally printed in the 2008 Scientific American Special Report on Robots. It is being published on the Web as part of ScientificAmerican.com's … In recent years the mushrooming power, functionality and ubiquity of computers and the Internet have outstripped early forecasts about technology's rate of advancement and usefulness in everyday life. Alert pundits now foresee a world saturated with powerful computer chips, which will increasingly insinuate themselves into our gadgets, dwellings, apparel and even our bodies. Yet a closely related goal has remained stubbornly elusive. In stark contrast to the largely unanticipated explosion of computers into the mainstream, the entire endeavor of robotics has failed rather completely to live up to the predictions of the 1950s. In those days experts who were dazzled by the seemingly miraculous calculational ability of computers thought that if only the right software were written, computers could become the artificial brains of sophisticated autonomous robots. Within a decade or two, they believed, such robots would be cleaning our floors, mowing our lawns and, in general, eliminating drudgery from our lives.


Designing AI Systems that Obey Our Laws and Values

#artificialintelligence

Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge, and to respond to the potential risks associated with increasingly autonomous AI systems. These AI oversight systems serve to verify that operational systems do not stray unduly from the guidelines of their programmers and to bring them back into compliance if they do stray. The introduction of such second-order oversight systems is not meant to suggest strict, powerful, or rigid (from here on 'strong') controls. Operational systems need a great degree of latitude in order to follow the lessons of their learning from additional data mining and experience and to be able to render at least semi-autonomous decisions (more about this later).
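As a rough illustration of the second-order arrangement described above, here is a minimal Python sketch (my own, not code from the paper; the class names, the speed-limit rule, and the self-driving framing are hypothetical stand-ins): a "guardian" audits the decisions proposed by an operational system and corrects them only when a guideline is breached, otherwise leaving the system its latitude.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Decision:
    action: str
    speed_kmh: float


class OperationalSystem:
    """Stands in for a learning, semi-autonomous controller (e.g. a self-driving car)."""

    def propose(self, situation: dict) -> Decision:
        # In reality this would come from a learned policy, not a fixed rule.
        return Decision(action="proceed", speed_kmh=situation.get("desired_speed", 50.0))


class Guardian:
    """Second-order oversight: verifies decisions and brings them back into compliance."""

    def __init__(self, rules: List[Callable[[Decision], Decision]]):
        self.rules = rules
        self.log: List[str] = []  # record of interventions for later review

    def review(self, decision: Decision) -> Decision:
        for rule in self.rules:
            corrected = rule(decision)
            if corrected != decision:
                self.log.append(f"corrected {decision} -> {corrected}")
                decision = corrected
        return decision


def speed_limit_rule(limit: float) -> Callable[[Decision], Decision]:
    """One example guideline: never exceed the posted limit."""

    def rule(d: Decision) -> Decision:
        if d.speed_kmh > limit:
            return Decision(action=d.action, speed_kmh=limit)
        return d

    return rule


if __name__ == "__main__":
    car = OperationalSystem()
    guardian = Guardian(rules=[speed_limit_rule(60.0)])
    proposal = car.propose({"desired_speed": 80.0})
    final = guardian.review(proposal)
    print(final, guardian.log)
```

The point of the design is that the guardian does not dictate the operational system's behaviour; it only intervenes when a decision falls outside the stated guidelines, consistent with the excerpt's rejection of 'strong' controls.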


Without a 'world government' technology will destroy us, says Stephen Hawking

The Independent - Tech

Stephen Hawking has warned that technology needs to be controlled in order to prevent it from destroying the human race. The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we need to establish a way of identifying threats quickly, before they have a chance to escalate. "Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages," he told The Times. "It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war.