DAVID BRIN: How Might Artificial Intelligence Come About?
Those fretfully debating artificial intelligence (AI) might best start by appraising the half dozen general pathways under exploration in laboratories around the world. While these approaches overlap, they carry distinct implications for what characteristics emerging synthetic minds might display, including (for example) whether it will be easy or hard to instill human-style ethical values. Most problematic may be those efforts taking place in secret. The "Moore's Law crossing" argument is appraised, in light of discoveries that brain computation may involve much more than just synapses. Will efforts to develop Sympathetic Robotics tweak compassion from humans long before automatons are truly self-aware? It is argued that most foreseeable problems might be dealt with in the same way that human versions of oppression and error are best addressed -- via reciprocal accountability. For this to happen, there should be a diversity of types, designs, and minds, interacting under fair competition in a generally open environment.

As varied concepts from science fiction are reified by rapidly advancing technology, some trends are viewed with worry by our smartest peers. Portions of the intelligentsia -- typified by Google's Ray Kurzweil [1] -- foresee AI, or Artificial General Intelligence (AGI), as likely to bring good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0. Others, such as Stephen Hawking and Francis Fukuyama, warn that the arrival of sapient, or super-sapient, machinery may bring an end to our species -- or at least to its relevance on the cosmic stage -- a potentiality evoked in many a lurid Hollywood film. Swedish philosopher Nick Bostrom, in Superintelligence [2], suggests that even advanced AIs that obey their initial, human-defined goals will likely generate "instrumental subgoals" such as self-preservation, cognitive enhancement, and resource acquisition.
In one nightmare scenario, Bostrom posits an AI that -- ordered to "make paperclips" -- proceeds to overcome all obstacles and transform the solar system into paperclips. A variant on this theme makes up the grand arc of the famed "Three Laws" robot series by science fiction author Isaac Asimov [3]. Taking a middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator founder Sam Altman to establish OpenAI [4], an endeavor that aims to keep artificial intelligence research -- and its products -- open by maximizing transparency and accountability. As one who has promoted those two key words for a quarter of a century, I wholly approve [5].
Jun-12-2017, 01:55:06 GMT