Is It Enough to Get the Behaviour Right?

AAAI Conferences

This paper deals with the relationship between intelligent behaviour, on the one hand, and the mental qualities needed to produce it, on the other. We consider two well-known opposing positions on this issue: one due to Alan Turing and one due to John Searle (via the Chinese Room). In particular, we argue against Searle, showing that his answer to the so-called System Reply does not work. The argument takes a novel form: we shift the debate to a different and more plausible room where the required conversational behaviour is much easier to characterize and to analyze. Although this room is much simpler than the Chinese Room, we show that the behaviour there is still complex enough that it cannot be produced without appropriate mental qualities.


What is artificial intelligence? (Or, can machines think?)

Robohub

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils. I start the keynote with Alan Turing's famous question, "Can a machine think?", and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI, 60 years ago, we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact, it has turned out to be the other way round: we've had computers that can expertly play chess for 20 years, but we can't yet build a robot that could go into your kitchen and make you a cup of tea. In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, as if intelligence sat on a single linear scale running from not very intelligent up to human intelligence.


The combination of human and artificial intelligence will define humanity's future

#artificialintelligence

Bryan Johnson is the founder and chief executive officer of the neuroprosthesis developer Kernel and the founder of OS Fund and Braintree. Over the past few decades of summer blockbuster movies and Silicon Valley products, artificial intelligence (AI) has become increasingly familiar and sexy, imbued with a perversely dystopian allure. What is talked about less, and has been dwarfed in attention and resources, is human intelligence (HI). In its varied forms -- from the mysterious brains of octopuses and the swarm-minds of ants to Go-playing deep learning machines and driverless-car autopilots -- intelligence is the most powerful and precious resource in existence. Our own minds are the most familiar examples of a phenomenon characterized by a great deal of diversity.


The First Law of Robotics (a call to arms)

AAAI Conferences

Even before the advent of Artificial Intelligence, science fiction writer Isaac Asimov recognized that a robot must place the protection of humans from harm at a higher priority than obeying human orders. In 1940, Asimov stated the First Law of Robotics, capturing an essential insight: a robot should not slavishly obey human commands; its foremost goal should be to avoid harming humans. His first two laws read: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Inspired by Asimov, we pose fundamental questions such as: how should one formalize the rich, but informal, notion of "harm"? While we address some of these questions in technical detail, the primary goal of this paper is to focus attention on Asimov's concern: society will reject autonomous agents unless we have some credible means of making them safe.
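To make the priority ordering in the abstract concrete, here is a minimal, hypothetical Python sketch (not taken from the paper): First-Law safety filters the set of candidate actions before Second-Law obedience is even considered. The names Action, predicted_harm, and HARM_THRESHOLD are illustrative assumptions, and the numeric harm estimates stand in for whatever formalization of "harm" the paper actually develops.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Action:
        name: str
        ordered_by_human: bool  # was this action commanded by a person?
        predicted_harm: float   # assumed harm estimate in [0, 1]

    HARM_THRESHOLD = 0.0  # assumption: any predicted harm is disallowed

    def first_law_safe(action: Action) -> bool:
        # First Law dominates: a harmful action is forbidden even if ordered.
        return action.predicted_harm <= HARM_THRESHOLD

    def choose(actions: List[Action]) -> Optional[Action]:
        # Second Law applies only among First-Law-safe actions:
        # among the safe ones, prefer an action that obeys a human order.
        safe = [a for a in actions if first_law_safe(a)]
        if not safe:
            return None  # refuse to act rather than risk harming a human
        ordered = [a for a in safe if a.ordered_by_human]
        return (ordered or safe)[0]

    # The robot refuses a harmful order but carries out a harmless one.
    options = [
        Action("push_person", ordered_by_human=True, predicted_harm=0.9),
        Action("fetch_coffee", ordered_by_human=True, predicted_harm=0.0),
    ]
    print(choose(options).name)  # -> fetch_coffee

The design point, under these assumptions, is that obedience is evaluated only inside the harm-free subset, so no order can ever outrank safety; the hard part the paper gestures at is producing predicted_harm itself.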


Elon Musk's Neuralink is not about preventing an AI apocalypse

#artificialintelligence

When news hit yesterday that serial entrepreneur and futurist Elon Musk was investing in a brain-chip venture called Neuralink, the billionaire's fan club went wild. Theories ranged from this being Musk's bold plan to forestall an AI apocalypse to more measured responses about it being a promising contribution toward curing neurodegenerative diseases. Some also, only half-jokingly it seems, accused Musk of having played a little too much Mass Effect: Andromeda over the weekend. But to unpack Neuralink's purported goal a little further, you have to understand what is and is not currently possible in the realm of neuroscience, and why Silicon Valley is putting more time, money, and energy into exploring cognitive enhancement. Neuralink isn't the first company to look into what are called brain-computer interfaces, but it is perhaps the highest-profile one now that Musk's name is attached.