This paper deals with the relationship between intelligent behaviour, on the one hand, and the mental qualities needed to produce it, on the other. We consider two well-known opposing positions on this issue: one due to Alan Turing and one due to John Searle (via the Chinese Room). In particular, we argue against Searle, showing that his answer to the so-called System Reply does not work. The argument takes a novel form: we shift the debate to a different and more plausible room where the required conversational behaviour is much easier to characterize and to analyze. Although this room is much simpler than the Chinese Room, we show that the behaviour there is still complex enough that it cannot be produced without appropriate mental qualities.
Bryan Johnson is the founder and chief executive officer of the neuroprosthesis developer Kernel and the founder of OS Fund and Braintree. Over the past few decades of summer blockbuster movies and Silicon Valley products, artificial intelligence (AI) has become increasingly familiar and sexy, imbued with a perversely dystopian allure. What's talked about less, and has been dwarfed in attention and resources, is human intelligence (HI). In its varied forms -- from the mysterious brains of octopuses and the swarm minds of ants to Go-playing deep-learning machines and driverless-car autopilots -- intelligence is the most powerful and precious resource in existence. Our own minds are merely the most familiar examples of a phenomenon characterized by a great deal of diversity.
Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party, served as the interviewer, asking Kurzweil a number of questions about Sophia the robot, Watson-based medical diagnostics, and the potential risks of our future. But, of course, no interview with Kurzweil goes by without discussing the pace of AI relative to human intelligence. Examples of modern AI like AlphaGo already show clear signs of outpacing human intelligence on specific tasks. This has been a recurring pattern in AI for the last couple of decades, starting with the simple goal of defeating the world's best (human) chess players. Today, AI systems outpace us in chess, Go, various strategy computer games, and even medical diagnostics.
In an earlier blog article I wrote about how human intelligence differs from artificial intelligence: human intelligence is general, while artificial intelligence is specialized. That article provided "food for thought" for those who fear technological evolution, and specifically AI. In today's article I offer further reflections on the evolution of AI. Put simply, AI is about thinking machines. The English computer scientist Alan Turing was the first academic to propose considering the question "Can machines think?", in 1950.
In its annual report, the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, called for a ban, in certain cases, on technology designed to recognize people's emotions. Specifically, the researchers said affect recognition technology, also called emotion recognition technology, should not be used in decisions that "impact people's lives and access to opportunities," such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions. What is this technology, which is already being used and marketed, and why is it raising concerns? Researchers have been actively working on computer vision algorithms that can determine the emotions and intent of humans, along with making other inferences, for at least a decade. Facial expression analysis has been around since at least 2003.