Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when robots, computers, and programs are involved? Do human beings occupy a privileged place in the universe?
A call to arms about the broken nature of artificial intelligence, and the powerful corporations that are turning the human-machine relationship on its head. We like to think that we are in control of the future of "artificial" intelligence. The reality, though, is that we, the everyday people whose data powers AI, aren't actually in control of anything. When, for example, we speak with Alexa, we contribute that data to a system we can't see and have no input into, one largely free from regulation or oversight. The big nine corporations -- Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM, and Apple -- are the new gods of AI, and they are short-changing our futures to reap immediate financial gain.
This contribution examines two radically different explanations of our phenomenal intuitions, one reductive and one strongly non-reductive, and identifies two germane ideas that could benefit many other theories of consciousness. Firstly, the ability of sophisticated agent architectures with a purely physical implementation to support certain functional forms of qualia or proto-qualia appears to entail the possibility of machine consciousness with qualia, not only for reductive theories but also for the non-reductive ones that regard consciousness as ubiquitous in Nature. Secondly, analysis of introspective psychological material seems to hint that, under the threshold of our ordinary waking awareness, there exist further 'submerged' or 'subliminal' layers of consciousness which constitute a hidden foundation and support and another source of our phenomenal intuitions. These 'submerged' layers might help explain certain puzzling phenomena concerning subliminal perception, such as the apparently 'unconscious' multisensory integration and learning of subliminal stimuli. As a researcher in intelligent technologies, I have long been interested in scholarly debates about consciousness.
Humans have learned to travel through space, eradicate diseases and understand nature at the breathtakingly tiny level of fundamental particles. Yet we have no idea how consciousness – our ability to experience and learn about the world in this way and report it to others – arises in the brain. In fact, while scientists have been preoccupied with understanding consciousness for centuries, it remains one of the most important unanswered questions of modern neuroscience. Now our new study, published in Science Advances, sheds light on the mystery by uncovering networks in the brain that are at work when we are conscious.
It's hard to avoid the prominence of AI in our lives, and there is a plethora of predictions about how it will influence our future. In their new book Solomon's Code: Humanity in a World of Thinking Machines, Olaf Groth, Professor of Strategy, Innovation and Economics at HULT International Business School and CEO of advisory network Cambrian.ai, and his co-author explore these questions. I caught up with the authors about the continued integration between technology and humans, and about their call for a "Digital Magna Carta," a broadly accepted charter developed by a multi-stakeholder congress that would help guide the development of advanced technologies and harness their power for the benefit of all humanity. Lisa Kay Solomon: Your new book, Solomon's Code, explores artificial intelligence and the broader human, ethical, and societal implications that all leaders need to consider. AI is a technology that's been in development for decades.
"Doing this requires a consistent terminology that makes principled distinctions, and that allows clear operationalization of the different concepts that a science of emotion will use." The study of emotion and its subjective partner, feelings, is an instance of the brain/mind problem, and the need for a consistent terminology extends to the general case considered here. There are many other disciplines and practices concerned with the human mind, and the foundations for scientific study are not always clear. For example, a statement that a spider or a robot has a mind could be either a scientific or a definitional assertion.
Why call it machine learning when it has nothing to do with thinking machines? Machine learning may not have much to do with thinking machines today, but that wasn't always the case. It's important to realize that when the terms "machine learning" and "artificial intelligence" were coined in the 1950s, the goal was very much to create thinking machines. Researchers thought not only that what we now call "artificial general intelligence" was possible, but that it was bound to happen within a matter of decades, if not years. Furthermore, they thought that the various approaches they were exploring -- such as logic programming and machine learning -- would be enough to get them there.
In an earlier blog article I wrote about how human intelligence differs from artificial intelligence: human intelligence is general intelligence, while artificial intelligence is specialized intelligence. That article offered "food for thought" for those who fear technology evolution, and specifically AI. In today's article I offer more reflections on the evolution of AI. Put simply, AI is about thinking machines. The English computer scientist Alan Turing was the first academic to propose considering the question "Can machines think?", in his 1950 paper "Computing Machinery and Intelligence".
This sounds like easily dismissed bunkum, but as traditional attempts to explain consciousness continue to fail, the "panpsychist" view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including the neuroscientist Christof Koch and the physicist Roger Penrose. "Why should we think common sense is a good guide to what the universe is like?" says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. "Einstein tells us weird things about the nature of time that run counter to common sense; quantum mechanics runs counter to common sense." David Chalmers, a professor of philosophy of mind at New York University, laid out the "hard problem of consciousness" in 1995, arguing that there was still no answer to the question of what causes consciousness. Traditionally, two dominant perspectives, materialism and dualism, have provided a framework for approaching this problem.
Artist Stephanie Dinkins tells a fascinating story about her work with an AI robot made to look like an African-American woman, and about at times sensing some type of consciousness in the machine. She was speaking at the de Young Museum's Thinking Machines conversation series, along with anthropologist Tobias Rees, Director of Transformation with the Humans Program at the American Institute. Dinkins is Associate Professor of Art at Stony Brook University, and her work includes teaching communities about AI and algorithms and trying to answer questions such as: Can a community trust AI systems it did not create? She has worked with pre-college students in poor neighborhoods in Brooklyn, teaching them how to create AI chatbots. They made a chatbot that told "Yo Mamma" jokes, which she said was a success because it showed how AI can be made to reflect local traditions.