Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when dealing with robots, computers, and programs? Do human beings occupy a privileged place in the universe?
Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice. He was in his early 50s, a fairly heavy drinker, untenured, and apparently uninterested in tenure. "I don't think the university was paying him on a regular basis," recalls Roy Baumeister, then a student at Princeton and today a professor of psychology at Florida State University. But among the youthful inhabitants of the dorm, Jaynes was working on his masterpiece, and had been for years.
We Do Not See Objects, We Detect Objects
10 Arguments For The Conscious Mind
4 Arguments For The Inter Mind
Developing An Artificial Inter Mind
Conscious Artificial Intelligence Using The Inter Mind Model
Human Consciousness Transfer Using The Inter Mind Model
Reality Is A Simulation Using The Inter Mind Model
If A Tree Falls In A Forest Using The Inter Mind Model

The Big Bang happens and a new Universe is created. This Universe consists of Matter, Energy, and Space. After billions of years of complicated interactions and processes the Matter, Energy, and Space produce a planet with Conscious Life Forms (CLFs). In the course of their evolution the CLFs will need to See each other in order to live and interact with each other.
Self-awareness: this level of "consciousness" is still mysterious in humans, although neuroscientists have made several breakthroughs in understanding what happens in the human brain when we become aware of something, i.e., when an "I" - or a "self" - emerges and we have subjective experiences. For many, high-level consciousness is perhaps the "last bastion" of humanity in retaining some kind of superiority over the intelligent machines of the future. Nevertheless, creating machines that mimic self-awareness may not be impossible. I say "mimic" because, unless we find an objective way to measure human consciousness, we will forever be unable to conclude whether a machine is "truly" conscious or not. Machines that would have us believe they have a self, or a personality, should be relatively easy to develop.
David Chalmers, who coined the phrase "Hard Problem of consciousness," is arguably the leading modern advocate for the possibility that physical reality needs to be augmented by some kind of additional ingredient in order to explain consciousness--in particular, to account for the kinds of inner mental experience pinpointed by the Hard Problem. One of his favorite tools has been yet another thought experiment: the philosophical zombie. Unlike undead zombies, which seek out brains and generate movie franchises, philosophical zombies look and behave exactly like ordinary human beings. Indeed, they are perfectly physically identical to non-zombie people. The difference is that they are lacking in any inner mental experience.
Future of work experts (yes, it's a thing now) and AI scientists who spoke to Lateline variously described a future in which there were fewer full-time, traditional jobs requiring one skill set; fewer routine administrative tasks; fewer repetitive manual tasks; and more jobs working for and with "thinking" machines. "We don't make computers that have a lot of emotional intelligence," Professor Walsh says. "We are social people, so the jobs that require lots of emotional intelligence -- being a nurse, marketing jobs, being a psychologist, any job that involves interacting with people -- those will be the safe jobs." Jobs growth is already strong in the caring economy, with unmet demand in child care, aged care, health care and education -- although many of those jobs are poorly paid.
Her response when William poses this question: "Well, if you can't tell, does it matter?" But our fascination with robot sentience may be leading us to overlook a deeper, and equally intriguing question: Could robots fundamentally change the way humans engage not with robots... but with other humans? The Hidden Brain Podcast is hosted by Shankar Vedantam and produced by Maggie Penman, Jennifer Schmidt, Renee Klahr, Rhaina Cohen, and Parth Shah.
Treating the two terms as interchangeable, or simply folding machine learning into the AI landscape, only complicates things. Machine learning is typically separated into supervised learning, in which an algorithm learns from labeled examples to predict a specific target (this covers both classification and regression), and unsupervised learning, in which it discovers structure, such as clusters, in unlabeled data. AI, by contrast, refers to a particular set of methodological techniques, including neural networks and expert systems, that mimic cognitive aspects of human intelligence. Indeed, if you look at the machine learning Wikipedia page, the techniques used for typical AI applications appear as a subset of machine learning.
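The supervised/unsupervised distinction above can be made concrete with a minimal sketch using only the standard library. The data, function names, and the two toy algorithms (a 1-nearest-neighbor classifier and a 1-D two-means clustering) are illustrative choices, not anything from the source text: the point is simply that the supervised routine is handed labels, while the unsupervised one must discover groupings on its own.

```python
def nearest_neighbor_predict(train, labels, point):
    """Supervised: predict a label for `point` from labeled examples."""
    distances = [abs(x - point) for x in train]
    return labels[distances.index(min(distances))]

def two_means_cluster(data, iterations=10):
    """Unsupervised: split unlabeled 1-D data into two clusters (toy k-means).

    Assumes the data actually separates into two non-empty groups.
    """
    c0, c1 = min(data), max(data)  # initialize centroids at the extremes
    for _ in range(iterations):
        g0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in data if abs(x - c0) > abs(x - c1)]
        c0 = sum(g0) / len(g0)     # move each centroid to its group's mean
        c1 = sum(g1) / len(g1)
    return g0, g1

# Supervised: the labels are given at training time.
train, labels = [1.0, 2.0, 9.0, 10.0], ["low", "low", "high", "high"]
print(nearest_neighbor_predict(train, labels, 1.5))   # "low"

# Unsupervised: no labels -- the algorithm finds the grouping itself.
g0, g1 = two_means_cluster([1.0, 2.0, 9.0, 10.0])
print(sorted(g0), sorted(g1))   # [1.0, 2.0] [9.0, 10.0]
```

The two functions recover the same split here, but only the supervised one can name its groups "low" and "high"; the unsupervised one can report that two groups exist, not what they mean.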
According to scientists and legal experts responding to the bank's warning this November, there is now an urgent need for the development of intelligent algorithms to be put on the political agenda. Top of the agenda, as far as Lightfoot is concerned, is the economic impact: if AI eliminates large numbers of jobs and the incomes that go with them, how will people make a living, and what will they do? It is a concern that Professor Toby Walsh, an expert in AI at Australia's University of New South Wales and a prominent campaigner against the use of AI in military weapons, says is justified and needs to be urgently considered. Professor Walsh and fellow AI expert Murray Shanahan, Professor of Cognitive Robotics at London's Imperial College, were nonetheless wary of calls for regulation of the sector, which they said would inhibit research. According to Professor Walsh, scientists working in AI have already started to exercise a degree of self-control over the exploitation of their discoveries; the areas that need to be focused on are the ramifications of the technology.