Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when robots, computers, and programs are involved? Do human beings occupy a privileged place in the universe?
Future of work experts (yes, it's a thing now) and AI scientists who spoke to Lateline variously described a future with fewer full-time, traditional jobs requiring a single skill set; fewer routine administrative tasks; fewer repetitive manual tasks; and more jobs working for and with "thinking" machines. "We don't make computers that have a lot of emotional intelligence," Professor Walsh says. "We are social people, so the jobs that require lots of emotional intelligence -- being a nurse, marketing jobs, being a psychologist, any job that involves interacting with people -- those will be the safe jobs." Jobs growth is already strong in the caring economy, with unmet demand in child care, aged care, health care and education -- although many of those jobs are poorly paid.
Her response when William poses this question: "Well, if you can't tell, does it matter?" But our fascination with robot sentience may be leading us to overlook a deeper, and equally intriguing question: Could robots fundamentally change the way humans engage not with robots... but with other humans? The Hidden Brain Podcast is hosted by Shankar Vedantam and produced by Maggie Penman, Jennifer Schmidt, Renee Klahr, Rhaina Cohen, and Parth Shah.
Treating the two terms as interchangeable, or treating machine learning as merely a part of the AI landscape, only complicates things. Machine learning is typically separated into supervised techniques, which learn to predict a specific labeled outcome, and unsupervised techniques, which discover structure -- such as clusters -- in unlabeled data. AI, on this view, refers to a very limited subset of methodological techniques, including neural networks and expert systems, that mimic the cognitive systems of human intelligence. If you look at the machine learning Wikipedia page, the techniques used for typical AI applications are a subset of machine learning.
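The supervised/unsupervised distinction can be made concrete with a minimal sketch in pure Python. The toy data, the nearest-centroid classifier, and the tiny 1-D k-means below are all hypothetical illustrations chosen for brevity, not any standard library's API: the supervised learner is given labels and learns to predict them, while the unsupervised learner is given only raw numbers and discovers the grouping itself.

```python
from statistics import mean

# --- Supervised: labeled examples, learn to predict the label ---
labeled = [(1.0, "low"), (1.2, "low"), (4.8, "high"), (5.1, "high")]

def nearest_centroid(labeled_points):
    """Learn one centroid per label from labeled 1-D data."""
    by_label = {}
    for x, y in labeled_points:
        by_label.setdefault(y, []).append(x)
    return {label: mean(xs) for label, xs in by_label.items()}

def predict(centroids, x):
    """Predict the label whose learned centroid is closest to x."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

model = nearest_centroid(labeled)
print(predict(model, 4.5))  # -> "high"

# --- Unsupervised: unlabeled data, discover structure (clusters) ---
unlabeled = [1.0, 1.2, 4.8, 5.1]

def kmeans_1d(xs, k=2, iters=10):
    """Tiny 1-D k-means: group points around k centroids, no labels used."""
    centroids = sorted(xs)[:k]  # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            i = min(range(k), key=lambda i: abs(centroids[i] - x))
            groups[i].append(x)
        centroids = [mean(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

print(sorted(kmeans_1d(unlabeled)))  # two cluster centers, found without labels
```

Note the asymmetry: the supervised learner never sees unlabeled data, and the unsupervised learner never sees a label, yet both end up grouping the same points together.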
According to scientists and legal experts responding to the bank's warning this November, there is now an urgent need for the development of intelligent algorithms to be put on the political agenda. Top of the agenda, as far as Lightfoot is concerned, is the economic impact: if AI cuts large numbers of jobs and the incomes of the people who held them, how will they make a living and what will they do? It is a concern that Professor Toby Walsh, an expert in AI at Australia's University of New South Wales and a prominent campaigner against the use of AI in military weapons, says is justified and needs to be urgently considered. Professor Walsh and fellow AI expert Murray Shanahan, Professor of Cognitive Robotics at London's Imperial College, were nevertheless wary of calls for regulation of the sector, which they said would inhibit research. According to Professor Walsh, scientists working in AI have already started to exercise a degree of self-control over the exploitation of their discoveries; the areas that need to be focussed on are the ramifications of the technology.
The patient's mother had died in a state hospital of Huntington's disease--a genetic degenerative brain disease. Though I didn't know it at the time, I had run headlong into the "hard problem of consciousness," the enigma of how physical brain mechanisms create purely subjective mental states. My first hint of the interaction between religious feelings and theories of consciousness came from Montreal Neurological Institute neurosurgeon Wilder Penfield's 1975 book, Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. To see how this might work, take a page from Penfield's brain-stimulation studies, in which he demonstrated that the mental sensations of consciousness can occur independently of any thought they seem to qualify.
The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 paper by American philosopher John Searle (1932-). Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The conclusion Searle draws is that running a program can never by itself produce understanding; instead, minds must result from biological processes, and computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science, and cognitive science generally.
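The mechanical character of the room can be sketched in a few lines of code. This is a deliberately crude toy, not any real translation system: the rulebook below is a hypothetical lookup table, and the point is precisely that the function maps input symbol strings to output symbol strings with no grasp of what either means -- pure syntax, no semantics.

```python
# Hypothetical "rulebook": input symbol strings -> output symbol strings.
# The rules are followed mechanically; nothing in the program knows that
# the first pair means "How are you?" -> "I'm fine."
RULEBOOK = {
    "你好吗": "我很好",
    "你会说中文吗": "会一点",
}

def chinese_room(symbols: str) -> str:
    """Follow the rulebook like Searle in the room: match shapes, emit shapes."""
    # Default rule for unrecognized input: "please say that again"
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # -> 我很好
```

To an outside observer the replies look fluent, yet the program is only matching character shapes -- which is exactly the intuition the thought experiment pumps.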
The superbrain that predicts the weather accurately will be in a completely different kingdom of mind from the intelligence woven into your clothes. The types of artificial minds we are making now and will make in the coming century will be designed to perform specialized tasks, usually tasks beyond what we can do. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can't think.