Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when dealing with robots, computers, and programs? Do human beings occupy a privileged place in the universe?
Future of work experts (yes, it's a thing now) and AI scientists who spoke to Lateline variously described a future in which there were fewer full-time, traditional jobs requiring a single skill set; fewer routine administrative tasks; fewer repetitive manual tasks; and more jobs working for and with "thinking" machines. "We don't make computers that have a lot of emotional intelligence," Professor Walsh says. "We are social people, so the jobs that require lots of emotional intelligence -- being a nurse, marketing jobs, being a psychologist, any job that involves interacting with people -- those will be the safe jobs." Jobs growth is already strong in the caring economy, with unmet demand in child care, aged care, health care and education -- although many of those jobs are poorly paid.
Her response when William poses this question: "Well, if you can't tell, does it matter?" But our fascination with robot sentience may be leading us to overlook a deeper, and equally intriguing question: Could robots fundamentally change the way humans engage not with robots... but with other humans? The Hidden Brain Podcast is hosted by Shankar Vedantam and produced by Maggie Penman, Jennifer Schmidt, Renee Klahr, Rhaina Cohen, and Parth Shah.
Treating the two terms as interchangeable, or treating machine learning as just one part of the AI landscape, only complicates things. Machine learning is typically divided into techniques that learn to predict a specific target from labelled examples, called supervised machine learning, and techniques that find structure in unlabelled data, such as grouping similar items into clusters, called unsupervised machine learning. AI, by contrast, refers to a very limited subset of methodological techniques, including neural networks and expert systems, that mimic the cognitive systems of human intelligence. If you look at the machine learning Wikipedia page, the techniques used for typical AI applications are a subset of machine learning.
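The supervised/unsupervised distinction can be made concrete with a minimal sketch in plain Python (no ML libraries). The data points, labels, and helper names below are invented for illustration, not drawn from the article: the supervised learner predicts a known label from labelled examples (1-nearest-neighbour), while the unsupervised learner groups the same points without ever seeing labels (a naive k-means).

```python
def dist(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Supervised setting: each training point comes with a known answer.
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.1, 8.7), "large")]

def predict(point):
    """Supervised: predict the label of a new point by copying the
    label of its nearest labelled example (1-nearest-neighbour)."""
    return min(train, key=lambda ex: dist(ex[0], point))[1]

def kmeans(points, k=2, steps=10):
    """Unsupervised: partition unlabelled points into k clusters by
    repeatedly assigning points to the nearest centre and moving each
    centre to the mean of its assigned points."""
    centers = points[:k]  # naive initialisation: first k points
    for _ in range(steps):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist(centers[j], p))
            groups[i].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return groups

print(predict((1.1, 0.9)))   # a label learned from labelled examples
print(kmeans([(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.1, 8.7)]))
```

The key contrast: `predict` needs the answers (`"small"`, `"large"`) up front, while `kmeans` discovers the two groups from geometry alone and never names them.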
According to scientists and legal experts responding to the bank's warning this November, there is now an urgent need to put the development of intelligent algorithms on the political agenda. Top of the agenda, as far as Lightfoot is concerned, is the economic impact: if AI eliminates large numbers of jobs and the incomes that go with them, how will people make a living and what will they do? It is a concern that Professor Toby Walsh, an expert in AI at Australia's University of New South Wales and a prominent campaigner against the use of AI in military weapons, says is justified and needs to be urgently considered. Professor Walsh and fellow AI expert Murray Shanahan, Professor of Cognitive Robotics at London's Imperial College, were nonetheless wary of calls for regulation of the sector, which they said would inhibit research. According to Professor Walsh, scientists working in AI have already started to exercise a degree of self-control over the exploitation of their discoveries; the areas that need to be focused on are the ramifications of the technology.
The patient's mother had died in a state hospital of Huntington's disease--a genetic degenerative brain disease. Though I didn't know it at the time, I had run headlong into the "hard problem of consciousness," the enigma of how physical brain mechanisms create purely subjective mental states. My first hint of the interaction between religious feelings and theories of consciousness came from Montreal Neurological Institute neurosurgeon Wilder Penfield's 1975 book, Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. To see how this might work, take a page from Penfield's brain stimulation studies where he demonstrates that the mental sensations of consciousness can occur independently from any thought that they seem to qualify.
The argument and thought experiment now generally known as the Chinese Room Argument was first published in a paper in 1980 by American philosopher John Searle (1932-). Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. Searle concludes that running a program cannot by itself produce understanding. Instead, minds must result from biological processes; computers can at best simulate these biological processes. The argument thus has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally.
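The mechanics of the room can be sketched in a few lines: a program that maps input strings to output strings by rule lookup alone, with no access to meaning. This is an illustrative toy, not Searle's own formulation; the "rule book" entries below are invented, and a real Chinese conversation would need vastly more rules, but the point is the same: the lookup is pure syntax.

```python
# The rule book pairs an incoming string of symbols with the string
# to pass back out. Nothing here represents what the symbols mean.
RULE_BOOK = {
    "你好": "你好！",          # greeting in, greeting out
    "你会说中文吗": "会。",    # "do you speak Chinese?" in, "yes" out
}

def room(symbols):
    """Mechanically follow the rule book; if no rule matches,
    emit a fallback symbol. No understanding is consulted."""
    return RULE_BOOK.get(symbols, "？")

print(room("你好"))  # to an outside observer, a fluent reply
```

To the person outside, the replies are indistinguishable from a speaker's; inside, there is only `dict` lookup, which is Searle's point about syntax without semantics.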
Does that mean talking about technologies you're developing at DeepMind that may be ready to be used in Google products? You've said it could be decades before you've truly developed artificial general intelligence. One of the first big programs I remember writing, around 11 years old, was a program to play Othello, or Reversi as you maybe call it in the U.S. It does feel like the closest thing to magic, and I think we can use it for incredible good to solve all sorts of really pressing issues that we have, from climate to healthy aging and so on.
But theories as to how, exactly, grey matter generates consciousness are challenged when a fully conscious man is found to be missing most of his brain. It was discovered then that his skull was filled largely by fluid, leaving just a thin perimeter of actual brain tissue. Doctors believe the man's brain slowly eroded over 30 years due to a build-up of fluid in the brain's ventricles, a condition known as "hydrocephalus." Ultimately, Cleeremans believes that consciousness is "the brain's theory about itself."
There are two main debates when it comes to artificial intelligence, by which I mean self-aware thinking machines with a general intelligence equal to or greater than that of humans. Indeed, it's part of the singularitarian thesis that intelligent machines will eventually be able to design even smarter machines, which in turn will design even smarter machines, ad infinitum until we (or rather they) have reached the physical limits of what is possible. It won't be the highest intelligence physically possible; it will be the highest intelligence you can build around a soft, fragile human personality. So, assuming we preserve human beings as individuals with distinct lives and personalities, humans will never be the equals of vast AI networks unrestrained by such limitations.