Inside Jeffrey Epstein's Forgotten AI Summit

WIRED

In 2002, artificial intelligence was still in winter. Despite decades of effort, dreams of bestowing computers with human-like cognition and real-world understanding had not materialized. To look for a way forward, a small group of scientists gathered for "The St. Thomas Common Sense Symposium." AI pioneer Marvin Minsky was the central presence, along with his protégé Pushpinder Singh. After the symposium, Minsky, Singh, and renowned philosopher Aaron Sloman published a paper on the group's ideas for how to reach human-like AI.


AI Don't Know Jack? – MetaDevo

#artificialintelligence

Think your AI understands the meanings of words? Or understands anything at all? Guess again. There's a big issue inherent in trying to make artificial minds that understand like a human does. It's called the Symbol Grounding Problem. TL;DR: How can understanding in an AI be made intrinsic to the system, rather than merely parasitic on the meanings in the minds of its developers and trainers?


The meta-problem and the transfer of knowledge between theories of consciousness: a software engineer's take

Kvassay, Marcel

arXiv.org Artificial Intelligence

This contribution examines two radically different explanations of our phenomenal intuitions, one reductive and one strongly non-reductive, and identifies two germane ideas that could benefit many other theories of consciousness. Firstly, the ability of sophisticated agent architectures with a purely physical implementation to support certain functional forms of qualia or proto-qualia appears to entail the possibility of machine consciousness with qualia, not only for reductive theories but also for the nonreductive ones that regard consciousness as ubiquitous in Nature. Secondly, analysis of introspective psychological material seems to hint that, under the threshold of our ordinary waking awareness, there exist further 'submerged' or 'subliminal' layers of consciousness which constitute a hidden foundation and support and another source of our phenomenal intuitions. These 'submerged' layers might help explain certain puzzling phenomena concerning subliminal perception, such as the apparently 'unconscious' multisensory integration and learning of subliminal stimuli. As a researcher in intelligent technologies, I have long been interested in scholarly debates about consciousness.


Response to Sloman's Review of Affective Computing

AI Magazine

Sloman was one of the first in the AI community to write about the role of emotion in computing (Sloman and Croucher 1981), and I value his insight into theories of emotional and intelligent systems. Alas, Sloman's review dwells largely on some details related to unknown features of human emotion; hence, I don't think the review captures the flavor of the book. However, he does raise interesting points, as well as potential misunderstandings, both of which I am grateful for the opportunity to comment on. Sloman writes that I "welcome emotion detectors in a wide range of contexts and relationships, for example, teacher and pupil." This might sound innocuous, but its presumption of the existence of emotion detectors is not.


Ubiquitous Computing and Sensing

AI Magazine

Some experts are likely to be fiercely critical because of omissions or errors. Others with tunnel vision are likely to miss the point. Rosalind Picard, with considerable courage, addresses a broad collection of themes, including the nature of motivation, emotions, and feeling; the detection of emotional and other affective states and processes; the nature of intelligence and the relationships between intelligence and emotions; the physiology of the brain and other aspects of human physiology relevant to affective states; requirements for effective human-computer interfaces in a wide range of situations; wearable devices with a range of sensing and communication functions; philosophical and ethical issues relating to computers of the future; and a brief encounter with theology. This is a book with a bold vision. Some readers will find it inspiring and mind stretching.


Designing Architectures for Human-Level Intelligence

AI Magazine

To build a machine that has "common sense" was once a principal goal in the field of artificial intelligence. But most researchers in recent years have retreated from that ambitious aim. Instead, each developed some special technique that deals well with some class of problems but does poorly at almost everything else. We are convinced, however, that no one such method will ever turn out to be "best," and that instead, the powerful AI systems of the future will use a diverse array of resources that, together, will deal with a great range of problems. To build a machine that's resourceful enough to have humanlike common sense, we must develop ways to combine the advantages of multiple methods to represent knowledge, multiple ways to make inferences, and multiple ways to learn.


Siri storm caused by economist's comments

BBC News

A leading economist has inadvertently caused a storm by saying he preferred the voice on the iPhone Siri virtual assistant to be male because he felt that made it more trustworthy. Nobel prize laureate Sir Christopher Pissarides's comments at a conference in Norway attracted fierce criticism. He told the BBC he apologised for upsetting people and his comment was meant to be "light-hearted". "It's a mistake and I'm sorry, but the audience was laughing." Sir Christopher was part of an all-male panel taking part in a Q&A audience discussion at the Starmus Festival in Trondheim about the future of humanity.



What can businesses do to make innovation happen? – ZDNet

AITopics Original Links

However, the HP-Aecus study also shows that innovation isn't just one thing. It's a spectrum of ideas, ranging from those that affect how an individual team functions, through big IT projects, to major scientific or technological breakthroughs. The trouble is that although almost everyone thinks innovation is desirable, just six percent of companies see themselves as very good at it, with only a further one in four businesses saying they're seeing regular and successful achievements from their efforts. Together, those two groups account for 31 percent of organisations and demonstrate the disparity between the desire for increasing innovation and its realisation. To close that gap, rather than leaving innovation to chance, the first step for most businesses has to be the creation of the right environment for it to flourish.


AI and Philosophy: How Can You Know the Dancer from the Dance?

AITopics Original Links

Aaron Sloman was teaching philosophy at the University of Sussex in 1969, when he met Max Clowes. Clowes had done pioneering work in computer image interpretation. Now, he was asking Sloman to drop the way he learned to do philosophy at Oxford and to start studying artificial intelligence instead. Nine years later, Sloman published The Computer Revolution in Philosophy, in which he extolled AI's power to extend our ability to think. He's been examining the "deep continuity" between AI and very old problems in philosophy ever since.