THE COMPUTER REVOLUTION IN PHILOSOPHY (1978): Chapter 9

AITopics Original Links

Seeing the significance in a collection of experimental results, grasping a character in a play or novel, and diagnosing an illness on the basis of a lot of ill-defined symptoms, all require this ability to make a 'Gestalt' emerge from a mass of information. A much simpler example is our ability to see something familiar in a picture like Figure 1. How does a 'Gestalt', a familiar word, emerge from all those dots? Close analysis shows that this kind of ability is required even for ordinary visual perception and speech understanding, where we are totally unaware that we are interpreting untidy and ambiguous sense-data. In order to appreciate these unconscious achievements, try listening to very short extracts from tapes of human speech (about the length of a single word), or looking at manuscripts, landscapes, street scenes and domestic objects through a long narrow tube.


Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence

Classics

This paper echoes, from a philosophical standpoint, the claim of McCarthy and Hayes that Philosophy and Artificial Intelligence have important relations. Philosophical problems about the use of “intuition” in reasoning are related, via a concept of analogical representation, to problems in the simulation of perception, problem-solving and the generation of useful sets of possibilities in considering how to act. The requirements for intelligent decision-making proposed by McCarthy and Hayes are criticised as too narrow, and more general requirements are suggested instead. In IJCAI 1971: International Joint Conference on Artificial Intelligence. Revised paper in Artificial Intelligence, Volume 2, Issues 3–4, Winter 1971, pages 209–225.


Why Some Machines May Need Qualia and How They Can Have Them: Including a Demanding New Turing Test for Robot Philosophers

AAAI Conferences

This paper extends three decades of work arguing that instead of focusing only on (adult) human minds, we should study many kinds of minds, natural and artificial, and try to understand the space containing all of them, by studying what they do, how they do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding the complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness, lead to muddle and confusion unless they are placed in that broader context, in part because current ontologies for specifying and comparing designs are inconsistent and inadequate. A methodology for making progress is summarised and a novel requirement proposed for humanlike philosophical robots, namely that a single generic design, in addition to meeting many other more familiar requirements, should be capable of developing different and opposed viewpoints regarding philosophical questions about consciousness, and the so-called hard problem. No designs proposed so far come close.

Could We Be Discussing Bogus Concepts? Many debates about consciousness appear to be endless because of conceptual confusions preventing clarity as to what the issues are and what does or does not count as progress. This makes it hard to decide what should go into a machine if it is to be described as 'conscious', or as 'having qualia'. Triumphant demonstrations by some AI developers of machines with alleged competences (seeing, having emotions, learning, being autonomous, being conscious, having qualia, etc.) are regarded by others as proving nothing of interest because the systems do not satisfy their definitions or their requirements-specifications.


Artificial Intelligence: Structures and Strategies for Complex Problem Solving

AITopics Original Links

Many and long were the conversations between Lord Byron and Shelley to which I was a devout and silent listener. During one of these, various philosophical doctrines were discussed, and among others the nature of the principle of life, and whether there was any probability of its ever being discovered and communicated. They talked of the experiments of Dr. Darwin (I speak not of what the doctor really did or said that he did, but, as more to my purpose, of what was then spoken of as having been done by him), who preserved a piece of vermicelli in a glass case till by some extraordinary means it began to move with a voluntary motion. Not thus, after all, would life be given. Perhaps a corpse would be reanimated; galvanism had given token of such things: perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth (Butler 1998).


SS04-02-024.pdf

AAAI Conferences

This is a set of notes relating to an invited talk at the cross-disciplinary workshop on Architectures for Modeling Emotion at the AAAI Spring Symposium at Stanford University in March 2004. The organisers of the workshop note that work on emotions "is often carried out in an ad hoc manner", and hope to remedy this by focusing on two themes: (a) validation of emotion models and architectures, and (b) relevance of recent findings from affective neuroscience research. I shall focus mainly on (a), but in a manner which, I hope, is relevant to (b), by addressing the need for conceptual clarification to remove, or at least reduce, the ad-hocery, both in modelling and in empirical research. In particular I try to show how a design-based approach can provide an improved conceptual framework and sharpen empirical questions relating to the study of mind and brain. From this standpoint it turns out that what are normally called emotions are a somewhat fuzzy subset of a larger class of states and processes that can arise out of interactions between different mechanisms in an architecture. What exactly the architecture is will determine both the larger class and the subset, since different architectures support different classes of states and processes. In order to develop the design-based approach we need a good ontology for characterising varieties of architectures and the states and processes that can occur in them. At present this too is often a matter of much ad-hocery. We propose steps toward a remedy.