The Computational Metaphor and Artificial Intelligence: A Reflective Examination of a Theoretical Falsework
Just how little tolerance exists for questioning this metaphor can be illustrated by the reaction to Winograd and Flores's (1986) book Understanding Computers and Cognition. In personal comments, the book and its authors have been savaged. Published comments are, of course, more temperate (Vellino et al. 1987) but still reveal the hypersensitivity of the field; similar reactions to Penrose's (1989) even more recent book The Emperor's New Mind have been observed. Like Suchman (1987) and Clancey (1987), we feel that insights of significant value are to be gained from an objective consideration of traditional and alternative perspectives. Some efforts in this direction are evident (Haugeland [1985], Hill [1989], and Born [1987], for example), but the issue requires additional and ongoing attention.
Universal Planning: An (Almost) Universally Bad Idea
Several authors have recently suggested that a possible approach to planning in uncertain domains is to analyze all possible situations beforehand and then store information about what to do in each (Agre and Chapman 1987; Drummond 1988; Kaelbling 1988; Nilsson 1989; Rosenschein and Kaelbling 1986; Schoppers 1987). The key idea in this work is that an agent is working to achieve some goal and that to determine what to do next in the pursuit of this goal, the agent finds its current situation in a large table that prescribes the correct action to take. Of course, the action suggested by the table might simply be, "Think about your current situation and decide what to do next." This method is, in many ways, representative of the conventional approach to planning; what distinguishes universal plans from conventional plans, however, is that the action suggested by a universal plan is always a primitive one that the agent can execute immediately. To present a sharp criticism of the approach known as universal planning, I begin by giving a precise definition of it.
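To make the idea concrete, here is a minimal sketch (a toy illustration of my own, not an example from the article) of a universal plan for a one-dimensional world. All situations are analyzed offline and stored in a table; at run time, acting is pure lookup, and every prescribed action is a primitive the agent can execute immediately:

```python
# Toy universal plan: precompute a primitive action for every situation.
GOAL = 3
POSITIONS = range(7)  # the (tiny) space of possible situations

# Offline phase: analyze every situation and store the correct reaction.
plan = {p: "move_right" if p < GOAL else "move_left" if p > GOAL else "stop"
        for p in POSITIONS}

# Online phase: no run-time deliberation, just table lookup.
state = 0
while plan[state] != "stop":
    state += 1 if plan[state] == "move_right" else -1
print("reached goal at position", state)
```

The catch that motivates the article's criticism is that the table must cover every possible situation, and the number of situations grows explosively with the size of the domain.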
Ray Reiter's Knowledge in Action
What Ray Reiter has done is to take a set of ideas worked out by him and his collaborators over the last 11 years and recrystallize them into a sustained and consistent presentation. This is not a collection of those papers but a complete rewrite that avoids the usual repetition and notational inconsistency that one might expect. It makes one wish that everyone as prolific as Reiter would follow his example--but because that's unlikely, we must be grateful for what he has given us. In case you haven't heard, Reiter and his crew, starting with the publication of Reiter (1991), breathed new life into the situation calculus (McCarthy and Hayes 1969), which had gotten the reputation of being of limited expressiveness. The basic concept of the calculus is, of course, the situation, which we can think of as a state of affairs, that is, a complete specification of the truth values of all propositions (in a suitable logical language), although that's closer to McCarthy and Hayes's traditional formulation than the analysis Reiter settles on (which I describe later).
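To give a concrete taste of the machinery Reiter is known for, here is a successor-state axiom of the kind introduced in Reiter (1991); this is a standard textbook instance, not an example quoted from the book under review:

\[
\mathit{Holding}(x, do(a, s)) \equiv a = \mathit{pickup}(x) \lor \bigl(\mathit{Holding}(x, s) \land a \neq \mathit{drop}(x)\bigr)
\]

Read: the agent holds x in the situation resulting from action a exactly when a is picking x up, or the agent already held x and a is not dropping it. Packing both the effects and the noneffects of actions into one biconditional per fluent is Reiter's well-known solution to the frame problem.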
Research Workshop on Expert Judgment, Human Error, and Intelligent Systems
This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, but judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which is interested in designs that eliminate human error and bias tendencies. As a result, almost no one at the workshop had met before, and the discussion for most participants was novel and lively. Many areas of previously unexamined overlap were identified.
Practically Coordinating
To coordinate, intelligent agents might need to know something about themselves, about each other, about how others view themselves and others, about how others think others view themselves and others, and so on. Taken to an extreme, the amount of knowledge an agent might possess to coordinate its interactions with others might outstrip the agent's limited reasoning capacity (its available time, memory, and so on). Certainly, people who know much (or think they know much) are sometimes subject to cockiness, confusion, paralysis, resignation, or other unpleasant states. Much of the work in studying and building multiagent systems has thus been devoted to developing practical techniques for achieving coordination, typically by limiting the knowledge available to, or necessary for, agents. This article categorizes techniques for keeping agents suitably ignorant so that they can practically coordinate and gives a selective survey of examples of these techniques for illustration.
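As one illustration of such a technique, consider bounding how deeply agents model one another. The sketch below (illustrative only; the names are mine, not drawn from the article) builds nested models of other agents down to a fixed recursion depth, keeping the agent "suitably ignorant" beyond that point:

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    name: str
    depth: int  # levels of "what X thinks Y thinks..." still represented
    models_of_others: dict = field(default_factory=dict)  # name -> AgentModel

def build_model(name: str, agents: list, depth: int) -> AgentModel:
    """Recursively model the other agents, cutting off at `depth`
    so the nesting cannot outstrip bounded reasoning capacity."""
    model = AgentModel(name=name, depth=depth)
    if depth > 0:
        for other in agents:
            if other != name:
                model.models_of_others[other] = build_model(other, agents, depth - 1)
    return model

# Three agents, nesting limited to two levels:
root = build_model("A", ["A", "B", "C"], depth=2)
print(root.models_of_others["B"].models_of_others["C"].depth)  # 0: no deeper models
```

Deeper nesting buys more accurate predictions of others' choices at the cost of time and memory; capping the depth is one way of trading coordination quality for tractability.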
Using Reactive and Adaptive Behaviors to Play Soccer
This work deals with designing simple behaviors to allow quadruped robots to play soccer. The robots are fully autonomous; they cannot exchange messages with one another. They are equipped with a charge-coupled-device camera that allows them to detect objects in the scene. In addition to vision problems such as changing lighting conditions and color confusion, legged robots must cope with "bouncing images" caused by the successive impacts of the legs hitting the ground. When defining task-driven strategies, the designer has to take into account the influences of the locomotion and vision systems on the behavior.
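For a flavor of what such a behavior looks like, here is a minimal sketch (illustrative only, not the authors' controller) of a reactive ball-chasing step that maps the latest camera percept directly to a primitive action, with no inter-robot messages:

```python
def reactive_step(percept: dict) -> str:
    """Map one camera percept to a primitive action.
    percept: {'ball_visible': bool, 'ball_bearing': float}
    where ball_bearing is in radians (+ left, - right)."""
    TURN_THRESHOLD = 0.15  # assumed tolerance before walking straight
    if not percept["ball_visible"]:
        return "search_turn"      # spin in place until the ball reappears
    if percept["ball_bearing"] > TURN_THRESHOLD:
        return "turn_left"
    if percept["ball_bearing"] < -TURN_THRESHOLD:
        return "turn_right"
    return "walk_forward"         # ball roughly centered: approach it

print(reactive_step({"ball_visible": True, "ball_bearing": 0.4}))  # turn_left
```

In practice, the threshold and the percept itself must absorb the "bouncing images" mentioned previously, for example by filtering the bearing estimate over several frames.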
Steps toward Formalizing Context
The importance of contextual reasoning is emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion. Although the word context is frequently used in descriptions, explanations, and analyses of computer programs in these areas, its meaning is frequently left to the reader's understanding; that is, it is used in an implicit and intuitive manner. The point is an old one: "I wish honorable gentlemen would have the fairness to give the entire context of what I did say, and not pick out detached words" (R. Cobden [1849], quoted in Oxford English Dictionary [1978], p. 902). An example of how contexts may help in AI is found in McCarthy's (constructive) criticism of expert systems that lack common sense (McCarthy 1984). The main motivation for studying formal contexts is to resolve the problem of generality in AI.
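As a taste of the notation such formalizations use, McCarthy's ist relation asserts that a proposition holds in a context; his well-known example states, within an outer context c0, that Holmes is a detective in the context of the Sherlock Holmes stories:

\[
c_0 : \mathit{ist}(c_{\text{Sherlock Holmes stories}}, \text{``Holmes is a detective''})
\]

Lifting axioms then relate what holds in one context to what holds in another, letting a reasoner enter and exit contexts rather than commit to a single universal theory.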
Steps toward a Cognitive Vision System
An adequate natural language description of developments in a real-world scene can be taken as proof of "understanding what is going on." An algorithmic system that generates natural language descriptions from video recordings of road traffic scenes can be said to "understand" its input to the extent that the algorithmically generated text is acceptable to the humans judging it. The ability to present a "variant formulation" without distorting the essential parts of the original message is taken as a cue that these essentials have been "understood."

During art lessons, in particular those concerned with classical or ecclesiastic paintings, students are initially invited to merely describe what they see. Frequently, considerable a priori knowledge about ancient mythology or biblical traditions is required to succinctly characterize the depicted scene. Lack of the corresponding knowledge about other cultures can make it difficult for someone with only a European education to really understand, and describe in an appropriate manner, a painting by, for example, a Far Eastern classic artist.

The familiar human experiences mentioned in the preceding paragraph will now be "morphed" into a scientific challenge: to design and implement an algorithmic engine that generates an appropriate textual description of essential developments in a video sequence recorded from a real-world scene. Such an algorithmic engine will serve as one example of a cognitive vision system (CVS), which leaves room, as the experienced reader has noticed, for more than one way to introduce the concept of a CVS. An alternative clearly consists in coupling a computer vision system with a robotic system of some kind and assessing the reactions of such a compound system. Whoever accepts the formulation "one of the actions available to an agent is to produce language. This is called a speech act" (Russell and Norvig 1995) is unlikely to consider the two variants of a CVS alluded to previously as fundamentally different.

With regard to the first CVS version in particular, the following remarks are submitted for consideration: obviously, we avoid a precise definition of understanding in favor of having humans compare the reaction of an algorithmic engine to that expected from a human. This fuzzy approach toward the circumscription of a CVS opens the road to constructive criticism--that is, to incremental system improvement--by pinpointing aspects of an output text that are not yet considered satisfactory.
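To make the challenge concrete, here is a minimal sketch (an assumed structure, not the authors' system) of the final stage of such an engine, which turns recognized traffic events into text via templates; the event triples stand in for the output of the earlier vision stages:

```python
# Hypothetical output of the vision stages: (agent, event, place) triples.
recognized_events = [
    ("car 7", "turns_left", "the intersection"),
    ("car 3", "stops", "the traffic light"),
]

TEMPLATES = {
    "turns_left": "{agent} turns left at {place}.",
    "stops": "{agent} comes to a stop at {place}.",
}

def describe(events):
    """Map each recognized event to a sentence; unknown event types
    are verbalized generically so gaps stay visible to human judges."""
    sentences = []
    for agent, event, place in events:
        template = TEMPLATES.get(event, "{agent} does something at {place}.")
        sentences.append(template.format(agent=agent, place=place))
    return " ".join(sentences)

print(describe(recognized_events))
```

Whether the resulting text is "acceptable to the humans judging it" is exactly the evaluation criterion proposed previously, and unsatisfactory sentences point back to the stage that produced them.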
Task Communication Through Natural Language and Graphics
With increases in the complexity of information that must be communicated either by or to computers comes a corresponding need to find ways to communicate that information simply and effectively. It makes little sense to force the burden of communication onto a single medium, restricted to just one of spoken or written text, gestures, diagrams, or graphical animation, when in many situations information is only communicated effectively through combinations of media. In response to requests for directions, respondents often choose to provide both a sketch map (for visual indications of relative distance, spatial relationships, etc.) and verbal guidance about landmarks to attend to, obstacles to watch out for, opportunities to take, etc. Instructors training a subject in a new task often choose to present the task in at least two ways: they demonstrate what motions the trainee is supposed to carry out, using direct training, film, or graphic media, and they convey what intentional actions those motions are meant to represent, through natural-language text or speech. Graphic media (diagrams and animation) can provide a way of visualizing significant patterns in situations (cf. the current interest in scientific visualization), while natural-language text (either spoken or written) can provide needed information on what the patterns may mean, why they may have developed, or what may be done to deal with them. Natural-language narration is necessary to convey the meaning and significance of such visualizations.
Report on the 2007 Workshop on Modeling and Reasoning in Context
The fourth Modeling and Reasoning in Context (MRC) workshop was held on August 20-21, 2007, in conjunction with the Sixth International and Interdisciplinary Conference on Modeling and Using Context at Roskilde University, Denmark. The MRC workshop series, begun in 2004, brings together researchers and practitioners to exchange ideas and results on modeling and reasoning issues for context-sensitive systems; the overall goal is to further the understanding, development, and application of AI methods for context-sensitive information technology. This year's workshop broadened the focus with a special track on the role of contextualization in human tasks (CHUT), exploring the practical relationships between tasks, actors, and workplace context that may shape system design.

The workshop was split into formal paper presentations and discussion sessions. The first two discussions combined themed panels with audience participation, while the closing free-form discussion offered participants the opportunity to examine issues of their choice and provide closing perspective on the workshop as a whole. Following an MRC tradition, the workshop also included an informal dinner, enabling participants to continue their discussions in a traditional Copenhagen restaurant.

The MRC paper presentations covered topics such as ontology-based context models, the benefits of multilayered models (combining general metalevel and domain models with application-specific instances), the use of situation lattices to achieve situation awareness, user modeling in mobile ambient intelligent systems, and middleware for managing context. These were illustrated for a range of tasks, such as contextualized software reuse and an email-filtering approach that uses multiple heterogeneous sources of contextual data to infer when and where to deliver messages. The contextualization of human tasks was demonstrated from multiple perspectives as well, ranging from the analysis of interpersonal work practices to discover contextual parameters, to an application to improve drivers' situation awareness. These diverse presentations gave a good overview of the various uses of context, their benefits, and their challenges for modeling and reasoning, providing a starting point for the discussions. There was enthusiastic participation in the workshop's discussions, and many participants considered the exchanges there to be the most rewarding part of the workshop.