Narrative Tools to Improve Collaborative Sense-Making

AAAI Conferences

Narration is a common mode of sense-making in new, ambiguous, or equivocal situations. Here we characterize the role of narration in situations of comprehension and collective problem solving. We then present an approach to modeling narrative knowledge and the associated tool, HyperStoria, which assists a group in acquiring and modeling narrative charts from graphs of goals and events.


Ranking Agent Statements for Building Evolving Ontologies

AAAI Conferences

In this paper a methodology is described for ranking information received from different agents, based on previous experience with them. These rankings are in turn used for asking the right questions to the right agents. In this way agents can build up a reputation. The methods in this paper are strongly influenced by the heuristics humans use when assigning confidence ratings to other humans. The methods provide a solution to current problems with ontologies: (1) handling contradictory and sloppy information, (2) efficient network use instead of broadcasting information, and (3) dealing with ontological drift.
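
The abstract does not give the ranking rule itself, so the following is only a minimal sketch of how experience-based reputation could drive both statement ranking and the choice of which agent to query; the class, the smoothing heuristic, and all names are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict

class ReputationTracker:
    """Illustrative sketch: track how often each agent's statements were
    later confirmed or contradicted, and use that history to rank new
    statements and to pick which agent to ask next."""

    def __init__(self):
        self.confirmed = defaultdict(int)
        self.contradicted = defaultdict(int)

    def record_feedback(self, agent, was_correct):
        # Update the agent's track record after a statement is verified.
        if was_correct:
            self.confirmed[agent] += 1
        else:
            self.contradicted[agent] += 1

    def confidence(self, agent):
        # Laplace-smoothed success rate; an assumed, simple heuristic.
        good, bad = self.confirmed[agent], self.contradicted[agent]
        return (good + 1) / (good + bad + 2)

    def rank_statements(self, statements):
        # statements: list of (agent, statement) pairs, best-reputed first.
        return sorted(statements, key=lambda s: self.confidence(s[0]), reverse=True)

    def best_agent_to_ask(self, candidate_agents):
        # Query the most reputable agent instead of broadcasting to all.
        return max(candidate_agents, key=self.confidence)
```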


Negotiation, Compromise, and Collaboration in Interpersonal and Human-Computer Conversations

AAAI Conferences

The meaning of any message from Clyte had to be negotiated the way a company of soldiers negotiates a minefield. People are very adept at recognizing when something they said has been misunderstood by a conversational partner and at recognizing when they themselves have misunderstood something that was said earlier in the conversation. In either case, they will usually say something to repair the situation and regain mutual understanding. If computers are ever to converse with humans in natural language, they must be as adept as people are in their ability to detect and repair both their own occasional misunderstandings and also those of their conversational partner--perhaps even more so, as this skill will be needed to compensate for the likely deficiencies of computers in other aspects of understanding, which will lead to frequent misunderstandings and non-understandings on each side. The processes through which conversational repairs take place include negotiation, collaboration, and construction of meaning.


Meaning Negotiation

AAAI Conferences

Working notes of this AAAI 2002 workshop, published by AAAI Press, Menlo Park, California. Contents:

Negotiation, Compromise, and Collaboration in Interpersonal and Human-Computer Conversation (Graeme Hirst)
Narrative Tools to Improve Collaborative Sense-Making (Eddie Soulier, Jean Caussanel)
Ranking Agent Statements for Building Evolving Ontologies (Ronny Siebes, Frank van Harmelen)
An Approach to Cooperating Organizational Memories Based on Semantic Negotiation and Unification (Martin Schaaf, Ludger van Elst)
Meaning Negotiation and Communicative Rationality (Roger A. Young)
Negotiation Games and Conflict Resolution in Logical Semantics (Ahti Pietarinen)
Negotiating Domain Ontologies in Distributed Organizational Memories (Ludger van Elst, Andreas Abecker)
The Reconciler: Supporting Actors in Meaning Negotiation (Marcello Sarini, Carla Simone)
Linguistic Based Matching of Local Ontologies (Bernardo Magnini, Luciano Serafini, Manuela Speranza)
A Tool for Mapping between Two Ontologies Using Explicit Information (Sushama Prasad, Yun Peng, Timothy Finin)
Logical Systems: Towards Protocols for Web-Based Meaning Negotiation (Jim Farrugia)
Evaluation Framework for Local Ontologies Interoperability (Paolo Avesani)
ConTeXtualized Local Ontologies Specification via CTXML (Paolo Bouquet, Antonia Dona, Luciano Serafini, Stefano Zanobini)
A Foundation for Strange Agent Negotiation (John Avery, John Yearwood)
Reconciling Ontological Differences for Intelligent Agents (Kendall Lister, Leon Sterling)


Readapting Multimodal Presentations to Heterogeneous User Groups

AAAI Conferences

This article exploits the possibilities of mixed presentation modes in a situation where both public and private display screens as well as public and private audio channels can be accessed by the users. This will allow the users to share information with a group, while still being able to receive individual information at the same time. Special strategies are identified that readapt an already running public presentation to the interests of late arriving users. Following these strategies, the generation of multimodal presentations for both public and private devices is described.
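
The abstract names readaptation strategies but not their mechanics, so here is only a minimal sketch of one plausible policy: route an item to the shared public display when enough of the present users are interested in it, otherwise to private devices, and re-run the routing when a user arrives late. All names and the threshold are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    topic: str
    interested_users: set  # users known to care about this topic

def route_presentation(items, present_users, public_share=0.5):
    """Illustrative policy: show an item on the public screen when at least
    `public_share` of the present users are interested in it; otherwise
    send it only to the interested users' private devices."""
    public, private = [], {u: [] for u in present_users}
    for item in items:
        audience = item.interested_users & present_users
        if len(audience) >= public_share * len(present_users):
            public.append(item)
        else:
            for user in audience:
                private[user].append(item)
    return public, private

def readapt_for_late_arrival(items, present_users, new_user):
    """When a user arrives mid-presentation, re-run the routing over the
    enlarged audience so the running public presentation is readapted."""
    return route_presentation(items, present_users | {new_user})
```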


A Flexible Architecture for a Multimodal Robot Control Interface

AAAI Conferences

Despite increased activity in robotics, relatively few advances have been made in the area of human-robot interaction. The most successful interfaces in the recent RoboCup Rescue competition were teleoperational interfaces. However, some believe that teams of robots under supervisory control may ultimately lead to better performance in real world operations. Such robots would be commanded with high-level commands rather than batch sequences of low-level commands. For humans to command teams of semi-autonomous robots in a dynamically changing environment, the human-robot interface will need to include several aspects of human-human communication. These aspects include cooperatively detecting and resolving problems, making use of conversational and situational context, maintaining context across multiple conversations, and using verbal and nonverbal information. This paper describes a demonstration system and dialogue architecture for the multimodal control of robots that is flexibly adaptable to accommodate capabilities and limitations on both PDA and kiosk environments.
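
The abstract contrasts high-level commands with batch sequences of low-level commands; the sketch below only illustrates that contrast, and every command verb and behavior name in it is hypothetical rather than taken from the described system.

```python
from dataclasses import dataclass

@dataclass
class HighLevelCommand:
    verb: str          # e.g. "search"
    target: str        # e.g. "room 104"
    constraints: dict  # e.g. {"avoid": "debris"}

def decompose(command):
    """A semi-autonomous robot expands one supervisory command into the kind
    of low-level behavior sequence a teleoperator would otherwise issue."""
    if command.verb == "search":
        return ["plan_path_to:" + command.target,
                "navigate_with_obstacle_avoidance",
                "scan_for_victims",
                "report_findings"]
    raise ValueError(f"unknown verb: {command.verb}")

# One utterance-level command from the human supervisor covers a whole task:
plan = decompose(HighLevelCommand("search", "room 104", {"avoid": "debris"}))
print(plan)
```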


WS02-08-006.pdf

AAAI Conferences

In order to produce coherent multimodal output, a presentation planner in a multimodal dialogue system must have a notion of the types of modalities currently present in the system. More specifically, the planner needs information about the multimodal properties and rendering capabilities of these modalities. It is therefore necessary to define an output multimodality model that, on the one hand, describes the available renderers in sufficient detail and, on the other hand, keeps a level of abstraction that enables the presentation planner to support a large set of different renderer types.
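
The abstract does not specify the structure of such a model, so the following is only a rough sketch of what an abstract renderer description might look like; the field names, enum values, and example registry are assumptions for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Modality(Enum):
    SPEECH = auto()
    TEXT = auto()
    GRAPHICS = auto()
    GESTURE = auto()

@dataclass
class RendererModel:
    """Abstract description of one output renderer: detailed enough for
    presentation planning, yet independent of the concrete rendering engine."""
    name: str
    modalities: set                 # set of Modality the renderer can produce
    is_public: bool                 # shared display/audio vs. personal device
    supports_streaming: bool = False
    properties: dict = field(default_factory=dict)  # e.g. screen size, voices

def renderers_for(plan_modality, available):
    """Let the planner pick every registered renderer that can realize a
    requested modality, without knowing renderer-specific details."""
    return [r for r in available if plan_modality in r.modalities]

# Example registry a presentation planner might consult (assumed values):
available = [
    RendererModel("kiosk-screen", {Modality.TEXT, Modality.GRAPHICS}, is_public=True),
    RendererModel("pda-tts", {Modality.SPEECH}, is_public=False, supports_streaming=True),
]
print([r.name for r in renderers_for(Modality.SPEECH, available)])
```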


Cyber Assist for Situated Human Support

AAAI Conferences

Current information-processing tools such as personal computers and the Internet are not always easy to use. Novice users often have to take a class to master them. The research theme of the Cyber Assist Research Center is the development of human-centered information-processing technologies that can provide situated information, the information I want here and now, through a "natural interface" (Nakashima and Hasida 2001). In other words, we are strengthening a variety of technologies that link the digital realm, represented by the Internet, to the people who live in the real world. The aim of the talk is to introduce our research plans together with our view of the future information-processing environment.


A Method for Human-Artifact Communication based on Active Affordance

AAAI Conferences

The development of computer technology has created artifacts with more complex and intelligent functions. Such artifacts need more sophisticated interfaces than primitive artifacts. In this paper, we discuss which characteristics are appropriate for interfaces with artifacts and propose the concept of active affordance. We describe an autonomous mobile chair that we built as a test bed for active affordance. We also describe an experiment performed with a real robot that shows the validity of our proposed method.


Triggering Memories of Conversations using Multimodal Classifiers

AAAI Conferences

Our personal conversation memory agent is a wearable 'experience collection' system that unobtrusively records the wearer's conversations, recognizes the face of the dialogue partner, and remembers his or her voice. When the system sees the same person's face or hears the same voice, it uses a summary of the last conversation with this person to remind the wearer. To correctly identify a person and help recall the earlier conversation, the system must be aware of the current situation, as analyzed from audio and video streams, and classify the situation by combining these modalities. Multimodal classifiers, however, are relatively unstable in uncontrolled real-world environments, and a simple linear interpolation of multiple classification judgments cannot effectively combine them. We propose a meta-classification strategy that uses a Support Vector Machine as a new combination method. Experimental results show that combining face recognition and speaker identification by meta-classification is dramatically more effective than a linear combination. This meta-classification approach is general enough to be applied to any situation-aware application that needs to combine multiple classifiers.
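
The abstract names the combination strategy but not a concrete pipeline; the sketch below shows the general idea of SVM meta-classification over the score vectors of two modality classifiers, using scikit-learn and synthetic stand-in scores. The feature layout, noise levels, and identifiers are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative sketch: each modality classifier is treated as a black box
# returning one confidence score per known identity; an SVM meta-classifier
# learns to combine both score vectors instead of interpolating them linearly.

rng = np.random.default_rng(0)
n_identities, n_samples = 5, 200

def synthetic_scores(true_ids, noise):
    """Stand-in for a real face or speaker classifier: noisy one-hot scores."""
    scores = rng.normal(0.0, noise, size=(len(true_ids), n_identities))
    scores[np.arange(len(true_ids)), true_ids] += 1.0
    return scores

true_ids = rng.integers(0, n_identities, size=n_samples)
face_scores = synthetic_scores(true_ids, noise=0.6)     # less reliable modality
speaker_scores = synthetic_scores(true_ids, noise=0.4)  # second modality

# Meta-features are simply the concatenated score vectors of both modalities.
X = np.hstack([face_scores, speaker_scores])

meta_clf = SVC(kernel="rbf")
meta_clf.fit(X[:150], true_ids[:150])
print("meta-classifier accuracy:", meta_clf.score(X[150:], true_ids[150:]))
```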