Grosz, Barbara J.


Influencing Flock Formation in Low-Density Settings

arXiv.org Artificial Intelligence

Flocking is a coordinated collective behavior that results from local sensing between individual agents that have a tendency to orient towards each other. Flocking is common among animal groups and might also be useful in robotic swarms. In the interest of learning how to control flocking behavior, recent work in the multiagent systems literature has explored the use of influencing agents for guiding flocking agents to face a target direction. Existing work in this domain has focused on simulations of small, toroidal environments. In such settings, agent density is high, so interactions are common and flock formation occurs easily. In our work, we study new environments with lower agent density, in which interactions are rarer. We study the efficacy of placement strategies and influencing-agent behaviors drawn from the literature, and find that behaviors shown to work well in high-density conditions tend to be much less effective in low-density environments. The source of this ineffectiveness is that the influencing agents explored in prior work face directions optimized for maximal influence, which in sparse settings actually separates them from the flock. We find that in low-density conditions, maintaining a connection to the flock is more important than rushing to orient towards the desired direction. We use these insights to propose new influencing-agent behaviors, which we dub "follow-then-influence": agents act like normal members of the flock to achieve positions that allow for control, and only then exert their influence. This strategy overcomes the difficulties posed by low-density environments.
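
The abstract does not specify the paper's simulation model, so the sketch below uses a standard Vicsek-style alignment rule as a stand-in, with one influencing agent following the "follow-then-influence" idea: align with neighbors until embedded in the flock, then turn toward the target direction. The embeddedness threshold, world size, and all other parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Vicsek-style alignment in a large open (non-toroidal) area, so agent
# density is low and encounters are rare -- the regime studied here.
N, RADIUS, SPEED, STEPS = 50, 5.0, 0.3, 500
WORLD = 100.0        # side length of the open area (assumed)
TARGET = 0.0         # desired flock heading, in radians (assumed)

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, WORLD, size=(N, 2))
heading = rng.uniform(-np.pi, np.pi, size=N)
INFLUENCER = 0       # index of the single influencing agent

def neighbors(i):
    """Indices of agents within sensing RADIUS of agent i (incl. i)."""
    return np.flatnonzero(np.linalg.norm(pos - pos[i], axis=1) < RADIUS)

for _ in range(STEPS):
    new_heading = heading.copy()
    for i in range(N):
        nbrs = neighbors(i)
        # Circular mean of neighboring headings.
        mean_dir = np.arctan2(np.sin(heading[nbrs]).mean(),
                              np.cos(heading[nbrs]).mean())
        if i == INFLUENCER and len(nbrs) > 3:   # hypothetical threshold
            # Follow-then-influence: only steer toward TARGET once
            # enough flockmates are in sensing range.
            new_heading[i] = TARGET
        else:
            new_heading[i] = mean_dir
    heading = new_heading
    pos += SPEED * np.column_stack((np.cos(heading), np.sin(heading)))
```

The baselines from prior work correspond to setting the influencer's heading to TARGET unconditionally; in a sparse world that agent tends to walk away from its neighbors and lose all influence, which is the failure mode described above.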



MIP-Nets: Enabling Information Sharing in Loosely-Coupled Teamwork

AAAI Conferences

People collaborate in carrying out such complex activities as treating patients, co-authoring documents, and developing software. While technologies such as Dropbox and GitHub enable groups to work in a distributed manner, coordinating team members' individual activities poses significant challenges. In this paper, we formalize the problem of "information sharing in loosely-coupled extended-duration teamwork." We develop a new representation, Mutual Influence Potential Networks (MIP-Nets), to model collaboration patterns and dependencies among activities, and an algorithm, MIP-DOI, that uses this representation to reason about information sharing.
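
The abstract names the representation and algorithm but gives no internals, so the following is a hypothetical sketch of the general idea only: mutual-influence potential modeled as weighted edges between collaborators, used to score who should be told about a change. The class, method names, and scoring rule are invented for illustration; they are not the paper's MIP-Net or MIP-DOI definitions.

```python
from collections import defaultdict

class InfluenceNetwork:
    """Toy influence network over collaborators (illustrative only)."""

    def __init__(self):
        self.weight = defaultdict(float)   # (a, b) -> influence potential

    def record_interaction(self, a, b, strength=1.0):
        """Strengthen the mutual-influence estimate when a's activity
        touches work that b also depends on."""
        self.weight[(a, b)] += strength
        self.weight[(b, a)] += strength

    def should_share(self, actor, change_importance, threshold=1.0):
        """Teammates whose estimated interest in actor's change exceeds
        a threshold -- a stand-in for degree-of-interest scoring."""
        return [b for (a, b), w in self.weight.items()
                if a == actor and w * change_importance > threshold]

net = InfluenceNetwork()
net.record_interaction("alice", "bob", 2.0)
net.record_interaction("alice", "carol", 0.3)
print(net.should_share("alice", change_importance=1.0))  # ['bob']
```

Here `should_share` plays the role of a degree-of-interest test: information flows only to teammates whose estimated dependence on the actor's work is strong enough.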


AI Support of Teamwork for Coordinated Care of Children with Complex Conditions

AAAI Conferences

Children with complex health conditions require care from a large, diverse set of caregivers that includes parents and community support organizations as well as multiple types of medical professionals. Coordination of their care is essential for good outcomes, and extensive research has shown that the use of integrated, team-based care plans improves care coordination. Care plans, however, are rarely deployed in practice. This paper describes barriers to effective implementation of care plans in complex care revealed by a study of care providers treating such children. It draws on teamwork theories to identify ways AI capabilities could enhance care plan use; describes the design of GoalKeeper, a system to support providers' use of care plans; and describes initial work toward information-sharing algorithms for such systems.


The Influence of Emotion Expression on Perceptions of Trustworthiness in Negotiation

AAAI Conferences

When interacting with computer agents, people make inferences about various characteristics of these agents, such as their reliability and trustworthiness. These perceptions are significant, as they influence people's behavior towards the agents and may foster or inhibit repeated interactions between them. In this paper we investigate whether computer agents can use the expression of emotion to influence human perceptions of trustworthiness. In particular, we study human-computer interactions within the context of a negotiation game, in which players make alternating offers to decide how to divide a set of resources. A series of negotiation games between a human and several agents is then followed by a "trust game," in which people choose one of several agents to interact with, as well as how much of their resources to entrust to it. Our results indicate that, among the agents that displayed emotion, those whose expressions accorded with their actions (strategy) during the negotiation game were generally preferred as partners in the trust game over those whose emotion expressions and actions did not mesh. Moreover, we observed that when emotion does not carry useful new information, it fails to strongly influence human decision-making behavior in a negotiation setting.
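
For readers unfamiliar with the protocol, here is a minimal sketch of an alternating-offers game of the kind the study uses. The actual resource pool, round limits, payoffs, agent strategies, and emotion displays are not given in the abstract, so all values below are placeholders.

```python
# Minimal alternating-offers negotiation sketch (illustrative only).
TOTAL = 10  # units of resource to divide (assumed)

def negotiate(propose_a, propose_b, accept_a, accept_b, max_rounds=6):
    """Players alternate proposing a split (the proposer's own share);
    the responder either accepts or the turn passes to them."""
    turns = [(propose_a, accept_b), (propose_b, accept_a)]
    for r in range(max_rounds):
        propose, accept = turns[r % 2]
        share = propose(r)                 # proposer's demanded share
        if accept(share, r):
            # Return (a's share, b's share) depending on whose offer won.
            return (share, TOTAL - share) if r % 2 == 0 \
                else (TOTAL - share, share)
    return (0, 0)                          # impasse: nobody gets anything

# Example: a stubborn proposer against responders who accept any offer
# leaving them at least 4 units.
result = negotiate(
    propose_a=lambda r: 7,
    propose_b=lambda r: 5,
    accept_a=lambda share, r: TOTAL - share >= 4,
    accept_b=lambda share, r: TOTAL - share >= 4,
)
print(result)  # (5, 5): a rejects nothing at round 1's even split
```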


Whither AI: Identity Challenges of 1993-95

AI Magazine

The 1993-95 period presented various "identity challenges" to the field of AI and to AAAI as a leading scientific society for the field. The euphoric days of the mid-1980s AI boom were over, various expectations of those times had not been met, and there was continuing concern about an AI "winter." The major challenge of these years was to chart a path for AI, designed and endorsed by the broadest spectrum of AI researchers, that built on past progress, explained AI's capacity for addressing fundamentally important intellectual problems, and realistically predicted its potential to contribute to the technological challenges of the coming decade. This reflection piece considers these challenges and the ways in which AAAI helped the field move forward.


Planning and Acting Together

AI Magazine

People often act together with a shared purpose; they collaborate. Collaboration enables them to work more efficiently and to complete activities they could not accomplish individually. An increasing number of computer applications also require collaboration among various systems and people. Thus, a major challenge for AI researchers is to determine how to construct computer systems that are able to act effectively as partners in collaborative activity. Collaborative activity entails participants forming commitments to achieve the goals of the group activity and requires group decision-making and group planning procedures. In addition, agents must be committed to supporting the activities of their fellow participants insofar as they contribute to the group activity. Furthermore, when conflicts arise (for example, from resource bounds), participants must weigh their commitments to various group activities against those to individual activities. This article briefly reviews the major features of one model of collaborative planning, called SHARED-PLANS (Grosz and Kraus 1999, 1996). It describes several current efforts to develop collaborative planning agents and systems for human-computer communication based on this model. Finally, it discusses empirical research aimed at determining effective commitment strategies in the SHARED-PLANS context.
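
SHARED-PLANS is a formal model, not code, but the commitment-weighing step mentioned above can be illustrated with a toy sketch: when two commitments compete for the same scarce resource, the agent keeps the one it values more. All names and the value-comparison rule here are illustrative assumptions, not the model's actual machinery.

```python
from dataclasses import dataclass, field

@dataclass
class Commitment:
    """Toy commitment record (not the formal SHARED-PLANS construct)."""
    goal: str
    value: float        # importance the agent assigns to the goal
    group: bool         # True for a group activity, False for individual
    resources: set = field(default_factory=set)

def resolve_conflict(commitments, scarce_resource):
    """Among commitments competing for one resource, keep the most
    valuable one and report which commitments must be dropped."""
    rivals = [c for c in commitments if scarce_resource in c.resources]
    rivals.sort(key=lambda c: c.value, reverse=True)
    return rivals[0], rivals[1:]          # (kept, dropped)

kept, dropped = resolve_conflict(
    [Commitment("co-author paper", 0.9, group=True, resources={"tuesday"}),
     Commitment("solo experiment", 0.6, group=False, resources={"tuesday"})],
    scarce_resource="tuesday",
)
print(kept.goal, [c.goal for c in dropped])
```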


Collaborative Systems (AAAI-94 Presidential Address)

AI Magazine

The construction of computer systems that are intelligent, collaborative problem-solving partners is an important goal for both the science of AI and its application. From the scientific perspective, the development of theories and mechanisms to enable building collaborative systems presents exciting research challenges across AI subfields. From the applications perspective, the capability to collaborate with users and other systems is essential if large-scale information systems of the future are to assist users in finding the information they need and solving the problems they have. In this address, it is argued that collaboration must be designed into systems from the start; it cannot be patched on. Key features of collaborative activity are described, the scientific base provided by recent AI research is discussed, and several of the research challenges posed by collaboration are presented. It is further argued that research on, and the development of, collaborative systems should itself be a collaborative endeavor -- within AI, across subfields of computer science, and with researchers in other fields.


Member's Forum

AI Magazine

The AAAI conference's conservative style, a plan to revitalize the AAAI conference, and attendance at the AAAI business meeting.