Collaborating Authors

 Grosz, Barbara J.


A Century-Long Commitment to Assessing Artificial Intelligence and its Impact on Society

arXiv.org Artificial Intelligence

In September 2016, Stanford's "One Hundred Year Study on Artificial Intelligence" project (AI100) issued the first report of its planned long-term periodic assessment of artificial intelligence (AI) and its impact on society. The report, entitled "Artificial Intelligence and Life in 2030," examines eight domains of typical urban settings on which AI is likely to have impact over the coming years: transportation, home and service robots, healthcare, education, public safety and security, low-resource communities, employment and workplace, and entertainment. It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI and its potential and to help guide decisions in industry and governments, as well as to inform research and development in the field. This article by the chair of the 2016 Study Panel and the inaugural chair of the AI100 Standing Committee describes the origins of this ambitious longitudinal study, discusses the framing of the inaugural report, and presents the report's main findings. It concludes with a brief description of the AI100 project's ongoing efforts and planned next steps.


Influencing Flock Formation in Low-Density Settings

arXiv.org Artificial Intelligence

Flocking is a coordinated collective behavior that results from local sensing between individual agents that have a tendency to orient towards each other. Flocking is common among animal groups and might also be useful in robotic swarms. In the interest of learning how to control flocking behavior, recent work in the multiagent systems literature has explored the use of influencing agents for guiding flocking agents to face a target direction. The existing work in this domain has focused on simulation settings of small areas with toroidal shapes. In such settings, agent density is high, so interactions are common and flock formation occurs easily. In our work, we study new environments with lower agent density, wherein interactions are rarer. We study the efficacy of placement strategies and influencing-agent behaviors drawn from the literature, and find that behaviors shown to work well in high-density conditions tend to be much less effective in lower-density environments. The source of this ineffectiveness is that the influencing agents explored in prior work tended to face directions optimized for maximal influence, but which actually separate the influencing agents from the flock. We find that in low-density conditions maintaining a connection to the flock is more important than rushing to orient towards the desired direction. We use these insights to propose new influencing-agent behaviors, which we dub "follow-then-influence": agents act like normal members of the flock to achieve positions that allow for control and then exert their influence. This strategy overcomes the difficulties posed by low-density environments.
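The local-alignment dynamic described in this abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes a simple Vicsek-style update in which each agent adopts the circular mean heading of its neighbors within a sensing radius, and an influencing agent either faces the target direction immediately or, in a "follow-then-influence" mode, first behaves like an ordinary flock member. All names (`step`, `average_heading`, the agent dictionary fields) are hypothetical.

```python
import math

def average_heading(headings):
    """Circular mean of a list of headings (radians)."""
    sx = sum(math.cos(h) for h in headings)
    sy = sum(math.sin(h) for h in headings)
    return math.atan2(sy, sx)

def step(agents, radius, target, follow_mode):
    """One synchronous update of a Vicsek-style flock.

    agents: list of dicts with keys 'x', 'y', 'heading', 'influencer'.
    Ordinary agents align with the circular mean heading of all agents
    within `radius` (including themselves). Influencing agents face
    `target` directly, unless `follow_mode` is set, in which case they
    first align like ordinary members ("follow-then-influence").
    """
    new_headings = []
    for a in agents:
        neighbors = [b['heading'] for b in agents
                     if (b['x'] - a['x'])**2 + (b['y'] - a['y'])**2 <= radius**2]
        if a['influencer'] and not follow_mode:
            new_headings.append(target)          # face the goal immediately
        else:
            new_headings.append(average_heading(neighbors))
    # Apply headings simultaneously, then move one unit step.
    for a, h in zip(agents, new_headings):
        a['heading'] = h
        a['x'] += math.cos(h)
        a['y'] += math.sin(h)
```

In a low-density setting, the `radius` test rarely finds neighbors, which is why an influencer that snaps straight to `target` loses contact with the flock, while one run in `follow_mode` stays connected before exerting influence.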



AI Support of Teamwork for Coordinated Care of Children with Complex Conditions

AAAI Conferences

Children with complex health conditions require care from a large, diverse set of caregivers that includes parents and community support organizations as well as multiple types of medical professionals. Coordination of their care is essential for good outcomes, and extensive research has shown that the use of integrated, team-based care plans improves care coordination. Care plans, however, are rarely deployed in practice. This paper describes barriers to effective implementation of care plans in complex care revealed by a study of care providers treating such children. It draws on teamwork theories, identifying ways AI capabilities could enhance care plan use; describes the design of GoalKeeper, a system to support providers' use of care plans; and describes initial work toward information-sharing algorithms for such systems.


To Share or Not to Share? The Single Agent in a Team Decision Problem

AAAI Conferences

This paper defines the "Single Agent in a Team Decision" (SATD) problem. SATD differs from prior multi-agent communication problems in the assumptions it makes about teammates' knowledge of each other's plans and possible observations. The paper proposes a novel integrated logical-decision-theoretic approach to solving SATD problems, called MDP-PRT. Evaluation of MDP-PRT shows that it outperforms a previously proposed communication mechanism that did not consider the timing of communication and compares favorably with a coordinated Dec-POMDP solution that uses knowledge about all possible observations.


The Influence of Emotion Expression on Perceptions of Trustworthiness in Negotiation

AAAI Conferences

When interacting with computer agents, people make inferences about various characteristics of these agents, such as their reliability and trustworthiness. These perceptions are significant, as they influence people's behavior towards the agents, and may foster or inhibit repeated interactions between them. In this paper we investigate whether computer agents can use the expression of emotion to influence human perceptions of trustworthiness. In particular, we study human-computer interactions within the context of a negotiation game, in which players make alternating offers to decide how to divide a set of resources. A series of negotiation games between a human and several agents is then followed by a "trust game," in which people have to choose one of several agents to interact with, as well as how much of their resources they will entrust to it. Our results indicate that, among those agents that displayed emotion, those whose expression was in accord with their actions (strategy) during the negotiation game were generally preferred as partners in the trust game over those whose emotion expressions and actions did not mesh. Moreover, we observed that when emotion does not carry useful new information, it fails to strongly influence human decision-making behavior in a negotiation setting.


Whither AI: Identity Challenges of 1993-95

AI Magazine

The 1993-95 period presented various "identity challenges" to the field of AI and to AAAI as a leading scientific society for the field. The euphoric days of the mid-1980s AI boom were over, various expectations of those times had not been met, and there was continuing concern about an AI "winter." The major challenge of these years was to chart a path for AI, designed and endorsed by the broadest spectrum of AI researchers, that built on past progress, explained AI's capacity for addressing fundamentally important intellectual problems and realistically predicted its potential to contribute to technological challenges of the coming decade. This reflection piece considers these challenges and the ways in which AAAI helped the field to move forward. Adolescence, the twenties, and the forties each bring particular "developmental" challenges to people, and, though surely coincidentally, elements of those life stages seem also to characterize the period of my presidency.


Planning and Acting Together

AI Magazine

People often act together with a shared purpose; they collaborate. Collaboration enables them to work more efficiently and to complete activities they could not accomplish individually. An increasing number of computer applications also require collaboration among various systems and people. Thus, a major challenge for AI researchers is to determine how to construct computer systems that are able to act effectively as partners in collaborative activity. Collaborative activity entails participants forming commitments to achieve the goals of the group activity and requires group decision making and group planning procedures. In addition, agents must be committed to supporting the activities of their fellow participants in support of the group activity. Furthermore, when conflicts arise (for example, from resource bounds), participants must weigh their commitments to various group activities against those for individual activities. This article briefly reviews the major features of one model of collaborative planning called SHARED-PLANS (Grosz and Kraus 1999, 1996). It describes several current efforts to develop collaborative planning agents and systems for human-computer communication based on this model. Finally, it discusses empirical research aimed at determining effective commitment strategies in the SHARED-PLANS context.


Collaborative Systems (AAAI-94 Presidential Address)

AI Magazine

The construction of computer systems that are intelligent, collaborative problem-solving partners is an important goal for both the science of AI and its application. From the scientific perspective, the development of theories and mechanisms to enable building collaborative systems presents exciting research challenges across AI subfields. From the applications perspective, the capability to collaborate with users and other systems is essential if large-scale information systems of the future are to assist users in finding the information they need and solving the problems they have. In this address, it is argued that collaboration must be designed into systems from the start; it cannot be patched on. Key features of collaborative activity are described, the scientific base provided by recent AI research is discussed, and several of the research challenges posed by collaboration are presented. It is further argued that research on, and the development of, collaborative systems should itself be a collaborative endeavor -- within AI, across subfields of computer science, and with researchers in other fields.