SPHERE: An Evaluation Card for Human-AI Systems

Ma, Qianou, Zhao, Dora, Zhao, Xinran, Si, Chenglei, Yang, Chenyang, Louie, Ryan, Reiter, Ehud, Yang, Diyi, Wu, Tongshuang

arXiv.org Artificial Intelligence

In the era of Large Language Models (LLMs), establishing effective evaluation methods and standards for diverse human-AI interaction systems is increasingly challenging. To encourage more transparent documentation and facilitate discussion of human-AI system evaluation design options, we present SPHERE, an evaluation card encompassing five key dimensions: 1) What is being evaluated? 2) How is the evaluation conducted? 3) Who is participating in the evaluation? 4) When is the evaluation conducted? 5) How is the evaluation validated? We review 39 human-AI systems using SPHERE, outlining current evaluation practices and areas for improvement, and we provide three recommendations for improving the validity and rigor of evaluation practices.
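
As a rough illustration of how a SPHERE card might be filled out for a concrete system, here is a minimal sketch in Python. The schema and example values are our assumptions; the paper defines the card's five dimensions, not a data format.

```python
from dataclasses import dataclass

@dataclass
class SphereCard:
    """One field per SPHERE dimension; field names are illustrative."""
    what: str        # What is being evaluated?
    how: str         # How is the evaluation conducted?
    who: str         # Who is participating in the evaluation?
    when: str        # When is the evaluation conducted?
    validation: str  # How is the evaluation validated?

card = SphereCard(
    what="factual accuracy of LLM-generated summaries",
    how="expert ratings on a 5-point scale against source documents",
    who="12 domain experts recruited from partner institutions",
    when="summative, after feature deployment",
    validation="inter-rater reliability (Krippendorff's alpha)",
)
```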


When Are Combinations of Humans and AI Useful?

Vaccaro, Michelle, Almaatouq, Abdullah, Malone, Thomas

arXiv.org Artificial Intelligence

People increasingly work with artificial intelligence (AI) tools in fields including medicine, finance, and law, as well as in daily activities such as traveling, shopping, and communicating. These human-AI systems have tremendous potential given the complementary nature of humans and AI: the general intelligence of humans allows us to reason about diverse problems, while the computational power of AI systems allows them to accomplish specific tasks that people find difficult. Indeed, a large body of work suggests that integrating human creativity, intuition, and contextual understanding with AI's speed, scalability, and analytical power can lead to innovative solutions and improved decision-making in areas such as healthcare [1], customer service [2], and scientific research [3]. On the other hand, a growing number of studies reveal that human-AI systems do not necessarily achieve better results than the best of humans or AI alone. Challenges such as communication barriers, trust issues, ethical concerns, and the need for effective coordination between humans and AI systems can hinder the collaborative process [4-9]. These seemingly contradictory results raise an important question: when do humans and AI complement each other?


Deconstructing Human-AI Collaboration: Agency, Interaction, and Adaptation

Holter, Steffen, El-Assady, Mennatallah

arXiv.org Artificial Intelligence

As full AI-based automation remains out of reach in most real-world applications, the focus has instead shifted to leveraging the strengths of both human and AI agents, creating effective collaborative systems. Rapid advances in this area have yielded increasingly complex systems and frameworks, while their characterization has grown increasingly vague. Similarly, existing conceptual models neither capture the elaborate processes of these systems nor describe the entire scope of their collaboration paradigms. In this paper, we propose a new unified set of dimensions through which to analyze and describe human-AI systems. Our conceptual model is centered around three high-level aspects - agency, interaction, and adaptation - and is developed through a multi-step process. First, an initial design space is proposed by surveying the literature and consolidating existing definitions and conceptual frameworks. Second, this model is iteratively refined and validated through semi-structured interviews with nine researchers in the field. Lastly, to illustrate the applicability of our design space, we use it to provide a structured description of selected human-AI systems.
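
For instance, a structured description of a system along the three high-level aspects might look like the sketch below. The concrete values and the free-text encoding are our assumptions; the paper refines each aspect into finer-grained dimensions.

```python
from dataclasses import dataclass

@dataclass
class HumanAISystemProfile:
    """Describes a human-AI system along the model's three high-level aspects."""
    agency: str       # who initiates and controls actions
    interaction: str  # how human and AI agents communicate
    adaptation: str   # how the system changes over time

profile = HumanAISystemProfile(
    agency="mixed-initiative: the AI proposes edits, the human approves or overrides",
    interaction="on-demand suggestions with natural-language rationales",
    adaptation="the AI re-ranks its suggestions based on accepted and rejected edits",
)
```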


An HCAI Methodological Framework: Putting It Into Action to Enable Human-Centered AI

Xu, Wei, Gao, Zaifeng, Dainoff, Marvin

arXiv.org Artificial Intelligence

Human-centered AI (HCAI), as a design philosophy, advocates prioritizing humans in designing, developing, and deploying intelligent systems, aiming to maximize the benefits of AI technology to humans and avoid its potential adverse effects. While HCAI has gained momentum, the lack of methodological guidance for its implementation makes its adoption challenging. After assessing the need for a methodological framework for HCAI, this paper proposes a comprehensive and interdisciplinary HCAI methodological framework comprising seven components: design goals, design principles, implementation approaches, design paradigms, interdisciplinary teams, methods, and processes. The implications of the framework are also discussed. This paper also presents a "three-layer" approach to facilitate the implementation of the framework. We believe the proposed framework is systematic and executable, and that it can overcome the weaknesses of current frameworks and the challenges currently faced in implementing HCAI. Thus, the framework can help put HCAI into action: developing, transferring, and implementing HCAI in practice, and ultimately enabling the design, development, and deployment of HCAI-based intelligent systems.


Applying HCAI in developing effective human-AI teaming: A perspective from human-AI joint cognitive systems

Xu, Wei, Gao, Zaifeng

arXiv.org Artificial Intelligence

Research and practice have adopted human-AI teaming (HAT) as a new paradigm for developing AI systems. HAT recognizes that AI will function as a teammate, rather than simply a tool, in collaboration with humans. Effective human-AI teams need to be capable of taking advantage of the unique abilities of both humans and AI while overcoming the known challenges and limitations of each member, augmenting human capabilities, and raising joint performance beyond that of either entity. The National AI Research and Strategic Plan 2023 update recognized that research programs focusing primarily on the independent performance of AI systems generally fail to consider the functionality that AI must provide within the context of dynamic, adaptive, and collaborative teams, and it calls for further research on human-AI teaming and collaboration. However, there has been debate about whether AI can work as a teammate with humans. The primary concern is that adopting the "teaming" paradigm contradicts the human-centered AI (HCAI) approach, resulting in humans losing control of AI systems. This article further analyzes the HAT paradigm and the debates. Specifically, we elaborate on our proposed conceptual framework of human-AI joint cognitive systems (HAIJCS) and apply it to represent HAT under the HCAI umbrella. We believe that HAIJCS may help adopt HAT while enabling HCAI. The implications and future work for HAIJCS are also discussed. Insights: AI has led to the emergence of a new form of human-machine relationship, human-AI teaming (HAT), a paradigmatic shift in human-AI systems. We must follow a human-centered AI (HCAI) approach when applying HAT as a new design paradigm. We propose the conceptual framework of human-AI joint cognitive systems (HAIJCS) to represent and implement HAT for developing effective human-AI teams.


Uncalibrated Models Can Improve Human-AI Collaboration

Vodrahalli, Kailas, Gerstenberg, Tobias, Zou, James

arXiv.org Artificial Intelligence

In many practical applications of AI, an AI model is used as a decision aid for human users. The AI provides advice that a human (sometimes) incorporates into their decision-making process. The AI advice is often presented with some measure of "confidence" that the human can use to calibrate how much they depend on or trust the advice. In this paper, we demonstrate that presenting AI models as more confident than they actually are, even when the original AI is well-calibrated, can improve human-AI performance (measured as the accuracy and confidence of the human's final prediction after seeing the AI advice). We first learn a model for how humans incorporate AI advice using data from thousands of human interactions. This enables us to explicitly estimate how to transform the AI's prediction confidence, making the AI uncalibrated, in order to improve the final human prediction. We empirically validate our results across four different tasks -- dealing with images, text and tabular data -- involving hundreds of human participants. We further support our findings with simulation analysis. Our findings suggest the importance of and a framework for jointly optimizing the human-AI system as opposed to the standard paradigm of optimizing the AI model alone.
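
A minimal sketch of the core idea, under strong simplifying assumptions: posit a model of how the displayed confidence changes a human's reliance on the advice, then search over monotone transforms of the calibrated confidence for the one that maximizes the predicted accuracy of the final human decision. The advice-taking model and the logit-scale transform family below are our illustrative assumptions; the paper instead learns its human model from thousands of real interactions.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def transform_confidence(c, a, b):
    """Monotone transform on the logit scale; a > 1 and b > 0 inflate the
    displayed confidence relative to the calibrated value c."""
    return 1.0 / (1.0 + np.exp(-(a * logit(c) + b)))

def simulated_human_accuracy(shown_conf, ai_correct, base_acc=0.65):
    """Toy advice-taking model (an assumption, not the paper's learned model):
    the human adopts the AI's answer with probability equal to the shown
    confidence, and otherwise answers alone with accuracy base_acc."""
    return shown_conf * ai_correct + (1.0 - shown_conf) * base_acc

# A calibrated AI: its confidence equals its probability of being correct.
rng = np.random.default_rng(0)
conf = rng.uniform(0.55, 0.95, size=10_000)
ai_correct = (rng.uniform(size=conf.size) < conf).astype(float)

# Grid-search the transform that maximizes expected final human accuracy.
best = max(
    ((a, b, simulated_human_accuracy(
        transform_confidence(conf, a, b), ai_correct).mean())
     for a in np.linspace(0.5, 3.0, 11)
     for b in np.linspace(-1.0, 2.0, 13)),
    key=lambda t: t[2],
)
print(f"best transform: a={best[0]:.2f}, b={best[1]:.2f}; "
      f"simulated human-AI accuracy = {best[2]:.3f}")
```

Under this toy model the search favors inflated confidence (larger a and b), because the simulated human otherwise underweights advice from an AI that is, on average, more accurate than they are; this qualitatively mirrors the paper's finding with real participants.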


Meaningful human control over AI systems: beyond talking the talk

Siebert, Luciano Cavalcante, Lupetti, Maria Luce, Aizenberg, Evgeni, Beckers, Niek, Zgonnikov, Arkady, Veluwenkamp, Herman, Abbink, David, Giaccardi, Elisa, Houben, Geert-Jan, Jonker, Catholijn M., Hoven, Jeroen van den, Forster, Deborah, Lagendijk, Reginald L.

arXiv.org Artificial Intelligence

The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility to humans (e.g., users, designers and developers, manufacturers, legislators). However, the relevant discussions around meaningful human control have so far not resulted in clear requirements for researchers, designers, and engineers. As a result, there is no consensus on how to assess whether a designed AI system is under meaningful human control, making the practical development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying four actionable properties that AI-based systems must have to be under meaningful human control. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, the responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties are necessary for AI systems to be under meaningful human control, and we provide possible directions for incorporating them into practice. We illustrate these properties with two use cases: automated vehicles and AI-based hiring. We believe these four properties will support practically minded professionals in taking concrete steps toward designing and engineering AI systems that facilitate meaningful human control and responsibility.


Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction

Chu, Eric, Roy, Deb, Andreas, Jacob

arXiv.org Artificial Intelligence

We present a randomized controlled trial for a model-in-the-loop regression task, with the goal of measuring the extent to which (1) good explanations of model predictions increase human accuracy, and (2) faulty explanations decrease human trust in the model. We study explanations based on visual saliency in an image-based age prediction task for which humans and learned models are individually capable but not highly proficient and frequently disagree. Our experimental design separates model quality from explanation quality, making it possible to compare treatments involving a variety of explanations of varying quality. We find that presenting model predictions improves human accuracy. However, visual explanations of various kinds fail to significantly alter human accuracy or trust in the model, regardless of whether the explanations characterize an accurate model, an inaccurate one, or are generated randomly and independently of the input image. These findings suggest the need for greater evaluation of explanations in downstream decision-making tasks, better design-based tools for presenting explanations to users, and better approaches for generating explanations.
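
For reference, the kind of visual saliency explanation studied here can be produced with a vanilla gradient map. The sketch below uses a pretrained torchvision classifier as a stand-in for the paper's age-prediction model; the model choice and preprocessing are our assumptions.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Stand-in model: a pretrained classifier (the paper uses an age regressor).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def gradient_saliency(image: Image.Image) -> torch.Tensor:
    """Vanilla gradient saliency: magnitude of the gradient of the top
    predicted score with respect to each input pixel."""
    x = preprocess(image).unsqueeze(0).requires_grad_(True)
    score = model(x).max()  # score of the top predicted class
    score.backward()
    return x.grad.abs().amax(dim=1).squeeze(0)  # max over color channels

# Usage: saliency = gradient_saliency(Image.open("face.jpg").convert("RGB"))
```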