Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review

Pang, Rock Yuren, Schroeder, Hope, Smith, Kynnedy Simone, Barocas, Solon, Xiao, Ziang, Tseng, Emily, Bragg, Danielle

arXiv.org Artificial Intelligence

Large language models (LLMs) have been positioned to revolutionize HCI, by reshaping not only the interfaces, design patterns, and sociotechnical systems that we study, but also the research practices we use. To date, however, there has been little understanding of LLMs' uptake in HCI. We address this gap via a systematic literature review of 153 CHI papers from 2020-24 that engage with LLMs. We taxonomize: (1) domains where LLMs are applied; (2) roles of LLMs in HCI projects; (3) contribution types; and (4) acknowledged limitations and risks. We find LLM work in 10 diverse domains, primarily via empirical and artifact contributions. Authors use LLMs in five distinct roles, including as research tools or simulated users. Still, authors often raise validity and reproducibility concerns, and overwhelmingly study closed models. We outline opportunities to improve HCI research with and on LLMs, and provide guiding questions for researchers to consider the validity and appropriateness of LLM-related work.


Human-Computer Interaction and Human-AI Collaboration in Advanced Air Mobility: A Comprehensive Review

Sagirli, Fatma Yamac, Zhao, Xiaopeng, Wang, Zhenbo

arXiv.org Artificial Intelligence

The increasing rates of global urbanization and vehicle usage are leading to a shift of mobility to the third dimension through Advanced Air Mobility (AAM), offering a promising solution for faster, safer, cleaner, and more efficient transportation. As air transportation continues to evolve with more automated and autonomous systems, advancements in AAM require a deep understanding of human-computer interaction and human-AI collaboration to ensure safe and effective operations in complex urban and regional environments. There has been a significant increase in publications regarding these emerging applications; thus, there is a need to review developments in this area. This paper comprehensively reviews the current state of research on human-computer interaction and human-AI collaboration in AAM. Specifically, we focus on AAM applications related to the design of human-machine interfaces for various uses, including pilot training, air traffic management, and the integration of AI-assisted decision-making systems with immersive technologies such as extended, virtual, mixed, and augmented reality devices. Additionally, we provide a comprehensive analysis of the challenges AAM encounters in integrating human-computer frameworks, including unique challenges associated with these interactions, such as trust in AI systems and safety concerns. Finally, we highlight emerging opportunities and propose future research directions to bridge the gap between human factors and technological advancements in AAM.


The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success

Valdez, André Calero, Heine, Moreen, Franke, Thomas, Jochems, Nicole, Jetter, Hans-Christian, Schrills, Tim

arXiv.org Artificial Intelligence

The evolution of AI is set to profoundly reshape the future. The European Union, recognizing this impending prominence, has enacted the AI Act, regulating market access for AI-based systems. A salient feature of the Act is to guard democratic and humanistic values by focusing regulation on transparency, explainability, and the human ability to understand and control AI systems. In this way, the EU AI Act does not merely specify technological requirements for AI systems. The EU issues a democratic call for human-centered AI systems and, in turn, an interdisciplinary research agenda for human-centered innovation in AI development. Without robust methods to assess AI systems and their effect on individuals and society, the EU AI Act may repeat the mistakes of the EU's General Data Protection Regulation, leading to rushed, chaotic, ad-hoc, and ambiguous implementation that causes more confusion than it lends guidance. Moreover, determined research activities in human-AI interaction will be pivotal for both regulatory compliance and the advancement of AI in a manner that is both ethical and effective. Such an approach will ensure that AI development aligns with human values and needs, fostering a technology landscape that is innovative, responsible, and an integral part of our society.


Charting Ethical Tensions in Multispecies Technology Research through Beneficiary-Epistemology Space

Benford, Steve, Mancini, Clara, Chamberlain, Alan, Schneiders, Eike, Castle-Green, Simon, Fischer, Joel, Kucukyilmaz, Ayse, Salimbeni, Guido, Ngo, Victor, Barnard, Pepita, Adams, Matt, Tandavanitj, Nick, Farr, Ju Row

arXiv.org Artificial Intelligence

While ethical challenges are widely discussed in HCI, far less is reported about the ethical processes that researchers routinely navigate. We reflect on a multispecies project that negotiated an especially complex ethical approval process. Cat Royale was an artist-led exploration of creating an artwork to engage audiences in exploring trust in autonomous systems. The artwork took the form of a robot that played with three cats. Gaining ethical approval required an extensive dialogue with three Institutional Review Boards (IRBs) covering computer science, veterinary science, and animal welfare, raising tensions around the welfare of the cats, perceived benefits and appropriate methods, and reputational risk to the University. To reveal these tensions, we introduce the beneficiary-epistemology space, which makes explicit who benefits from research (humans or animals) and the underlying epistemologies. Positioning projects and IRBs in this space can help clarify tensions and highlight opportunities to recruit additional expertise.


Mapping the Challenges of HCI: An Application and Evaluation of ChatGPT and GPT-4 for Mining Insights at Scale

Oppenlaender, Jonas, Hämäläinen, Joonas

arXiv.org Artificial Intelligence

Large language models (LLMs), such as ChatGPT and GPT-4, are gaining widespread real-world use. Yet, these LLMs are closed source, and little is known about their performance in real-world use cases. In this paper, we apply and evaluate the combination of ChatGPT and GPT-4 for the real-world task of mining insights from a text corpus in order to identify research challenges in the field of HCI. We extract 4,392 research challenges in over 100 topics from the 2023 CHI conference proceedings and visualize the research challenges for interactive exploration. We critically evaluate the LLMs on this practical task and conclude that the combination of ChatGPT and GPT-4 makes an excellent cost-efficient means for analyzing a text corpus at scale. Cost-efficiency is key for flexibly prototyping research ideas and analyzing text corpora from different perspectives, with implications for applying LLMs for mining insights in academia and practice.
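The corpus-mining setup the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the function and prompt names are hypothetical, and the model call is stubbed out where a real pipeline would call ChatGPT or GPT-4.

```python
from typing import Callable, List

# Illustrative prompt asking the model to list research challenges
# found in an excerpt of a paper.
PROMPT = (
    "Identify the research challenges mentioned in the following "
    "excerpt from a CHI paper. Return one challenge per line.\n\n{text}"
)

def chunk(text: str, max_chars: int = 2000) -> List[str]:
    """Split a paper into prompt-sized chunks on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current)
    return chunks

def mine_challenges(papers: List[str], ask_llm: Callable[[str], str]) -> List[str]:
    """Pool challenge lines extracted from every chunk of every paper."""
    challenges = []
    for paper in papers:
        for part in chunk(paper):
            reply = ask_llm(PROMPT.format(text=part))
            challenges.extend(line.strip() for line in reply.splitlines() if line.strip())
    return challenges

# Stubbed model call standing in for a real ChatGPT/GPT-4 request:
fake_llm = lambda prompt: "Scaling qualitative coding\nEvaluating closed models"
print(mine_challenges(["Some paper text."], fake_llm))
# → ['Scaling qualitative coding', 'Evaluating closed models']
```

In practice the extracted lines would then be deduplicated and clustered into topics, which is where a second, stronger model (GPT-4 in the paper's setup) can help.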


UWB Based Static Gesture Classification

Sebastian, Abhishek

arXiv.org Artificial Intelligence

Our paper presents a robust framework for UWB-based static gesture recognition, leveraging proprietary UWB radar sensor technology. Extensive data collection efforts were undertaken to compile datasets containing five commonly used gestures. Our approach involves a comprehensive data pre-processing pipeline that encompasses outlier handling, aspect ratio-preserving resizing, and false-color image transformation. Both CNN and MobileNet models were trained on the processed images. Remarkably, our best-performing model achieved an accuracy of 96.78%. Additionally, we developed a user-friendly GUI framework to assess the model's system resource usage and processing times, which revealed low memory utilization and real-time task completion in under one second. This research marks a significant step towards enhancing static gesture recognition using UWB technology, promising practical applications in various domains.
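The preprocessing steps named in the abstract (outlier handling, aspect-ratio-preserving resizing, false-color transformation) can be sketched as below. All thresholds, the padding strategy, and the color mapping are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

def clip_outliers(frame: np.ndarray, lo: float = 1.0, hi: float = 99.0) -> np.ndarray:
    """Clamp radar amplitudes to percentile bounds to suppress outliers."""
    low, high = np.percentile(frame, [lo, hi])
    return np.clip(frame, low, high)

def pad_to_square(frame: np.ndarray) -> np.ndarray:
    """Preserve aspect ratio by zero-padding the frame to a square."""
    h, w = frame.shape
    size = max(h, w)
    out = np.zeros((size, size), dtype=frame.dtype)
    out[:h, :w] = frame
    return out

def false_color(frame: np.ndarray) -> np.ndarray:
    """Map normalized amplitude to a simple 3-channel false-color image."""
    f = (frame - frame.min()) / (np.ptp(frame) + 1e-8)
    return np.stack([f, 1.0 - f, f ** 2], axis=-1)  # R, G, B channels

# Example: a 32x48 raw radar frame becomes a 48x48x3 false-color image,
# ready for a CNN or MobileNet classifier.
raw = np.random.rand(32, 48)
img = false_color(pad_to_square(clip_outliers(raw)))
print(img.shape)  # → (48, 48, 3)
```

Zero-padding before resizing is one common way to keep the aspect ratio intact; interpolation to the network's input size would follow in a full pipeline.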


The Design Space of Generative Models

Morris, Meredith Ringel, Cai, Carrie J., Holbrook, Jess, Kulkarni, Chinmay, Terry, Michael

arXiv.org Artificial Intelligence

Card et al.'s classic paper "The Design Space of Input Devices" [4] established the value of design spaces as a tool for HCI analysis and invention. We posit that developing design spaces for emerging pre-trained, generative AI models is necessary for supporting their integration into human-centered systems and practices. We explore what it means to develop an AI model design space by proposing two design spaces relating to generative AI models: the first considers how HCI can impact generative models (i.e., interfaces for models) and the second considers how generative models can impact HCI (i.e., models as an HCI prototyping material).


The Many Shapes of a Computer Science Career

Communications of the ACM

When you apply for a career in tech, it often means having to decide: am I a product manager? Roles like these are all clearly defined, with their own sets of mostly nonoverlapping skills, to help companies hire the right talent. But that's not the way we define ourselves. Most of us have a variety of skills that don't all neatly fall into one box. We collect skills across multiple dimensions over a lifetime of different experiences.


Navigating Incommensurability Between Ethnomethodology, Conversation Analysis, and AI

#artificialintelligence

Like many research communities, ethnomethodologists and conversation analysts like me have begun to get caught up -- yet again -- in the pervasive spectacle of surging interest in Artificial Intelligence (AI). Inspired by discussions amongst a growing network of researchers in the ethnomethodology (EM) and conversation analysis (CA) traditions who nurse such interests, I started thinking about what EM and the more EM end of conversation analysis might be doing about, for, or even with, fields of AI research. So, this piece is about the disciplinary and conceptual questions that might be encountered and that -- in my view -- may need addressing for engagements with AI research and its affiliates. Although I'm mostly concerned with things to be aware of as well as outright dangers, later on we can think about some opportunities. And throughout I will keep using 'we' to talk about EM&CA researchers; but this really is for convenience only -- I don't wish to ventriloquise for our complex research communities. All of the following should be read as emanating from my particular research history, standpoint, etc., and treated (hopefully) as an invitation for further discussion amongst EM and CA researchers turning to technology and AI specifically. Why do I feel the need for some caution? Well, we have been here before, most recently in the 1990s with works by Button et al. (1995) and Luff et al. (1990). We need to assess what happened then, and how this relates to now.