Lee, Min Kyung
GuideLLM: Exploring LLM-Guided Conversation with Applications in Autobiography Interviewing
Duan, Jinhao, Zhao, Xinyu, Zhang, Zhuoxuan, Ko, Eunhye, Boddy, Lily, Wang, Chenan, Li, Tianhao, Rasgon, Alexander, Hong, Junyuan, Lee, Min Kyung, Yuan, Chenxi, Long, Qi, Ding, Ying, Chen, Tianlong, Xu, Kaidi
Although Large Language Models (LLMs) succeed in human-guided conversations such as instruction following and question answering, the potential of LLM-guided conversations, where LLMs direct the discourse and steer the conversation's objectives, remains under-explored. In this study, we first characterize LLM-guided conversation through three fundamental components: (i) Goal Navigation; (ii) Context Management; and (iii) Empathetic Engagement, and propose GuideLLM as an instantiation. We then implement an interviewing environment for evaluating LLM-guided conversation. This environment covers a variety of topics for comprehensive interviewing evaluation, yielding around 1.4k turns of utterances, 184k tokens, and over 200 events mentioned during interviewing for each chatbot evaluation. We compare GuideLLM with six state-of-the-art LLMs, such as GPT-4o and Llama-3-70b-Instruct, in terms of interviewing quality and autobiography generation quality. For automatic evaluation, we derive user proxies from multiple autobiographies and employ LLM-as-a-judge to score LLM behaviors. We further conduct a human study in which 45 participants chat with GuideLLM and the baselines, and we collect their feedback, preferences, and ratings on conversation and autobiography quality. Experimental results indicate that GuideLLM significantly outperforms baseline LLMs in automatic evaluation and achieves consistently leading performance in human ratings.
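As an illustration of the three components named in this abstract, the following is a minimal, hypothetical sketch of one turn of an LLM-guided interview and of an LLM-as-a-judge scoring call. It is not the authors' implementation; the helper call_llm, the prompts, the 10-turn context window, and the data structures are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class InterviewState:
    goals: list[str]                                     # autobiography topics still to cover
    history: list[dict] = field(default_factory=list)    # prior turns

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

def guided_turn(state: InterviewState, user_utterance: str) -> str:
    state.history.append({"role": "user", "content": user_utterance})
    # (i) Goal navigation: steer toward the next uncovered topic.
    next_goal = state.goals[0] if state.goals else "wrap up the interview"
    # (ii) Context management: keep only the most recent turns in the prompt.
    recent = state.history[-10:]
    context = "\n".join(f'{t["role"]}: {t["content"]}' for t in recent)
    # (iii) Empathetic engagement: acknowledge the user before asking the next
    # goal-directed question.
    prompt = (
        "You are an autobiography interviewer. Acknowledge the user's last "
        f"message empathetically, then ask one question about: {next_goal}.\n"
        f"Conversation so far:\n{context}"
    )
    reply = call_llm(prompt)
    state.history.append({"role": "assistant", "content": reply})
    return reply

def judge_turn(question: str, answer: str) -> float:
    """LLM-as-a-judge: score one interviewer turn on a 1-5 scale."""
    score = call_llm(
        "Rate 1-5 how well the interviewer question below pursues its goal, "
        f"manages context, and shows empathy.\nQ: {question}\nA: {answer}\n"
        "Reply with a single number."
    )
    return float(score.strip())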
The AI-DEC: A Card-based Design Method for User-centered AI Explanations
Lee, Christine P, Lee, Min Kyung, Mutlu, Bilge
Increasing evidence suggests that many deployed AI systems do not sufficiently support end-user interaction and information needs. Engaging end-users in the design of these systems can reveal user needs and expectations, yet effective ways of engaging end-users in the design of AI explanations remain under-explored. To address this gap, we developed a design method, called AI-DEC, that defines four dimensions of AI explanations critical for the integration of AI systems (communication content, modality, frequency, and direction) and offers design examples for end-users to design AI explanations that meet their needs. We evaluated this method through co-design sessions with workers in the healthcare, finance, and management industries who regularly use AI systems in their daily work. Findings indicate that the AI-DEC effectively supported workers in designing explanations that accommodated diverse levels of performance and autonomy needs, which varied depending on the AI system's workplace role and worker values. We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI
Liu, Houjiang, Das, Anubrata, Boltz, Alexander, Zhou, Didi, Pinaroc, Daisy, Lease, Matthew, Lee, Min Kyung
A key challenge in professional fact-checking is its limited scalability in relation to the magnitude of false information. While many Natural Language Processing (NLP) tools have been proposed to enhance fact-checking efficiency and scalability, both academic research and fact-checking organizations report limited adoption of such tooling due to insufficient alignment with fact-checker practices, values, and needs. To address this gap, we investigate a co-design method, Matchmaking for AI, which brings fact-checkers, designers, and NLP researchers together to collaboratively discover which fact-checker needs should be addressed by technology and how. Our co-design sessions with 22 professional fact-checkers yielded a set of 11 novel design ideas. These ideas assist with information searching, processing, and writing tasks for efficient and personalized fact-checking; help fact-checkers proactively prepare for future misinformation; monitor their potential biases; and support internal organizational collaboration. Our work offers implications for human-centered fact-checking research and practice, as well as for AI co-design research.
Learning Complementary Policies for Human-AI Teams
Gao, Ruijiang, Saar-Tsechansky, Maytal, De-Arteaga, Maria, Han, Ligong, Sun, Wei, Lee, Min Kyung, Lease, Matthew
Human-AI complementarity is important when neither the algorithm nor the human yields dominant performance across all instances in a given context. Recent work on human-AI collaboration has considered decisions that correspond to classification tasks. However, in many important contexts where humans can benefit from AI complementarity, humans undertake courses of action. In this paper, we propose a framework for a novel form of human-AI collaboration for selecting advantageous courses of action, which we refer to as Learning Complementary Policies for Human-AI teams (LCP-HAI). Our solution exploits human-AI complementarity to maximize decision rewards by learning both an algorithmic policy that aims to complement humans and a routing model that defers each decision to either a human or the AI to leverage the resulting complementarity. We then extend our approach to leverage opportunities and mitigate risks that arise in important practical contexts: 1) when a team is composed of multiple humans with differential and potentially complementary abilities, 2) when the observational data include consistent deterministic actions, and 3) when the covariate distribution of future decisions differs from that of the historical data. We demonstrate the effectiveness of the proposed methods using real human responses and semi-synthetic data, and find that they offer reliable and advantageous performance across settings, superior to either the human or the algorithm making decisions alone. We also find that the proposed extensions effectively improve the robustness of human-AI collaboration across these challenging settings.
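The routing idea described in this abstract can be illustrated with a small, hypothetical sketch: an algorithmic policy and a per-instance router that defers to whichever decision-maker is expected to earn the higher reward. This is not the authors' code; the model classes, reward estimates, and labels are illustrative assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

class ComplementaryPolicy:
    def __init__(self):
        # Estimate the expected reward of the algorithm's action and of the
        # human's action for each instance.
        self.algo_reward = GradientBoostingRegressor()
        self.human_reward = GradientBoostingRegressor()
        self.router = LogisticRegression()

    def fit(self, X, algo_rewards, human_rewards):
        self.algo_reward.fit(X, algo_rewards)
        self.human_reward.fit(X, human_rewards)
        # Train the router to defer to the human (label 1) on instances where
        # the human's observed reward exceeds the algorithm's.
        self.router.fit(X, (human_rewards > algo_rewards).astype(int))

    def route(self, X):
        """Return 'human' or 'algorithm' for each instance."""
        defer = self.router.predict(X)
        return np.where(defer == 1, "human", "algorithm")

The design choice the sketch highlights is that complementarity comes from the router, not from the policy alone: decisions are sent to the human only where the human is expected to outperform the algorithm.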
Believable Robot Characters
Simmons, Reid (Carnegie Mellon University) | Makatchev, Maxim (Carnegie Mellon University) | Kirby, Rachel (Carnegie Mellon University) | Lee, Min Kyung (Carnegie Mellon University) | Fanaswala, Imran (Carnegie Mellon University in Qatar) | Browning, Brett (Carnegie Mellon University) | Forlizzi, Jodi (Carnegie Mellon University) | Sakr, Majd (Carnegie Mellon University in Qatar)
Believability of characters has been an objective in literature, theater, film, and animation. We argue that believable robot characters are important in human-robot interaction, as well. In particular, we contend that believable characters evoke users’ social responses that, for some tasks, lead to more natural interactions and are associated with improved task performance. In a dialogue-capable robot, a key to such believability is the integration of a consistent storyline, verbal and nonverbal behaviors, and sociocultural context. We describe our work in this area and present empirical results from three robot receptionist testbeds that operate "in the wild."