Human-Centered AI


Human-Centered AI in Multidisciplinary Medical Discussions: Evaluating the Feasibility of a Chat-Based Approach to Case Assessment

Sawano, Shinnosuke, Kodera, Satoshi

arXiv.org Artificial Intelligence

In this study, we investigate the feasibility of using a human-centered artificial intelligence (AI) chat platform where medical specialists collaboratively assess complex cases. As the target population for this platform, we focus on patients with cardiovascular diseases who are in a state of multimorbidity, that is, suffering from multiple chronic conditions. We evaluate simulated cases with multiple diseases using a chat application, collaborating with physicians to assess feasibility, efficiency gains through AI utilization, and the quantification of discussion content. We constructed simulated cases based on past case reports, medical error reports, and complex cases of cardiovascular diseases experienced by the physicians. The analysis of discussions across five simulated cases demonstrated a significant reduction in the time required for summarization using AI, with an average reduction of 79.98%. Additionally, we examined hallucination rates in AI-generated summaries used in multidisciplinary medical discussions. The overall hallucination rate ranged from 1.01% to 5.73%, with an average of 3.62%, whereas the harmful hallucination rate varied from 0.00% to 2.09%, with an average of 0.49%. Furthermore, morphological analysis demonstrated that multidisciplinary assessments enabled a more complex and detailed representation of medical knowledge compared with single-physician assessments. We examined structural differences between multidisciplinary and single-physician assessments using centrality metrics derived from the knowledge graph. In this study, we demonstrated that AI-assisted summarization significantly reduced the time required for medical discussions while maintaining structured knowledge representation. These findings support the feasibility of AI-assisted chat-based discussions as a human-centered approach to multidisciplinary medical decision-making.


Build Your Own Robot Friend: An Open-Source Learning Module for Accessible and Engaging AI Education

Shi, Zhonghao, O'Connell, Allison, Li, Zongjian, Liu, Siqi, Ayissi, Jennifer, Hoffman, Guy, Soleymani, Mohammad, Matarić, Maja J.

arXiv.org Artificial Intelligence

As artificial intelligence (AI) plays an increasingly important role in our society and global economy, AI education and literacy have become necessary components of college and K-12 education to prepare students for an AI-powered society. However, current AI curricula have not yet been made accessible and engaging enough for students and schools from all socio-economic backgrounds with different educational goals. In this work, we developed an open-source learning module for college and high school students, which allows students to build their own robot companion from the ground up. This open platform can be used to provide hands-on experience and introductory knowledge about various aspects of AI, including robotics, machine learning (ML), software engineering, and mechanical engineering. Because of the social and personal nature of a socially assistive robot companion, this module also puts a special emphasis on human-centered AI, enabling students to develop a better understanding of human-AI interaction and AI ethics through hands-on learning activities. With open-source documentation, assembly manuals, and affordable materials, students from different socio-economic backgrounds can personalize their learning experience based on their individual educational goals. To evaluate the student-perceived quality of our module, we conducted a usability testing workshop with 15 college students recruited from a minority-serving institution. Our results indicate that our AI module is effective, easy to follow, and engaging, and that it increases student interest in studying AI/ML and robotics in the future. We hope that this work will contribute toward accessible and engaging AI education in human-AI interaction for college and high school students.


Applying HCAI in developing effective human-AI teaming: A perspective from human-AI joint cognitive systems

Xu, Wei, Gao, Zaifeng

arXiv.org Artificial Intelligence

Research and practice have adopted human-AI teaming (HAT) as a new paradigm for developing AI systems. HAT recognizes that AI will function as a teammate rather than simply a tool in collaboration with humans. Effective human-AI teams need to be capable of taking advantage of the unique abilities of both humans and AI while overcoming the known challenges and limitations of each member, augmenting human capabilities, and raising joint performance beyond that of either entity. The National AI Research and Strategic Plan 2023 update recognized that research programs focusing primarily on the independent performance of AI systems generally fail to consider the functionality that AI must provide within the context of dynamic, adaptive, and collaborative teams, and it calls for further research on human-AI teaming and collaboration. However, there has been debate about whether AI can work as a teammate with humans. The primary concern is that adopting the "teaming" paradigm contradicts the human-centered AI (HCAI) approach, resulting in humans losing control of AI systems. This article further analyzes the HAT paradigm and the debates. Specifically, we elaborate on our proposed conceptual framework of human-AI joint cognitive systems (HAIJCS) and apply it to represent HAT under the HCAI umbrella. We believe that HAIJCS may help adopt HAT while enabling HCAI. The implications and future work for HAIJCS are also discussed. Insights: AI has led to the emergence of a new form of human-machine relationship: human-AI teaming (HAT), a paradigmatic shift in human-AI systems; we must follow a human-centered AI (HCAI) approach when applying HAT as a new design paradigm; we propose a conceptual framework of human-AI joint cognitive systems (HAIJCS) to represent and implement HAT for developing effective human-AI teaming.


The Foundation Model Transparency Index

Bommasani, Rishi, Klyman, Kevin, Longpre, Shayne, Kapoor, Sayash, Maslej, Nestor, Xiong, Betty, Zhang, Daniel, Liang, Percy

arXiv.org Artificial Intelligence

Foundation models have rapidly permeated society, catalyzing a wave of generative AI applications spanning enterprise and consumer-facing contexts. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies (e.g., social media). Reversing this trend is essential: transparency is a vital precondition for public accountability, scientific innovation, and effective governance. To assess the transparency of the foundation model ecosystem and help improve transparency over time, we introduce the Foundation Model Transparency Index. The Foundation Model Transparency Index specifies 100 fine-grained indicators that comprehensively codify transparency for foundation models, spanning the upstream resources used to build a foundation model (e.g., data, labor, compute), details about the model itself (e.g., size, capabilities, risks), and the downstream use (e.g., distribution channels, usage policies, affected geographies). We score 10 major foundation model developers (e.g., OpenAI, Google, Meta) against the 100 indicators to assess their transparency. To facilitate and standardize assessment, we score developers in relation to their practices for their flagship foundation model (e.g., GPT-4 for OpenAI, PaLM 2 for Google, Llama 2 for Meta). We present 10 top-level findings about the foundation model ecosystem: for example, no developer currently discloses significant information about the downstream impact of its flagship model, such as the number of users, affected market sectors, or how users can seek redress for harm. Overall, the Foundation Model Transparency Index establishes the level of transparency today to drive progress on foundation model governance via industry standards and regulatory intervention.


User-Centered Design (IX): A "User Experience 3.0" Paradigm Framework in the Intelligence Era

Xu, Wei

arXiv.org Artificial Intelligence

The field of user experience (UX), built on the design philosophy of "user-centered design," is moving into the intelligence era. However, the existing UX paradigm is oriented mainly toward non-intelligent systems and lacks a systematic approach to UX for intelligent systems. Throughout its development, the UX paradigm has evolved with each successive wave of technology, and the intelligence era now places new demands on it. For this reason, this paper proposes a "UX 3.0" paradigm framework and a corresponding UX methodology system for the intelligence era. The "UX 3.0" paradigm framework includes five categories of UX methods: ecological experience, innovation-enabled experience, AI-enabled experience, human-AI interaction-based experience, and human-AI collaboration-based experience methods, each providing multiple corresponding UX paradigmatic orientations. The "UX 3.0" paradigm helps improve existing UX methods and provides methodological support for the research and application of UX in developing intelligent systems. Finally, this paper looks forward to future research and applications of the "UX 3.0" paradigm.


Making AI Fair, and How to Use It

Communications of the ACM

A new technology, broadly deployed, raises profound questions about its impact on American society. Government agencies wonder whether this technology should be used to make automated decisions about Americans. Academic experts call attention to concerns about fairness and accountability. Comments from the public are requested. A White House press conference is announced.


Why You Must Embrace Responsible AI Now

#artificialintelligence

"What we're hearing from our friends and thought leaders in this space that pay close attention to the regulations is just behave as though you're under the European Union's AI Act guidelines, whether you're in Europe, America, or anywhere else," says Roetzer. Regulations like the AI Act will be used as a template by other governments soon. You can't avoid issues around responsible and ethical AI. Regulations will force you to act. Even if you're an AI beginner, you'll quickly run into ethical issues around data, how it's used, and who provides it. You need an AI ethics policy or guidelines.


Participation Interfaces for Human-Centered AI

McGregor, Sean

arXiv.org Artificial Intelligence

Accommodating diverse stakeholder groups during system design, development, and deployment requires tools for eliciting disparate system interests and collaboration interfaces that support negotiation to balance those interests. This paper introduces interactive visual "participation interfaces" for Markov Decision Processes (MDPs) and collaborative ranking problems as examples that restore a human-centered locus of control. Human-centered design has long been a software design philosophy centered on users [14, 9]. In extending this practice to human-centered artificial intelligence (HCAI), the term "human" sometimes refers to users, but increasingly often it refers to humanity writ large. As a matter of expediency, practitioners of HCAI distill the collective design target from "humanity" to the "stakeholders" affected by the intelligent system in question.


Global AI Ethics Agreement Commits Universities to Human-Centered AI

#artificialintelligence

A new global agreement has been established by eight worldwide universities to commit to the development of human-centered approaches to artificial intelligence (AI). The newest university to join the agreement, which could impact people all across the globe, was the University of Florida (UF).


Why foundation models in AI need to be released responsibly

#artificialintelligence

Percy Liang is director of the Center for Research on Foundation Models, a faculty affiliate at the Stanford Institute for Human-Centered AI and an associate professor of Computer Science at Stanford University. Humans are not very good at forecasting the future, especially when it comes to technology. Foundation models are a new class of large-scale neural networks with the ability to generate text, audio, video and images. These models will anchor all kinds of applications and hold the power to influence many aspects of society. It's difficult for anyone, even experts, to imagine where this technology will lead in the coming years.