Yang, Stephen
The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice
Delgado, Fernando, Yang, Stephen, Madaio, Michael, Yang, Qian
Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a participatory approach to AI design and development, it remains challenging to assess the extent to which any participatory approach grants substantive agency to stakeholders. This article thus aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation and through empirical investigation and critique of its current practices. Specifically, we derive a conceptual framework through synthesis of literature across technology design, political theory, and the social sciences that researchers and practitioners can leverage to evaluate approaches to participation in AI design. Additionally, we articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners. We use these empirical findings to understand the current state of participatory practice and subsequently provide guidance to better align participatory goals and methods in a way that accounts for practical constraints.
Predictive Patentomics: Forecasting Innovation Success and Valuation with ChatGPT
Yang, Stephen
Analysis of innovation has been fundamentally limited by conventional approaches that rely on broad, structural variables. This paper pushes those boundaries, taking a large language model (LLM) approach to patent analysis built on OpenAI's ChatGPT. OpenAI's state-of-the-art textual embeddings capture complex information about the quality and impact of each invention and power deep learning predictive models. The nuanced embeddings drive a 24% incremental improvement in R-squared when predicting patent value and clearly isolate the worst and best applications. These models enable a revision of the contemporary Kogan, Papanikolaou, Seru, and Stoffman (2017) valuation of patents by a median deviation of 1.5 times, accounting for potential institutional predictions. Furthermore, the market fails to incorporate timely information about applications: a long-short portfolio based on predicted acceptance rates achieves significant abnormal returns of 3.3% annually. The models present an opportunity to revolutionize startup and small-firm corporate policy vis-a-vis patenting.
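The pipeline the abstract describes can be sketched as: embed patent text, then fit a supervised model on the embeddings to predict a value measure. The following is a minimal illustration, not the paper's code; it mocks the embedding step with random vectors (in the paper, OpenAI text embeddings of patent documents would supply this matrix) and uses ridge regression from scikit-learn as a stand-in for the deep learning models.

```python
# Illustrative sketch only: predict a patent-value proxy from text embeddings.
# The "embeddings" below are random stand-ins for real OpenAI text embeddings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_patents, dim = 500, 64

# Stand-in for embeddings of patent text (one row per patent application).
embeddings = rng.normal(size=(n_patents, dim))

# Synthetic "patent value" that depends linearly on the embedding plus noise.
true_weights = rng.normal(size=dim)
patent_value = embeddings @ true_weights + rng.normal(scale=0.5, size=n_patents)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, patent_value, test_size=0.2, random_state=0
)

# Regularized linear model as a simple placeholder for the paper's deep models.
model = Ridge(alpha=1.0).fit(X_train, y_train)
r2 = r2_score(y_test, model.predict(X_test))
print(f"out-of-sample R-squared: {r2:.2f}")
```

Any real application would substitute actual embedded patent text and a held-out evaluation design appropriate to panel data; the sketch only shows the shape of the embedding-to-prediction step.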
Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir"
Delgado, Fernando, Yang, Stephen, Madaio, Michael, Yang, Qian
There is a growing consensus in HCI and AI research that the design of AI systems needs to engage and empower the stakeholders who will be affected by them. However, the manner in which stakeholders should participate in AI design remains unclear. This workshop paper aims to ground what we dub the 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices, via a survey of recently published research and a dozen semi-structured interviews with AI researchers and practitioners. Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design and articulates a set of empirical findings that together detail the contemporary landscape of participatory practice in AI design. These findings can help bootstrap a more principled discussion of how participatory design (PD) of AI should move forward across the AI, HCI, and other research communities.
Clinical Evidence Engine: Proof-of-Concept For A Clinical-Domain-Agnostic Decision Support Infrastructure
Hou, Bojian, Zhang, Hao, Ladizhinsky, Gur, Yang, Stephen, Kuleshov, Volodymyr, Wang, Fei, Yang, Qian
Abstruse learning algorithms and complex datasets increasingly characterize modern clinical decision support systems (CDSS). As a result, clinicians cannot easily or rapidly scrutinize a CDSS recommendation when facing a difficult diagnosis or treatment decision in practice. Over-trust and under-trust are both frequent. Prior research has explored supporting such assessments by explaining CDSS data inputs and algorithmic mechanisms. This paper explores a different approach: providing precisely relevant scientific evidence from the biomedical literature. We present a proof-of-concept system, Clinical Evidence Engine, to demonstrate the technical and design feasibility of this approach across three domains (cardiovascular diseases, autism, cancer). Leveraging Clinical BioBERT, the system can effectively identify clinical trial reports based on lengthy clinical questions (e.g., "risks of catheter infection among adult patients in intensive care unit who require arterial catheters, if treated with povidone iodine-alcohol"). This capability enables the system to identify clinical trials relevant to diagnostic/treatment hypotheses -- a clinician's or a CDSS's. Further, Clinical Evidence Engine can identify key parts of a clinical trial abstract, including patient population (e.g., adult patients in intensive care unit who require arterial catheters), intervention (povidone iodine-alcohol), and outcome (risks of catheter infection). This capability opens up the possibility of enabling clinicians to 1) rapidly determine the match between a clinical trial and a clinical question, and 2) understand the result and contexts of the trial without extensive reading. We demonstrate this potential by illustrating two example use scenarios of the system. We discuss the idea of designing CDSS explanations not as specific to a particular system or algorithm, but as a domain-agnostic decision support infrastructure.
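The core retrieval step the abstract describes, matching a long clinical question against trial abstracts, can be illustrated with a toy ranking example. This is a hypothetical sketch, not the system's code: TF-IDF cosine similarity stands in for the Clinical BioBERT embeddings the paper actually uses, and the question and abstracts below are invented for illustration.

```python
# Hypothetical retrieval sketch: rank trial abstracts against a clinical
# question. TF-IDF similarity is a stand-in for Clinical BioBERT embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = ("risks of catheter infection among adult ICU patients "
            "requiring arterial catheters, treated with povidone iodine-alcohol")

# Invented example abstracts; only the first is on-topic.
abstracts = [
    "Povidone iodine-alcohol skin antisepsis and catheter-related infection "
    "in adult intensive care patients with arterial catheters.",
    "Behavioral outcomes of early intervention in children with autism.",
    "Chemotherapy dose escalation and survival in metastatic breast cancer.",
]

# Vectorize the question together with the candidate abstracts.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([question] + abstracts)

# Score each abstract by cosine similarity to the question and pick the best.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
best = int(scores.argmax())
print(f"best match: abstract {best} (score {scores[best]:.2f})")
```

A domain-tuned encoder such as Clinical BioBERT replaces the TF-IDF step in the real system, which is what lets it handle long, clinically phrased questions rather than keyword overlap alone.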
Initiating Interactions and Negotiating Approach: A Robotic Trash Can in the Field
Fischer, Kerstin (University of Southern Denmark) | Yang, Stephen (Stanford University) | Mok, Brian (Stanford University) | Maheshwari, Rohan (Stanford University) | Sirkin, David (Stanford University) | Ju, Wendy (Stanford University)
In this study, we address how people respond to a robotic trash can that initiates interactions and offers its service. We show that considerable coordination and negotiation work takes place, both between human and robot and between humans engaged in a joint activity, when the robot approaches. While getting attention was no problem in this scenario, the interactions posed significant problems for people who did not want the robot’s service. Unwillingness to interact with the robot was mostly communicated by withholding social signals, which means that human-robot interaction designers need not only to build in ways to respond to human social signals in a timely and appropriate manner, but also to maintain a representation of what kinds of signals could be expected, in order to interpret the ostensive absence of such signals adequately.
Every Tool in Its Place: Interaction and Collaboration with Robotic Drawers
Mok, Brian Ka-Jun (Stanford University) | Yang, Stephen (Stanford University) | Sirkin, David (Stanford University) | Ju, Wendy (Stanford University)
In this study, we examined how participants (N = 20) interacted and collaborated with a set of robotic drawers to accomplish a building task. The drawers’ behavior varied along two dimensions: proactive/reactive and expressive/nonexpressive motions. The results of our study indicated that participants considered an expressive robot to be more involved and interested in the interaction. They also found that while proactive or expressive robots could dominate the interaction, proactivity might negatively affect participants’ perception of their social status relative to that of the robot, whereas expressiveness did not. These findings show the importance of utilizing expressive movements when designing robots that collaborate with human users.