
Collaborating Authors

 Monroy-Hernández, Andrés


FairFare: A Tool for Crowdsourcing Rideshare Data to Empower Labor Organizers

arXiv.org Artificial Intelligence

In recent years, labor organizers representing rideshare and delivery workers have advocated for regulations that set wage floors and job-loss protections to improve working conditions in the rideshare industry [67]. To call for these improvements, organizers need to understand workers' existing conditions [37], a significant data access and social computing challenge in the rideshare industry. Labor organizers representing rideshare workers typically rely on a collage of qualitative anecdotes and screenshots to provide data about existing working conditions [24]. While these qualitative data provide rich, "thick descriptions" [30] of workers' experiences, they are often dismissed by platforms as non-representative, cherry-picked examples. Rideshare platforms, on the other hand, have exclusive access to large-scale, comprehensive quantitative datasets of driver, trip, and pay data that they can draw upon to create authoritative narratives about working conditions in their industry [72]. Labor organizers need comprehensive access to large-scale quantitative data describing working conditions to conduct rigorous, independent investigations and contest platform-driven narratives. There are tools and legal frameworks that empower individual rideshare workers to independently access quantitative work data (e.g., Gridwise and Data Subject Access Requests). However, these tools and frameworks do not provide an intuitive way to aggregate individual worker data into a dataset that provides collective insight into overarching working conditions. Algorithmic auditing scholarship provides methods, like crowdsourcing data, to independently investigate black-boxed systems [66].
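To make the aggregation step concrete, the following is a minimal Python sketch of how individually exported trip records might be pooled into collective pay metrics. The CSV layout, column names (driver_id, fare_paid_by_rider, driver_payout, minutes_worked), and metrics are illustrative assumptions, not FairFare's actual implementation.

```python
import csv
import glob
from statistics import median

def load_trips(pattern="exports/*_trips.csv"):
    """Read hypothetical per-driver CSV exports into a single list of trip records."""
    trips = []
    for path in glob.glob(pattern):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                trips.append({
                    "driver_id": row["driver_id"],
                    "rider_fare": float(row["fare_paid_by_rider"]),
                    "payout": float(row["driver_payout"]),
                    "minutes": float(row["minutes_worked"]),
                })
    return trips

def collective_stats(trips):
    """Aggregate individual trip records into collective working-condition metrics."""
    if not trips:
        return {"n_trips": 0, "n_drivers": 0}
    take_rates = [1 - t["payout"] / t["rider_fare"] for t in trips if t["rider_fare"] > 0]
    hourly = [60 * t["payout"] / t["minutes"] for t in trips if t["minutes"] > 0]
    return {
        "n_trips": len(trips),
        "n_drivers": len({t["driver_id"] for t in trips}),
        "median_platform_take_rate": median(take_rates),
        "median_hourly_payout": median(hourly),
    }

if __name__ == "__main__":
    print(collective_stats(load_trips()))
```

Pooling across drivers is what turns individually accessible records into the kind of collective, large-scale evidence the abstract argues organizers currently lack.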


QuaLLM: An LLM-based Framework to Extract Quantitative Insights from Online Forums

arXiv.org Artificial Intelligence

Online discussion forums provide crucial data to understand the concerns of a wide range of real-world communities. However, the typical qualitative and quantitative methods used to analyze those data, such as thematic analysis and topic modeling, are infeasible to scale or require significant human effort to translate outputs to human-readable forms. This study introduces QuaLLM, a novel LLM-based framework to analyze and extract quantitative insights from text data on online forums. The framework consists of a novel prompting methodology and evaluation strategy. We applied this framework to analyze over one million comments from two Reddit rideshare worker communities, marking the largest study of its kind. The limited use of LLMs for online text data analysis underscores a significant gap in the research landscape, and our work addresses this gap through the following contributions: (i) We introduce QuaLLM, an LLM-based framework consisting of a novel prompting methodology and evaluation strategy for the analysis and extraction of quantitative insights from text data on online forums. (ii) We apply our framework to a case study on Reddit's rideshare communities, analyzing over one million comments--the largest study of its kind--to identify worker concerns regarding AI and algorithmic platform decisions, responding to regulatory calls [49].
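As a rough illustration of the general pattern (not the paper's actual prompts or pipeline), the sketch below batches forum comments, asks a chat LLM for structured concern labels, and aggregates the labels into counts. The model name, prompt wording, JSON output schema, and the use of the OpenAI client are all assumptions made for illustration.

```python
import json
from collections import Counter
from openai import OpenAI  # any chat-completion client would do; used here only as a stand-in

client = OpenAI()

# Illustrative prompt and output schema, not QuaLLM's prompting methodology.
PROMPT = (
    "You will be given rideshare-forum comments, one per line. "
    "For each comment, name the worker concern it raises about platform or "
    "algorithmic decisions (e.g., pay, deactivation, dispatch). "
    'Respond with a JSON array of strings, one label per line ("none" if no concern).'
)

def extract_concerns(comments, model="gpt-4o-mini"):
    """Ask the LLM for one concern label per comment; returns a list of strings."""
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": "\n".join(comments)},
        ],
    )
    # A real pipeline would validate the reply; this sketch assumes well-formed JSON.
    return json.loads(reply.choices[0].message.content)

def summarize(comments, batch_size=20):
    """Batch comments, extract concern labels, and aggregate them into counts."""
    counts = Counter()
    for i in range(0, len(comments), batch_size):
        for label in extract_concerns(comments[i : i + batch_size]):
            if label != "none":
                counts[label] += 1
    return counts

# Example: summarize(["Acceptance rate dropped after I declined two pings", ...])
```

Batching plus structured labels is what lets a per-comment qualitative judgment scale to the quantitative, corpus-level counts the abstract describes.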


Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application

arXiv.org Artificial Intelligence

Trust is an important factor in people's interactions with AI systems. However, there is a lack of empirical studies examining how real end-users trust or distrust the AI system they interact with. Most research investigates one aspect of trust in lab settings with hypothetical end-users. In this paper, we provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application. We report findings from interviews with 20 end-users of a popular, AI-based bird identification app where we inquired about their trust in the app from many angles. We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors, and decided against app adoption in certain high-stakes scenarios. We also find domain knowledge and context are important factors for trust-related assessment and decision-making. We discuss the implications of our findings and provide recommendations for future research on trust in AI.


"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

arXiv.org Artificial Intelligence

Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.