Do You Trust the Process?: Modeling Institutional Trust for Community Adoption of Reinforcement Learning Policies

Balepur, Naina, Pei, Xingrui, Sundaram, Hari

arXiv.org Artificial Intelligence

Many governmental bodies are adopting AI policies for decision-making. In particular, Reinforcement Learning has been used to design policies that citizens would be expected to follow if implemented. Much RL work assumes that citizens follow these policies and evaluates them with this in mind. However, we know from prior work that without institutional trust, citizens will not follow policies put in place by governments. In this work, we develop a trust-aware RL algorithm for resource allocation in communities. We consider the case of humanitarian engineering, where an organization aims to distribute some technology or resource to community members. We use a Deep Deterministic Policy Gradient approach to learn a resource allocation that fits the needs of the organization. Then, we simulate resource allocation according to the learned policy, and model the changes in institutional trust of community members. We investigate how this incorporation of institutional trust affects outcomes, and ask how effectively an organization can learn policies if trust values are private. We find that incorporating trust into RL algorithms can lead to more successful policies, specifically when the organization's goals are less certain. We find that more conservative trust estimates lead to increased fairness and higher average community trust, though organization success suffers. Finally, we explore a strategy to prevent unfair outcomes to communities. We implement a quota system, imposed by an external entity, which decreases the organization's utility when it does not serve enough community members. We find this intervention can improve fairness and trust among communities in some cases, while decreasing the success of the organization. This work underscores the importance of institutional trust in algorithm design and implementation, and identifies a tension between organization success and community well-being.
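The abstract does not give implementation details, so the following is only a minimal sketch of the kind of trust-aware allocation dynamics it describes: members comply with the learned policy with probability tied to their (possibly private) trust, trust rises when a member is served and decays otherwise, and an external quota penalizes the organization for serving too few members. The class name, update rules, and constants are illustrative assumptions, not the authors' implementation; a DDPG agent would be trained on top of such an environment.

```python
import numpy as np

class TrustAwareAllocationEnv:
    """Toy environment sketch: organization allocates a resource to n members."""

    def __init__(self, n_members=10, trust_gain=0.1, trust_decay=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.trust = self.rng.uniform(0.3, 0.7, size=n_members)  # private trust values
        self.trust_gain = trust_gain    # trust increase when a member is served
        self.trust_decay = trust_decay  # trust decrease when a member is skipped
        self.n_members = n_members

    def step(self, allocation):
        """allocation: vector in [0, 1]^n produced by the (e.g. DDPG) policy."""
        served = allocation > 0.5
        # Members follow the policy with probability equal to their current trust.
        comply = self.rng.random(self.n_members) < self.trust
        # Organization utility: resources that are both allocated and actually taken up.
        org_utility = float(np.sum(served & comply))
        # Institutional trust rises for served members and decays for skipped ones.
        self.trust = np.clip(
            self.trust + np.where(served, self.trust_gain, -self.trust_decay), 0.0, 1.0
        )
        return self.trust.copy(), org_utility

    def quota_penalty(self, allocation, quota=0.4, penalty=5.0):
        """External quota: reduce utility when too few members are served."""
        served_frac = float(np.mean(allocation > 0.5))
        return penalty * max(0.0, quota - served_frac)
```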



Human-AI Narrative Synthesis to Foster Shared Understanding in Civic Decision-Making

Overney, Cassandra, Jiang, Hang, Haider, Urooj, Moe, Cassandra, Mangat, Jasmine, Pantano, Frank, McMillian, Effie G., Riggins, Paul, Gillani, Nabeel

arXiv.org Artificial Intelligence

Community engagement processes in representative political contexts, like school districts, generate massive volumes of feedback that overwhelm traditional synthesis methods, creating barriers to shared understanding not only between civic leaders and constituents but also among community members. To address these barriers, we developed StoryBuilder, a human-AI collaborative pipeline that transforms community input into accessible first-person narratives. Using 2,480 community responses from an ongoing school rezoning process, we generated 124 composite stories and deployed them through a mobile-friendly StorySharer interface. Our mixed-methods evaluation combined a four-month field deployment, user studies with 21 community members, and a controlled experiment examining how narrative composition affects participant reactions. Field results demonstrate that narratives helped community members relate across diverse perspectives. In the experiment, experience-grounded narratives generated greater respect and trust than opinion-heavy narratives. We contribute a human-AI narrative synthesis system and insights on its varied acceptance and effectiveness in a real-world civic context.
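The paper does not spell out StoryBuilder's prompting internals, but a minimal sketch of the narrative-synthesis step might look like the following. `call_llm` is a hypothetical placeholder for whatever model or API is actually used, and the prompt wording is an assumption; the point is only to show how several community responses on a shared theme could be composed into one experience-grounded, first-person narrative.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: plug in an LLM client of your choice here.
    raise NotImplementedError

def synthesize_composite_story(responses: list[str], theme: str) -> str:
    """Compose several community responses on one theme into a composite
    first-person narrative grounded in the responses' lived experiences."""
    excerpts = "\n".join(f"- {r}" for r in responses)
    prompt = (
        f"The following are community responses about '{theme}' from a school "
        "rezoning engagement process:\n"
        f"{excerpts}\n\n"
        "Write a short first-person composite narrative that stays grounded in "
        "the concrete experiences above and does not introduce opinions that "
        "are not present in the responses."
    )
    return call_llm(prompt)
```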


LLMs for Resource Allocation: A Participatory Budgeting Approach to Inferring Preferences

Damle, Sankarshan, Faltings, Boi

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly expected to handle complex decision-making tasks, yet their ability to perform structured resource allocation remains underexplored. Evaluating their reasoning is also difficult due to data contamination and the static nature of existing benchmarks. We present a dual-purpose framework leveraging Participatory Budgeting (PB) both as (i) a practical setting for LLM-based resource allocation and (ii) an adaptive benchmark for evaluating their reasoning capabilities. We task LLMs with selecting project subsets under feasibility (e.g., budget) constraints via three prompting strategies: greedy selection, direct optimization, and a hill-climbing-inspired refinement. We benchmark LLMs' allocations against a utility-maximizing oracle. Interestingly, we also test whether LLMs can infer structured preferences from natural-language voter input or metadata, without explicit votes. By comparing allocations based on inferred preferences to those from ground-truth votes, we evaluate LLMs' ability to extract preferences from open-ended input. Our results underscore the role of prompt design and show that LLMs hold promise for mechanism design with unstructured inputs.
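As a point of reference for the benchmark described above, the utility-maximizing oracle for approval-based participatory budgeting is a knapsack-style problem. The sketch below is an illustrative greedy utility-per-cost baseline under an overall budget constraint, not the paper's implementation; an exact oracle would require dynamic programming or an ILP solver.

```python
def greedy_allocation(projects, budget):
    """projects: list of (name, cost, utility), where utility could be, e.g.,
    the number of approval votes. Returns the names of selected projects."""
    ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
    selected, spent = [], 0.0
    for name, cost, utility in ranked:
        if spent + cost <= budget:  # feasibility (budget) constraint
            selected.append(name)
            spent += cost
    return selected

# Example with made-up numbers: three projects, budget of 100.
projects = [("park", 60, 120), ("library", 50, 90), ("bike lane", 40, 85)]
print(greedy_allocation(projects, budget=100))  # ['bike lane', 'park']
```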


Developing a Mixed-Methods Pipeline for Community-Oriented Digitization of Kwak'wala Legacy Texts

Agarwal, Milind, Rosenblum, Daisy, Anastasopoulos, Antonios

arXiv.org Artificial Intelligence

Kwak'wala is an Indigenous language spoken in British Columbia, with a rich legacy of published documentation spanning more than a century, and an active community of speakers, teachers, and learners engaged in language revitalization. Over 11 volumes of the earliest texts created during the collaboration between Franz Boas and George Hunt have been scanned but remain unreadable by machines. Complete digitization through optical character recognition has the potential to facilitate transliteration into modern orthographies and the creation of other language technologies. In this paper, we apply the latest OCR techniques to a series of Kwak'wala texts only accessible as images, and discuss the challenges and unique adaptations necessary to make such technologies work for these real-world texts. Building on previous methods, we propose using a mix of off-the-shelf OCR methods, language identification, and masking to effectively isolate Kwak'wala text, along with post-correction models, to produce a final high-quality transcription.
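The abstract only names the pipeline stages, so the sketch below is a schematic of that kind of mixed approach: off-the-shelf OCR, per-line language identification used to mask out the parallel English text, and a post-correction pass. Every function here is a hypothetical placeholder for an off-the-shelf or learned component, not the authors' code.

```python
def run_ocr(page_image):
    """Off-the-shelf OCR: returns a list of recognized text lines."""
    raise NotImplementedError

def identify_language(line: str) -> str:
    """Per-line language ID, e.g. 'kwakwala' vs. 'english'."""
    raise NotImplementedError

def post_correct(line: str) -> str:
    """Learned post-correction model for OCR errors in Kwak'wala orthography."""
    raise NotImplementedError

def digitize_page(page_image):
    lines = run_ocr(page_image)
    # Masking step: keep only lines identified as Kwak'wala, setting aside the
    # parallel English text so it does not pollute the transcription.
    kwakwala_lines = [line for line in lines if identify_language(line) == "kwakwala"]
    return [post_correct(line) for line in kwakwala_lines]
```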


AI and the Future of Digital Public Squares

Goldberg, Beth, Acosta-Navas, Diana, Bakker, Michiel, Beacock, Ian, Botvinick, Matt, Buch, Prateek, DiResta, Renée, Donthi, Nandika, Fast, Nathanael, Iyer, Ravi, Jalan, Zaria, Konya, Andrew, Danciu, Grace Kwak, Landemore, Hélène, Marwick, Alice, Miller, Carl, Ovadya, Aviv, Saltz, Emily, Schirch, Lisa, Shalom, Dalit, Siddarth, Divya, Sieker, Felix, Small, Christopher, Stray, Jonathan, Tang, Audrey, Tessler, Michael Henry, Zhang, Amy

arXiv.org Artificial Intelligence

Two substantial technological advances have reshaped the public square in recent decades: first with the advent of the internet and second with the recent introduction of large language models (LLMs). LLMs offer opportunities for a paradigm shift towards more decentralized, participatory online spaces that can be used to facilitate deliberative dialogues at scale, but also create risks of exacerbating societal schisms. Here, we explore four applications of LLMs to improve digital public squares: collective dialogue systems, bridging systems, community moderation, and proof-of-humanity systems. Building on the input from over 70 civil society experts and technologists, we argue that LLMs both afford promising opportunities to shift the paradigm for conversations at scale and pose distinct risks for digital public squares. We lay out an agenda for future research and investments in AI that will strengthen digital public squares and safeguard against potential misuses of AI.


Provocation: Who benefits from "inclusion" in Generative AI?

Dalal, Samantha, Hall, Siobhan Mackenzie, Johnson, Nari

arXiv.org Artificial Intelligence

The demand for accurate and representative generative AI systems places increased demands on participatory evaluation structures. While these participatory structures are paramount to ensuring that non-dominant values, knowledge, and material culture are also reflected in AI models and the media they generate, we argue that dominant structures of community participation in AI development and evaluation are not explicit enough about the benefits and harms that members of socially marginalized groups may experience as a result of their participation. Without explicit interrogation of these benefits by AI developers, we as a community may also remain blind to the immensity of the systemic change that is needed. To support this provocation, we present a speculative case study developed from our own collective experiences as AI researchers. We use this speculative context to itemize the barriers that must be overcome for the proposed benefits to marginalized communities to be realized, and harms mitigated.


A Capabilities Approach to Studying Bias and Harm in Language Technologies

Nigatu, Hellina Hailu, Talat, Zeerak

arXiv.org Artificial Intelligence

In moving from excluding the majority of the world's languages to blindly adopting what we make for English, we risk importing the same harms we have, at best, mitigated and, at the least, measured for English. For instance, Yong et al. [15] showed how prompting GPT-4 in low-resource languages circumvents guardrails that are effective in English. However, in evaluating and mitigating harms arising from adopting new technologies into such contexts, we often disregard (1) the actual community needs for Language Technologies, and (2) biases and fairness issues within the context of the communities. Here, we consider fairness, bias, and inclusion in Language Technologies through the lens of the Capabilities Approach [12]. The Capabilities Approach centers what people are capable of achieving, given their intersectional social, political, and economic contexts, instead of what resources are (theoretically) available to them. In the following sections, we detail the Capabilities Approach, its relationship to multilingual and multicultural evaluation, and how the framework affords meaningful collaboration with community members in defining and measuring harms of Language Technologies.

2 THE CAPABILITIES APPROACH

The Capabilities Approach is a framework in developmental economic studies proposed by Amartya Sen in a series of articles published as far back as 1974 [1]. It has been applied to varied fields including environmental justice [e.g.


Developing a system for real-time sensing of flooded roads

AIHub

Roadway-related incidents are a leading cause of flood fatalities nationwide, but limited flood-reporting tools make it difficult to evaluate road conditions in real time. Existing tools -- traffic cameras, water-level sensors and even social media data -- can provide observations of flooding, but they are often not primarily designed for sensing flood conditions on roads and do not work in conjunction with one another. A network of dedicated sensors could improve situational awareness of flood levels; however, such networks are expensive to operate at scale. Engineers at Rice University have developed a possible solution to this problem: an automated data fusion framework called OpenSafe Fusion. Short for Open Source Situational Awareness Framework for Mobility using Data Fusion, OpenSafe Fusion leverages existing individual reporting mechanisms and public data sources to sense quickly evolving road conditions during urban flooding events, which are becoming increasingly frequent.
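The article does not describe OpenSafe Fusion's internals; purely as an illustration of fusing heterogeneous flood reports per road segment, one simple scheme is a noisy-OR style combination of weighted evidence from each source. The source names and weights below are assumptions for the sketch, not values from the framework.

```python
# Assumed, illustrative reliability weights per data source.
SOURCE_WEIGHTS = {
    "traffic_camera": 0.5,
    "water_level_sensor": 0.8,
    "social_media_report": 0.3,
}

def fuse_flood_evidence(reports):
    """reports: list of (road_segment_id, source, confidence in [0, 1]).
    Returns a flood score per segment using an independent-evidence
    (noisy-OR style) combination: 1 - prod(1 - weighted confidence)."""
    scores = {}
    for segment, source, conf in reports:
        weighted = SOURCE_WEIGHTS.get(source, 0.2) * conf
        scores[segment] = 1.0 - (1.0 - scores.get(segment, 0.0)) * (1.0 - weighted)
    return scores

# Two independent observations on the same segment raise its flood score.
print(fuse_flood_evidence([
    ("I-10_exit_5", "water_level_sensor", 0.9),
    ("I-10_exit_5", "social_media_report", 0.7),
]))
```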