
0561bc7ecba98e39ca7994f93311ba23-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their thoughtful feedback, including the comment that "researchers working on pairwise comparisons and preference learning should find this paper to be interesting". We also plan to make our code available as soon as the review period concludes. In our derivation, we pose the problem in a noiseless environment only for simplicity. For similar reasons, we did not compare our method against algorithms utilizing different models of preference. As with any recommender system, practical considerations are important.


Online Partitioned Local Depth for semi-supervised applications

Foley, John D., Lee, Justin T.

arXiv.org Machine Learning

We introduce an extension of the partitioned local depth (PaLD) algorithm adapted to online applications such as semi-supervised prediction. The new algorithm we present, online PaLD, is well suited to situations where it is possible to pre-compute a cohesion network from a reference dataset. After $O(n^3)$ steps to construct a queryable data structure, online PaLD can extend the cohesion network to a new data point in $O(n^2)$ time. Our approach complements previous speed-up approaches based on approximation and parallelism. As illustrations, we present applications to online anomaly detection and semi-supervised classification for health-care datasets.
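The complexity claims are easier to see against the batch computation. Below is a minimal sketch of batch PaLD cohesion; the function name `pald_cohesion` and the triple-loop rendering are my own illustration of the standard $O(n^3)$ algorithm, not the authors' online variant. Each pair $(x, y)$ defines a local focus, and every focus member splits one unit of support between $x$ and $y$:

```python
import numpy as np

def pald_cohesion(D):
    """Batch cohesion matrix from a symmetric pairwise distance matrix D.
    C[x, w] accumulates the support point w lends to x across all pairwise
    "contests" between x and every other point y."""
    n = D.shape[0]
    C = np.zeros((n, n))
    for x in range(n):
        for y in range(n):
            if x == y:
                continue
            # Local focus: points at least as close to x or to y
            # as x and y are to each other.
            focus = np.flatnonzero((D[:, x] <= D[x, y]) | (D[:, y] <= D[x, y]))
            for w in focus:
                if D[w, x] < D[w, y]:
                    C[x, w] += 1.0 / focus.size   # w sides with x
                elif D[w, x] == D[w, y]:
                    C[x, w] += 0.5 / focus.size   # tie: split support
    return C / (n - 1)

# Toy example: four points on a line.
pts = np.array([0.0, 1.0, 2.0, 5.0])
D = np.abs(pts[:, None] - pts[None, :])
C = pald_cohesion(D)
```

The online variant described in the abstract presumably caches per-pair focus information (the queryable data structure) so that adding one point only touches the $O(n^2)$ pairs involving it; the sketch above recomputes everything from scratch.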


Ethics Readiness of Artificial Intelligence: A Practical Evaluation Method

Adomaitis, Laurynas, Israel-Jost, Vincent, Grinbaum, Alexei

arXiv.org Artificial Intelligence

In the governance of emerging technologies, ethical guidance has often relied on so-called soft law instruments (codes of conduct, guidelines, or frameworks) designed to promote responsible behavior without imposing binding legal constraints. This is partly due to the difficulty of imposing harmonized regulations across the EU, especially in a global context characterized by strong reservations expressed by other international actors, e.g. the United States of America, with regard to regulation of artificial intelligence (AI) that "unduly burdens AI innovation" (Kratsios, Sacks, and Rubio 2025). Another reason relates to the principle, upheld in several member states such as Germany, that scientific freedom is protected by constitutional law. Nevertheless, the recent trajectory of technological regulation in the European Union shows that soft law can evolve into hard law: this has been the case, notably, with the adoption of the AI Act (European Commission 2022; Terpan 2015).


The Gender Code: Gendering the Global Governance of Artificial Intelligence

Cupac, Jelena

arXiv.org Artificial Intelligence

This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. In light of these gaps, this paper argues that effective AI governance must be intersectional, enforceable, and inclusive. This is key to moving beyond tokenism toward meaningful equity and to preventing the reinforcement of existing inequalities. The study contributes to ethical AI debates by highlighting the importance of gender-sensitive governance in building a just technological future.


The Loss of Control Playbook: Degrees, Dynamics, and Preparedness

Stix, Charlotte, Hallensleben, Annika, Ortega, Alejandro, Pistillo, Matteo

arXiv.org Artificial Intelligence

This research report addresses the absence of an actionable definition for Loss of Control (LoC) in AI systems by developing a novel taxonomy and preparedness framework. Despite increasing policy and research attention, existing LoC definitions vary significantly in scope and timeline, hindering effective LoC assessment and mitigation. To address this issue, we draw from an extensive literature review and propose a graded LoC taxonomy, based on the metrics of severity and persistence, that distinguishes between Deviation, Bounded LoC, and Strict LoC. We model pathways toward a societal state of vulnerability in which sufficiently advanced AI systems have acquired or could acquire the means to cause Bounded or Strict LoC once a catalyst, either misalignment or pure malfunction, materializes. We argue that this state becomes increasingly likely over time, absent strategic intervention, and propose a strategy to avoid reaching a state of vulnerability. Rather than focusing solely on intervening on AI capabilities and propensities potentially relevant for LoC, or on preventing potential catalysts, we introduce a complementary framework that emphasizes three extrinsic factors: Deployment context, Affordances, and Permissions (the DAP framework). Compared to work on intrinsic factors and catalysts, this framework has the advantage of being actionable today. Finally, we put forward a plan to maintain preparedness and prevent the occurrence of LoC outcomes should a state of societal vulnerability be reached, focusing on governance measures (threat modeling, deployment policies, emergency response) and technical controls (pre-deployment testing, control measures, monitoring) that could maintain a condition of perennial suspension.


Simulating Life Paths with Digital Twins: AI-Generated Future Selves Influence Decision-Making and Expand Human Choice

Poonsiriwong, Rachel, Archiwaranguprok, Chayapatr, Albrecht, Constanze, Yin, Peggy, Powdthavee, Nattavudh, Hershfield, Hal, Lertsutthiwong, Monchai, Winson, Kavin, Pataranutaporn, Pat

arXiv.org Artificial Intelligence

Major life transitions demand high-stakes decisions, yet people often struggle to imagine how their future selves will live with the consequences. To support this limited capacity for mental time travel, we introduce AI-enabled digital twins that have "lived through" simulated life scenarios. Rather than predicting optimal outcomes, these simulations extend prospective cognition by making alternative futures vivid enough to support deliberation without assuming which path is best. We evaluate this idea in a randomized controlled study (N=192) using multimodal synthesis (facial age progression, voice cloning, and large language model dialogue) to create personalized avatars representing participants 30 years forward. Young adults 18 to 28 years old described pending binary decisions and were assigned to guided imagination or one of four avatar conditions: single-option, balanced dual-option, or expanded three-option with a system-generated novel alternative. Results showed asymmetric effects: single-sided avatars increased shifts toward the presented option, while balanced presentation produced movement toward both. Introducing a system-generated third option increased adoption of this new alternative compared to control, suggesting that AI-generated future selves can expand choice by surfacing paths that might otherwise go unnoticed. Participants rated evaluative reasoning and eudaimonic meaning-making as more important than emotional or visual vividness. Perceived persuasiveness and baseline agency predicted decision change. These findings advance understanding of AI-mediated episodic prospection and raise questions about autonomy in AI-augmented decisions.


Ethically-Aware Participatory Design of a Productivity Social Robot for College Students

Lalwani, Himanshi, Salam, Hanan

arXiv.org Artificial Intelligence

College students often face academic and life stressors affecting productivity, especially students with Attention Deficit Hyperactivity Disorder (ADHD) who experience executive functioning challenges. Conventional productivity tools typically demand sustained self-discipline and consistent use, which many students struggle with, leading to disruptive app-switching behaviors. Socially Assistive Robots (SARs), known for their intuitive and interactive nature, offer promising potential to support productivity in academic environments, having been successfully utilized in domains like education, cognitive development, and mental health. To leverage SARs effectively in addressing student productivity, this study employed a Participatory Design (PD) approach, directly involving college students and a Student Success and Well-Being Coach in the design process. Through interviews and a collaborative workshop, we gathered detailed insights on productivity challenges and identified desirable features for a productivity-focused SAR. Importantly, ethical considerations were integrated from the outset, facilitating responsible and user-aligned design choices. Our contributions include comprehensive insights into student productivity challenges, SAR design preferences, and actionable recommendations for effective robot characteristics. Additionally, we present stakeholder-derived ethical guidelines to inform responsible future implementations of productivity-focused SARs in higher education.


Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor

Olteanu, Alexandra, Blodgett, Su Lin, Balayn, Agathe, Wang, Angelina, Diaz, Fernando, Calmon, Flavio du Pin, Mitchell, Margaret, Ekstrand, Michael, Binns, Reuben, Barocas, Solon

arXiv.org Artificial Intelligence

In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about the capabilities of AI systems. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception -- in addition to a more expansive understanding of (1) methodological rigor -- should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also provide useful language and a framework for much-needed dialogue about the AI community's work by researchers, policymakers, journalists, and other stakeholders.