inclusivity


From Framework to Reliable Practice: End-User Perspectives on Social Robots in Public Spaces

Oruma, Samson, Colomo-Palacios, Ricardo, Gkioulos, Vasileios

arXiv.org Artificial Intelligence

As social robots increasingly enter public environments, their acceptance depends not only on technical reliability but also on ethical integrity, accessibility, and user trust. This paper reports on a pilot deployment of an ARI social robot functioning as a university receptionist, designed in alignment with the SecuRoPS framework for secure and ethical social robot deployment. Thirty-five students and staff interacted with the robot and provided structured feedback on safety, privacy, usability, accessibility, and transparency. The results show generally positive perceptions of physical safety, data protection, and ethical behavior, while also highlighting challenges related to accessibility, inclusiveness, and dynamic interaction. Beyond the empirical findings, the study demonstrates how theoretical frameworks for ethical and secure design can be implemented in real-world contexts through end-user evaluation. It also provides a public GitHub repository containing reusable templates for ARI robot applications to support reproducibility and lower the entry barrier for new researchers. By combining user perspectives with practical technical resources, this work contributes to ongoing discussions in AI and society and supports the development of trustworthy, inclusive, and ethically responsible social robots for public spaces.


Street Review: A Participatory AI-Based Framework for Assessing Streetscape Inclusivity

Mushkani, Rashid, Koseki, Shin

arXiv.org Artificial Intelligence

City streets, sidewalks, and public areas often serve as primary interaction points among diverse user groups, including residents, commuters, and visitors (Gehl, 2011). These spaces carry social, economic, and cultural significance that influences navigation and user experience (Mitrašinović & Mehta, 2021). Municipal governments and planning agencies recognize the importance of inclusive public spaces but face challenges in operationalizing inclusivity (Anttiroiko & De Jong, 2020). Traditional approaches may draw on universal design principles intended to accommodate a broad range of users, but these frameworks often take a one-size-fits-all approach that prioritizes physical accessibility over the social and cultural dimensions of public space use (Low, 2020). In multicultural cities, where multiple languages, cultures, and religious practices converge, these complexities become particularly evident (Fan et al., 2023; Litman, 2025; Salgado et al., 2021; Youngbloom et al., 2023). Research on inclusive design has provided valuable insights, but few methods combine qualitative depth with quantitative scale to understand inclusivity in urban contexts (Anttiroiko & De Jong, 2020; Mehta, 2019; Zamanifard et al., 2019). Ethnographic research and interviews offer detailed perspectives on lived experience, while computer vision and machine learning enable assessments at larger scales (Ibrahim et al., 2020). However, large-scale computational approaches often overlook intersectional dimensions (Zhu et al., 2025). This gap calls for integrated models that merge qualitative and quantitative methodologies.


Inclusive, Differentially Private Federated Learning for Clinical Data

Parampottupadam, Santhosh, Coşğun, Melih, Pati, Sarthak, Zenk, Maximilian, Roy, Saikat, Bounias, Dimitrios, Hamm, Benjamin, Sav, Sinem, Floca, Ralf, Maier-Hein, Klaus

arXiv.org Artificial Intelligence

Federated Learning (FL) offers a promising approach for training clinical AI models without centralizing sensitive patient data. However, its real-world adoption is hindered by challenges related to privacy, resource constraints, and compliance. Existing Differential Privacy (DP) approaches often apply uniform noise, which disproportionately degrades model performance, even among highly compliant institutions. In this work, we propose a novel compliance-aware FL framework that enhances DP by adaptively adjusting noise based on quantifiable client compliance scores. Additionally, we introduce a compliance scoring tool based on key healthcare and security standards to promote secure, inclusive, and equitable participation across diverse clinical settings. Extensive experiments on public datasets demonstrate that integrating under-resourced, less compliant clinics with highly regulated institutions yields accuracy improvements of up to 15% over traditional FL. This work advances FL by balancing privacy, compliance, and performance, making it a viable solution for real-world clinical workflows in global healthcare.
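The core mechanism, scaling the differential-privacy noise to each client's compliance level, can be illustrated with a minimal sketch. This is not the authors' implementation: the checklist items, the scoring formula, and the noise-multiplier range below are assumptions made purely for illustration.

```python
import numpy as np

def compliance_score(checklist: dict) -> float:
    """Fraction of satisfied compliance items (hypothetical checklist for illustration)."""
    return sum(checklist.values()) / len(checklist)

def adaptive_noise_multiplier(score: float, sigma_min: float = 0.5, sigma_max: float = 2.0) -> float:
    """Interpolate the DP noise multiplier: highly compliant clients receive less
    added noise, less compliant clients receive more (assumed range, not the paper's)."""
    return sigma_max - score * (sigma_max - sigma_min)

def privatize_update(update: np.ndarray, score: float, clip_norm: float = 1.0) -> np.ndarray:
    """Clip a client's model update and add Gaussian noise scaled by its compliance score."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = adaptive_noise_multiplier(score) * clip_norm
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

# Example: an under-resourced clinic vs. a highly regulated hospital (made-up checklists).
clinic = {"encryption_at_rest": True, "access_audit": False, "irb_approval": True, "gdpr_dpo": False}
hospital = {"encryption_at_rest": True, "access_audit": True, "irb_approval": True, "gdpr_dpo": True}
update = np.random.randn(10)
print(privatize_update(update, compliance_score(clinic)))    # noisier update
print(privatize_update(update, compliance_score(hospital)))  # less noisy update
```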


SCOR: A Framework for Responsible AI Innovation in Digital Ecosystems

Torkestani, Mohammad Saleh, Mansouri, Taha

arXiv.org Artificial Intelligence

AI-driven digital ecosystems span diverse stakeholders including technology firms, regulators, accelerators, and civil society, yet often lack cohesive ethical governance. This paper proposes a four-pillar framework (SCOR) to embed accountability, fairness, and inclusivity across such multi-actor networks. Leveraging a design science approach, we develop a Shared Ethical Charter (S), structured Co-Design and Stakeholder Engagement protocols (C), a system of Continuous Oversight and Learning (O), and Adaptive Regulatory Alignment strategies (R). Each component includes practical guidance, from lightweight modules for resource-constrained start-ups to in-depth auditing systems for larger consortia. Through illustrative vignettes in healthcare, finance, and smart city contexts, we demonstrate how the framework can harmonize organizational culture, leadership incentives, and cross-jurisdictional compliance. Our mixed-method KPI design further ensures that quantitative targets are complemented by qualitative assessments of user trust and cultural change. By uniting ethical principles with scalable operational structures, this paper offers a replicable pathway toward responsible AI innovation in complex digital ecosystems.


A Question Bank to Assess AI Inclusivity: Mapping out the Journey from Diversity Errors to Inclusion Excellence

Shams, Rifat Ara, Zowghi, Didar, Bano, Muneera

arXiv.org Artificial Intelligence

Ensuring diversity and inclusion (D&I) in artificial intelligence (AI) is crucial for mitigating biases and promoting equitable decision-making. However, existing AI risk assessment frameworks often overlook inclusivity, lacking standardized tools to measure an AI system's alignment with D&I principles. This paper introduces a structured AI inclusivity question bank, a comprehensive set of 253 questions designed to evaluate AI inclusivity across five pillars: Humans, Data, Process, System, and Governance. The development of the question bank involved an iterative, multi-source approach, incorporating insights from literature reviews, D&I guidelines, Responsible AI frameworks, and a simulated user study. The simulated evaluation, conducted with 70 AI-generated personas representing a range of AI-related roles, assessed the question bank's relevance and effectiveness for AI inclusivity across diverse roles and application domains. The findings highlight the importance of integrating D&I principles into AI development workflows and governance structures. The question bank provides an actionable tool for researchers, practitioners, and policymakers to systematically assess and enhance the inclusivity of AI systems, paving the way for more equitable and responsible AI technologies.
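A minimal sketch of how such a question bank might be represented and scored in practice is shown below. The pillar names come from the abstract; the example questions and the coverage metric are placeholders, not the paper's actual instrument or scoring method.

```python
from dataclasses import dataclass, field
from typing import Optional

PILLARS = ["Humans", "Data", "Process", "System", "Governance"]

@dataclass
class Question:
    pillar: str
    text: str
    answered_yes: Optional[bool] = None  # None = not yet assessed

@dataclass
class InclusivityAssessment:
    questions: list = field(default_factory=list)

    def coverage(self) -> dict:
        """Share of affirmatively answered questions per pillar (placeholder metric)."""
        out = {}
        for pillar in PILLARS:
            qs = [q for q in self.questions if q.pillar == pillar]
            if qs:
                out[pillar] = sum(q.answered_yes is True for q in qs) / len(qs)
        return out

# Illustrative entries only; the actual bank contains 253 curated questions.
bank = InclusivityAssessment([
    Question("Data", "Does the training data represent the full range of intended user groups?", True),
    Question("Governance", "Is there a documented process for users to contest AI decisions?", False),
])
print(bank.coverage())  # e.g. {'Data': 1.0, 'Governance': 0.0}
```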


Inclusivity of AI Speech in Healthcare: A Decade Look Back

Larasati, Retno

arXiv.org Artificial Intelligence

The integration of AI speech recognition technologies into healthcare has the potential to revolutionize clinical workflows and patient-provider communication. However, this study reveals significant gaps in inclusivity, with datasets and research disproportionately favouring high-resource languages, standardized accents, and narrow demographic groups. These biases risk perpetuating healthcare disparities, as AI systems may misinterpret speech from marginalized groups. This paper highlights the urgent need for inclusive dataset design, bias mitigation research, and policy frameworks to ensure equitable access to AI speech technologies in healthcare.


RAIL in the Wild: Operationalizing Responsible AI Evaluation Using Anthropic's Value Dataset

Verma, Sumit, Prasun, Pritam, Jaiswal, Arpit, Kumar, Pritish

arXiv.org Artificial Intelligence

As AI systems become embedded in real-world applications, ensuring they meet ethical standards is crucial. While existing AI ethics frameworks emphasize fairness, transparency, and accountability, they often lack actionable evaluation methods. This paper introduces a systematic approach using the Responsible AI Labs (RAIL) framework, which includes eight measurable dimensions to assess the normative behavior of large language models (LLMs). We apply this framework to Anthropic's "Values in the Wild" dataset, containing over 308,000 anonymized conversations with Claude and more than 3,000 annotated value expressions. Our study maps these values to RAIL dimensions, computes synthetic scores, and provides insights into the ethical behavior of LLMs in real-world use.
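The mapping-and-scoring step can be sketched as follows. The value labels, dimension names, and aggregation rule here are illustrative assumptions; the actual eight RAIL dimensions and the annotation scheme of the Values in the Wild dataset are defined by the paper and dataset.

```python
from collections import defaultdict

# Hypothetical mapping from annotated value labels to RAIL-style dimensions.
VALUE_TO_DIMENSION = {
    "honesty": "transparency",
    "user wellbeing": "safety",
    "equal treatment": "fairness",
    "professional responsibility": "accountability",
}

def synthetic_scores(annotations: list) -> dict:
    """Aggregate per-dimension scores from value annotations.
    Each annotation carries a value label and a strength in [0, 1]."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ann in annotations:
        dim = VALUE_TO_DIMENSION.get(ann["value"])
        if dim is None:
            continue  # unmapped values are simply skipped in this sketch
        totals[dim] += ann["strength"]
        counts[dim] += 1
    return {dim: totals[dim] / counts[dim] for dim in totals}

sample = [
    {"value": "honesty", "strength": 0.9},
    {"value": "equal treatment", "strength": 0.7},
    {"value": "honesty", "strength": 0.6},
]
print(synthetic_scores(sample))  # e.g. {'transparency': 0.75, 'fairness': 0.7}
```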


Negotiative Alignment: Embracing Disagreement to Achieve Fairer Outcomes -- Insights from Urban Studies

Mushkani, Rashid, Berard, Hugo, Koseki, Shin

arXiv.org Artificial Intelligence

Cities are not monolithic; they are arenas of negotiation among groups that hold varying needs, values, and experiences. Conventional methods of urban assessment -- from standardized surveys to AI-driven evaluations -- frequently rely on a single consensus metric (e.g., an average measure of inclusivity or safety). Although such aggregations simplify design decisions, they risk obscuring the distinct perspectives of marginalized populations. In this paper, we present findings from a community-centered study in Montreal involving 35 residents with diverse demographic and social identities, particularly wheelchair users, seniors, and LGBTQIA2+ individuals. Using rating and ranking tasks on 20 urban sites, we observe that disagreements are systematic rather than random, reflecting structural inequalities, differing cultural values, and personal experiences of safety and accessibility. Based on these empirical insights, we propose negotiative alignment, an AI framework that treats disagreement as an essential input to be preserved, analyzed, and addressed. Negotiative alignment builds on pluralistic models by dynamically updating stakeholder preferences through multi-agent negotiation mechanisms, ensuring no single perspective is marginalized. We outline how this framework can be integrated into urban analytics -- and other decision-making contexts -- to retain minority viewpoints, adapt to changing stakeholder concerns, and enhance fairness and accountability. The study demonstrates that preserving and engaging with disagreement, rather than striving for an artificial consensus, can produce more equitable and responsive AI-driven outcomes in urban design.
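A minimal sketch of the underlying idea, keeping per-group assessments separate and surfacing their spread instead of collapsing them into one consensus average, is given below. The group labels, ratings, and spread metric are invented for illustration and are not the study's protocol.

```python
import statistics
from collections import defaultdict

def group_summaries(ratings: list) -> dict:
    """Mean rating per identity group, kept separate rather than averaged together."""
    by_group = defaultdict(list)
    for group, score in ratings:
        by_group[group].append(score)
    return {g: statistics.mean(v) for g, v in by_group.items()}

def disagreement(summary: dict) -> float:
    """Spread between the most and least satisfied groups; a single consensus
    average would hide exactly this signal."""
    return max(summary.values()) - min(summary.values())

# Illustrative ratings of one urban site (labels and scores are made up).
site_ratings = [
    ("wheelchair users", 2.0), ("wheelchair users", 2.5),
    ("seniors", 3.5),
    ("LGBTQIA2+", 4.0), ("LGBTQIA2+", 3.5),
]
summary = group_summaries(site_ratings)
print(summary, "disagreement:", disagreement(summary))
```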


What is Ethical: AIHED Driving Humans or Human-Driven AIHED? A Conceptual Framework enabling the Ethos of AI-driven Higher education

Mahajan, Prashant

arXiv.org Artificial Intelligence

The rapid integration of Artificial Intelligence (AI) in Higher Education (HE) is transforming personalized learning, administrative automation, and decision-making. However, this progress presents a duality, as AI adoption also introduces ethical and institutional challenges, including algorithmic bias, data privacy risks, and governance inconsistencies. To address these concerns, this study introduces the Human-Driven AI in Higher Education (HD-AIHED) Framework, ensuring compliance with UNESCO and OECD ethical standards. This conceptual research employs a qualitative meta-synthesis approach, integrating qualitative and quantitative studies to identify patterns, contradictions, and gaps in AI adoption within HE. It reinterprets existing datasets through theoretical and ethical lenses to develop governance frameworks. The study applies a participatory integrated co-system, Phased Human Intelligence, SWOC analysis, and AI ethical review boards to assess AI readiness and governance strategies for universities and HE institutions. The HD-AIHED model bridges AI research gaps, addresses global real-time challenges, and provides tailored, scalable, and ethical strategies for diverse educational contexts. By emphasizing interdisciplinary collaboration among stakeholders, this study envisions AIHED as a transparent and equitable force for innovation. The HD-AIHED framework ensures AI acts as a collaborative and ethical enabler rather than a disruptive replacement for human intelligence while advocating for responsible AI implementation in HE.


Mitigating Bias in Queer Representation within Large Language Models: A Collaborative Agent Approach

Huang, Tianyi, Somasundaram, Arya

arXiv.org Artificial Intelligence

Large Language Models (LLMs) often perpetuate biases in pronoun usage, leading to misrepresentation or exclusion of queer individuals. This paper addresses the specific problem of biased pronoun usage in LLM outputs, particularly the inappropriate use of traditionally gendered pronouns ("he," "she") when inclusive language is needed to accurately represent all identities. We introduce a collaborative agent pipeline designed to mitigate these biases by analyzing and optimizing pronoun usage for inclusivity. Our multi-agent framework includes specialized agents for both bias detection and correction. Experimental evaluations using the Tango dataset, a benchmark focused on gender pronoun usage, demonstrate that our approach significantly improves inclusive pronoun classification, achieving a 32.6 percentage point increase over GPT-4o in correctly disagreeing with inappropriate traditionally gendered pronouns $(\chi^2 = 38.57, p < 0.0001)$. These results underscore the potential of agent-driven frameworks in enhancing fairness and inclusivity in AI-generated content, demonstrating their efficacy in reducing biases and promoting socially responsible AI.
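The detection-then-correction structure of the pipeline can be sketched with simple rule-based stand-ins. The paper uses collaborative LLM agents, so the word lists and substitution rules below are illustrative placeholders only, not the authors' method.

```python
import re

# Traditionally gendered pronouns mapped to inclusive alternatives. Note that
# "her" is ambiguous (object vs. possessive); a rule-based stand-in cannot
# resolve this, which is part of the motivation for using LLM agents instead.
GENDERED = {"he": "they", "she": "they", "him": "them", "her": "them",
            "his": "their", "hers": "theirs"}

def detect_bias(text: str) -> list:
    """Detection-agent stand-in: flag traditionally gendered pronouns in the text."""
    return [w for w in re.findall(r"\b\w+\b", text.lower()) if w in GENDERED]

def correct_bias(text: str) -> str:
    """Correction-agent stand-in: rewrite flagged pronouns to inclusive forms."""
    pattern = r"\b(" + "|".join(GENDERED) + r")\b"
    def repl(match):
        word = match.group(0)
        fixed = GENDERED[word.lower()]
        return fixed.capitalize() if word[0].isupper() else fixed
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

sentence = "Ask the new engineer about his badge."
if detect_bias(sentence):
    print(correct_bias(sentence))  # -> "Ask the new engineer about their badge."
```

Naive substitution of this kind also ignores verb agreement and referent context, which is another reason the pipeline relies on agents that analyze the sentence rather than fixed rules.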