Collaborating Authors

Korpan, Raj


Hybrid Voting-Based Task Assignment in Role-Playing Games

arXiv.org Artificial Intelligence

In role-playing games (RPGs), the level of immersion is critical, especially when an in-game agent conveys tasks, hints, or ideas to the player. For an agent to accurately interpret the player's emotional state and contextual nuances, a foundational level of understanding is required, which can be achieved using a Large Language Model (LLM). Maintaining the LLM's focus across multiple context changes, however, necessitates a more robust approach, such as integrating the LLM with a dedicated task allocation model to guide its performance throughout gameplay. In response to this need, we introduce Voting-Based Task Assignment (VBTA), a framework inspired by human reasoning in task allocation and completion. VBTA assigns capability profiles to agents and task descriptions to tasks, then generates a suitability matrix that quantifies the alignment between an agent's abilities and a task's requirements. Leveraging six distinct voting methods, a pre-trained LLM, and conflict-based search (CBS) for path planning, VBTA efficiently identifies and assigns the most suitable agent to each task. While existing approaches focus on generating individual aspects of gameplay, such as single quests or combat encounters, our method's generalizable nature shows promise for generating both unique combat encounters and narratives.
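
The exact profile encoding, the six voting rules, and the CBS integration are described in the paper itself; the sketch below is only a rough illustration of the core idea, assuming capability profiles and task requirements are plain numeric vectors and using a single requirement-weighted Borda vote as a stand-in for the full set of voting methods.

```python
import numpy as np

def assign_tasks(agent_profiles: np.ndarray, task_requirements: np.ndarray) -> list[int]:
    """For each task, return the index of the agent that wins a weighted Borda vote."""
    # Each capability dimension ranks the agents (0 = weakest, n_agents - 1 = strongest).
    ranks = agent_profiles.argsort(axis=0).argsort(axis=0)
    assignments = []
    for requirement in task_requirements:
        # A dimension's vote counts in proportion to how strongly the task requires it.
        weighted_points = (ranks * requirement).sum(axis=1)
        assignments.append(int(weighted_points.argmax()))
    return assignments

# Toy example: two agents, three capability dimensions, two tasks.
agents = np.array([[0.9, 0.1, 0.5],    # strong on the first capability
                   [0.2, 0.8, 0.6]])   # strong on the second and third
tasks = np.array([[1.0, 0.1, 0.2],     # mostly needs the first capability
                  [0.1, 1.0, 0.2]])    # mostly needs the second
print(assign_tasks(agents, tasks))     # -> [0, 1]
```

Here each capability dimension acts as a voter that ranks the agents, and its ballot is weighted by how strongly the task requires that capability; the paper's framework aggregates several such voting rules rather than this single one.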


Encoding Inequity: Examining Demographic Bias in LLM-Driven Robot Caregiving

arXiv.org Artificial Intelligence

As robots take on caregiving roles, ensuring equitable and unbiased interactions with diverse populations is critical. Although Large Language Models (LLMs) serve as key components in shaping robotic behavior, speech, and decision-making, these models may encode and propagate societal biases, leading to disparities in care based on demographic factors. This paper examines how LLM-generated responses shape robot caregiving characteristics and responsibilities when prompted with different demographic information related to sex, gender, sexuality, race, ethnicity, nationality, disability, and age. Findings show simplified descriptions for disability and age, lower sentiment for disability and LGBTQ+ identities, and distinct clustering patterns reinforcing stereotypes in caregiving narratives. These results emphasize the need for ethical and inclusive HRI design.
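
As a rough sketch of this kind of audit (not the paper's pipeline), the snippet below compares the sentiment of LLM-generated caregiving descriptions across identity prompts; generate_caregiving_description is a hypothetical placeholder for a call to the model under study, and NLTK's VADER stands in for whatever sentiment analysis the authors used.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

def generate_caregiving_description(identity: str) -> str:
    # Placeholder: replace with a call to the LLM being audited.
    return f"A caregiving robot assists a {identity} person with daily routines."

def sentiment_by_identity(identities: list[str]) -> dict[str, float]:
    """Compound sentiment score of the generated description for each identity prompt."""
    sia = SentimentIntensityAnalyzer()
    return {
        identity: sia.polarity_scores(generate_caregiving_description(identity))["compound"]
        for identity in identities
    }

print(sentiment_by_identity(["nondisabled", "disabled", "queer", "heterosexual"]))
```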


Dialogue with Robots: Proposals for Broadening Participation and Research in the SLIVAR Community

arXiv.org Artificial Intelligence

The ability to interact with machines using natural human language is becoming not just commonplace, but expected. The next step is not just text interfaces but speech interfaces, and not just with computers but with all machines, including robots. In this paper, we chronicle the recent history of this growing field of spoken dialogue with robots and offer the community three proposals, the first focused on education, the second on benchmarks, and the third on the modeling of language when it comes to spoken interaction with robots. The three proposals should act as white papers for any researcher to take and build upon.


Towards a Participatory and Social Justice-Oriented Measure of Human-Robot Trust

arXiv.org Artificial Intelligence

Measures of human-robot trust have proliferated across the HRI research literature, each attempting to capture the factors that impact trust despite its many dimensions. None of the previous trust measures, however, addresses the systems of inequity and structures of power present in HRI research or attempts to counteract the systematic biases and potential harms caused by HRI systems. This position paper proposes a participatory and social justice-oriented approach for the design and evaluation of a trust measure. This proposed process would iteratively co-design the trust measure with the community for whom the HRI system is being created. The process would prioritize that community's needs and unique circumstances to produce a trust measure that accurately reflects the factors that impact their trust in a robot.


Trust in Queer Human-Robot Interaction

arXiv.org Artificial Intelligence

Human-robot interaction (HRI) systems need to build trust with people of diverse identities. This position paper argues that queer (LGBTQIA+) people must be included in the design and evaluation of HRI systems to ensure their trust in and acceptance of robots. Queer people have faced discrimination and harm from artificial intelligence and robotic systems. Despite calls for increased diversity and inclusion, HRI has not systemically addressed queer issues. This paper suggests three approaches to address trust in queer HRI: diversifying human-subject pools, centering queer people in HRI studies, and contextualizing measures of trust.


VBMO: Voting-Based Multi-Objective Path Planning

arXiv.org Artificial Intelligence

This paper presents VBMO, the Voting-Based Multi-Objective path planning algorithm, which generates optimal single-objective plans, evaluates each of them with respect to the other objectives, and selects one with a voting mechanism. VBMO does not use hand-tuned weights, consider multiple objectives at every step of the search, or use an evolutionary algorithm. Instead, it considers how a plan that is optimal in one objective may perform well with respect to others. VBMO incorporates three voting mechanisms: range, Borda, and combined approval. Extensive evaluation in diverse and complex environments demonstrates the algorithm's ability to efficiently produce plans that satisfy multiple objectives.
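
As a rough illustration of the selection step only, the sketch below assumes the single-objective planners have already produced one candidate plan per objective and that a cost matrix records each plan's cost under every objective; the three rules are simple, plausible readings of range, Borda, and combined approval voting, not the paper's exact definitions.

```python
import numpy as np

def vbmo_select(costs: np.ndarray, rule: str = "borda") -> int:
    """Return the index of the plan chosen by the given voting rule (lower cost is better)."""
    if rule == "range":
        # Range voting: each objective scores plans on [0, 1], with the best cost scoring 1.
        lo, hi = costs.min(axis=0), costs.max(axis=0)
        scores = 1.0 - (costs - lo) / np.where(hi > lo, hi - lo, 1.0)
        return int(scores.sum(axis=1).argmax())
    if rule == "borda":
        # Borda count: each objective ranks the plans; a lower cost earns more points.
        points = (-costs).argsort(axis=0).argsort(axis=0)
        return int(points.sum(axis=1).argmax())
    if rule == "approval":
        # Approval-style rule: an objective approves a plan whose cost is at most the
        # median cost for that objective.
        approvals = (costs <= np.median(costs, axis=0)).sum(axis=1)
        return int(approvals.argmax())
    raise ValueError(f"unknown rule: {rule}")

# Example: 3 candidate plans evaluated under 3 objectives (e.g. length, risk, energy).
costs = np.array([[10.0, 4.0, 7.0],
                  [12.0, 2.0, 6.0],
                  [15.0, 5.0, 5.0]])
print(vbmo_select(costs, "borda"))   # -> 1, a plan that balances all three objectives
```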


Queer In AI: A Case Study in Community-Led Participatory AI

arXiv.org Artificial Intelligence

We present Queer in AI as a case study for community-led participatory design in AI. We examine how participatory design and intersectional tenets started and shaped this community's programs over the years. We discuss different challenges that emerged in the process, look at ways this organization has fallen short of operationalizing participatory and intersectional principles, and then assess the organization's impact. Queer in AI provides important lessons and insights for practitioners and theorists of participatory methods broadly through its rejection of hierarchy in favor of decentralization, success at building aid and programs by and for the queer community, and effort to change actors and institutions outside of the queer community. Finally, we theorize how communities like Queer in AI contribute to participatory design in AI more broadly by fostering cultures of participation in AI, welcoming and empowering marginalized participants, critiquing poor or exploitative participatory practices, and bringing participation to institutions outside of individual research projects. Queer in AI's work serves as a case study of grassroots activism and participatory methods within AI, demonstrating the potential of community-led participatory methods and intersectional praxis, while also providing challenges, case studies, and nuanced insights to researchers developing and using participatory methods.


MengeROS: A Crowd Simulation Tool for Autonomous Robot Navigation

AAAI Conferences

While effective navigation in large, crowded environments is essential for an autonomous robot, preliminary testing of algorithms to support it requires simulation across a broad range of crowd scenarios. Most available simulation tools provide either realistic crowds without robots or realistic robots without realistic crowds. This paper introduces MengeROS, a 2-D simulator that realistically integrates multiple robots and crowds. MengeROS provides a broad range of settings in which to test the capabilities and performance of navigation algorithms designed for large crowded environments.