
Collaborating Authors

 Reardon, Christopher


EVOLVE: Emotion and Visual Output Learning via LLM Evaluation

arXiv.org Artificial Intelligence

While the ability to effectively communicate and retain user attention for longer periods of time is important in many HRI settings, eliciting an impression of empathy through nonverbal behavior can be critical to acceptance of and trust in social robots [1]. Through a comprehensive survey over several LLM-based actions, [2] discovered that social robots elicited higher expectations for more nuanced nonverbal cues including a breadth of behavior types. Conveying affects that are aligned with the user's emotional state can be critical in building trust around experienced empathy and personalization from a social robot [3]. Multi-modal feedback has profound impacts on successful empathetic interaction, as notions inferred from robot actions can be understood much more easily with systematic actions taken in alignment with an emotional response [2], [4].

Additionally, this kind of subdivided action schema can be used to evaluate many attributes towards promoting empathetic responses, including tone of voice, nonverbal cues, and facial expressions [6]. However, atomic actions with limited sentiments might not be sufficient to accommodate complex emotion in the user. This work investigates the possibility of a more open-ended response selection by leveraging an LLM's internal domain knowledge of emojis and other affective imagery capable of representing emotional states. We also employ recent advances in vision-language models with an image or camera input, as suggested in [2] and [4]. Additionally, we evaluate both motion and color [7] pattern elicitation through atomic action selection [5], [6]. We selected these decision categories based on a theoretical robot design that could contain an LED strip…
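
To make the selection scheme concrete, here is a minimal sketch (ours, not the authors' code) of how an LLM could be prompted to pick an emoji together with atomic motion and LED color actions for an estimated user emotion. The action lists, the `query_llm` stub, and the JSON reply format are assumptions for illustration only.

```python
# Minimal sketch of open-ended affective response selection: an LLM maps an
# estimated user emotion to an emoji, an atomic motion, and an LED pattern.
# `query_llm` is a hypothetical stand-in for whatever LLM backend is used.
import json
from dataclasses import dataclass

MOTIONS = ["nod", "tilt", "bounce", "lean_back", "idle"]                    # assumed atomic motions
LED_PATTERNS = ["soft_pulse", "rainbow_sweep", "steady_warm", "slow_fade"]  # assumed LED patterns

@dataclass
class AffectiveResponse:
    emoji: str
    motion: str
    led_pattern: str

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    # Canned reply so the sketch runs end to end without a model.
    return json.dumps({"emoji": "🙂", "motion": "nod", "led_pattern": "steady_warm"})

def select_response(user_utterance: str, estimated_emotion: str) -> AffectiveResponse:
    prompt = (
        f"A user said: {user_utterance!r}\n"
        f"Their estimated emotional state is: {estimated_emotion}.\n"
        "Reply with JSON containing an empathetic 'emoji' (any Unicode emoji), "
        f"a 'motion' chosen from {MOTIONS}, and an 'led_pattern' chosen from {LED_PATTERNS}."
    )
    reply = json.loads(query_llm(prompt))
    # Fall back to neutral atomic actions if the LLM strays outside the categories.
    motion = reply.get("motion") if reply.get("motion") in MOTIONS else "idle"
    led = reply.get("led_pattern") if reply.get("led_pattern") in LED_PATTERNS else "soft_pulse"
    return AffectiveResponse(reply.get("emoji", "🙂"), motion, led)

if __name__ == "__main__":
    print(select_response("I finally passed my exam!", "joy"))
```

The emoji field is left open-ended while motion and color remain constrained to discrete categories, mirroring the split between open-ended visual output and atomic action selection described above.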


Compositional Zero-Shot Learning for Attribute-Based Object Reference in Human-Robot Interaction

arXiv.org Artificial Intelligence

Language-enabled robots have been widely studied in recent years to enable natural human-robot interaction and teaming in various real-world applications. Such robots must be able to comprehend referring expressions, identifying a particular object from visual perception using a set of referring attributes extracted from natural language. However, visual observations of an object may not be available when it is referred to, and the number of objects and attributes may be unbounded in open worlds. To address these challenges, we implement an attribute-based compositional zero-shot learning method that uses a list of attributes to perform referring expression comprehension in open worlds. We evaluate the approach on two datasets, MIT-States and Clothing 16K. Preliminary experimental results show that the implemented approach allows a robot to correctly identify objects referred to by human commands.
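
As a rough illustration of the attribute-based compositional idea (not the paper's actual model), the sketch below composes attribute and object embeddings and scores the compositions against an image embedding, so that a pair never seen during training can still be ranked. The vocabularies, the sum-based composition, and the random embeddings are placeholders standing in for trained encoders.

```python
# Minimal sketch of attribute-object compositional zero-shot scoring:
# compose attribute and object embeddings, then pick the pair whose
# composition best matches an image embedding (e.g., of a referred object).
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

ATTRIBUTES = ["red", "wet", "folded"]   # example attribute vocabulary
OBJECTS = ["shirt", "dog", "towel"]     # example object vocabulary

attr_emb = {a: rng.normal(size=DIM) for a in ATTRIBUTES}
obj_emb = {o: rng.normal(size=DIM) for o in OBJECTS}

def compose(attr: str, obj: str) -> np.ndarray:
    """Compose an attribute-object pair into one embedding (simple sum here)."""
    v = attr_emb[attr] + obj_emb[obj]
    return v / np.linalg.norm(v)

def score_image(image_emb: np.ndarray) -> tuple[str, str]:
    """Return the attribute-object pair whose composition best matches the image."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    pairs = [(a, o) for a in ATTRIBUTES for o in OBJECTS]
    return max(pairs, key=lambda p: float(compose(*p) @ image_emb))

if __name__ == "__main__":
    # Stand-in for a visual feature of a referred object, e.g. from a robot camera.
    fake_image = attr_emb["wet"] + obj_emb["dog"] + 0.1 * rng.normal(size=DIM)
    print(score_image(fake_image))  # expected: ('wet', 'dog')
```

Because pairs are scored from their parts, the candidate set can grow with new attribute-object combinations without retraining on every composition, which is the property the abstract relies on for open worlds.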


Enabling Intuitive Human-Robot Teaming Using Augmented Reality and Gesture Control

arXiv.org Artificial Intelligence

Human-robot teaming offers great potential because of the opportunities to combine strengths of heterogeneous agents. However, one of the critical challenges in realizing an effective human-robot team is efficient information exchange, both from the human to the robot and from the robot to the human. In this work, we present and analyze an augmented reality-enabled, gesture-based system that supports intuitive human-robot teaming through improved information exchange. Our proposed system requires no external instrumentation aside from human-wearable devices and shows promise of real-world applicability for service-oriented missions. Additionally, we present preliminary results from a pilot study with human participants, and highlight lessons learned and open research questions that may help direct future development, fielding, and experimentation of autonomous HRI systems.
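
The sketch below is a hedged illustration of the two-way exchange the abstract describes, not the study's system: gestures recognized from a wearable are mapped to robot commands, and robot status is pushed back as an AR annotation. The gesture names, command set, and transport stubs are all hypothetical.

```python
# Minimal sketch of gesture-to-command dispatch with AR feedback.
from dataclasses import dataclass

GESTURE_TO_COMMAND = {          # assumed wearable gesture vocabulary
    "point_forward": "navigate_to_pointed_location",
    "palm_up": "halt",
    "wave": "return_to_operator",
}

@dataclass
class ARMessage:
    label: str       # text shown in the operator's headset
    position: tuple  # 3D anchor for the annotation, in a shared map frame

def send_robot_command(command: str) -> str:
    """Stand-in for the robot-side interface (e.g., an action client)."""
    print(f"[robot] executing: {command}")
    return "acknowledged"

def display_in_headset(msg: ARMessage) -> None:
    """Stand-in for rendering an annotation in the AR headset."""
    print(f"[AR] {msg.label} @ {msg.position}")

def handle_gesture(gesture: str, target_xyz: tuple = (0.0, 0.0, 0.0)) -> None:
    """Map one recognized gesture to a robot command and echo status back in AR."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is None:
        display_in_headset(ARMessage("Unrecognized gesture", target_xyz))
        return
    status = send_robot_command(command)
    display_in_headset(ARMessage(f"{command}: {status}", target_xyz))

if __name__ == "__main__":
    handle_gesture("point_forward", target_xyz=(2.5, 1.0, 0.0))
```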