Using Vision-Language Models as Proxies for Social Intelligence in Human-Robot Interaction
Fanjun Bu, Melina Tsai, Audrey Tjokro, Tapomayukh Bhattacharjee, Jorge Ortiz, Wendy Ju
Robots operating in everyday environments must decide when and whether to engage with people, yet such decisions often hinge on subtle nonverbal cues that unfold over time and are difficult to model explicitly. Drawing on a five-day Wizard-of-Oz deployment of a mobile service robot in a university cafe, we analyze how people signal interaction readiness through nonverbal behaviors and how expert wizards use these cues to guide engagement. Motivated by these observations, we propose a two-stage pipeline in which lightweight perceptual detectors (gaze shifts and proxemics) selectively trigger heavier video-based vision-language model (VLM) queries at socially meaningful moments. We evaluate this pipeline on replayed field interactions and compare two prompting strategies. Our findings suggest that selectively using VLMs as proxies for social reasoning enables socially responsive robot behavior, allowing robots to act appropriately by attending to the cues people naturally provide in real-world interactions.
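To make the two-stage design concrete, the following is a minimal Python sketch of the gating logic the abstract describes: cheap per-frame detectors for gaze shifts and proxemics decide when it is worth spending a video-based VLM query. All names, thresholds, and the prompt here (gaze_shift, within_social_zone, query_vlm, SOCIAL_ZONE_M, and so on) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two-stage engagement pipeline: cheap perceptual triggers
# (gaze shift, proxemics) gate expensive video-based VLM queries.
# All identifiers and thresholds below are hypothetical placeholders.
from collections import deque

SOCIAL_ZONE_M = 2.0    # assumed proxemic trigger distance, in meters
GAZE_SHIFT_DEG = 20.0  # assumed head-yaw change counted as a gaze shift
CLIP_LEN = 30          # frames of context sent to the VLM (~1 s at 30 fps)

def gaze_shift(prev_yaw_deg: float, yaw_deg: float) -> bool:
    """Stage-1 trigger: large change in a person's estimated head yaw."""
    return abs(yaw_deg - prev_yaw_deg) >= GAZE_SHIFT_DEG

def within_social_zone(distance_m: float) -> bool:
    """Stage-1 trigger: person has entered the robot's social zone."""
    return distance_m <= SOCIAL_ZONE_M

def query_vlm(frames: list, prompt: str) -> str:
    """Stage 2 (stub): send a short clip plus a prompt to a video VLM.

    A real system would call a hosted or local model here; we return a
    canned answer so the sketch runs end to end.
    """
    return "engage"

def engagement_step(frame: dict, history: deque, prev_yaw: float) -> str | None:
    """Run cheap detectors on every frame; query the VLM only when one fires."""
    history.append(frame)
    if gaze_shift(prev_yaw, frame["yaw_deg"]) or within_social_zone(frame["dist_m"]):
        clip = list(history)[-CLIP_LEN:]  # recent context around the trigger
        return query_vlm(clip, "Is this person ready to interact? Answer engage/wait.")
    return None  # no socially meaningful moment; skip the expensive query

# Usage: feed per-frame perception estimates through the gate.
history = deque(maxlen=CLIP_LEN)
decision = engagement_step({"yaw_deg": 5.0, "dist_m": 1.5}, history, prev_yaw=40.0)
print(decision)  # -> "engage" (both cheap triggers fire, so the VLM is consulted)
```

The point of the gate is cost: under this reading of the abstract, the VLM runs only at moments the lightweight detectors flag as socially meaningful, rather than on every frame.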
arXiv.org Artificial Intelligence
Dec-9-2025
- Country:
  - Asia
    - Japan > Shikoku > Kagawa Prefecture > Takamatsu (0.04)
    - Middle East > Iran > Ilam Province (0.04)
  - North America
    - Montserrat (0.04)
    - United States
      - Colorado > Boulder County > Boulder (0.04)
      - New Jersey > Middlesex County > New Brunswick (0.04)
      - New York > New York County > New York City (0.05)
- Genre:
  - Research Report > New Finding (1.00)
- Technology: