HAPI: A Model for Learning Robot Facial Expressions from Human Preferences
Yang, Dongsheng, Liu, Qianying, Sato, Wataru, Minato, Takashi, Liu, Chaoran, Nishida, Shin'ya
Automatic robotic facial expression generation is crucial for human-robot interaction, as handcrafted methods based on fixed joint configurations often yield rigid and unnatural behaviors. Although recent automated techniques reduce the need for manual tuning, they tend to fall short by not adequately bridging the gap between human preferences and model predictions, resulting in a deficiency of nuanced and realistic expressions due to limited degrees of freedom and insufficient perceptual integration. In this work, we propose a novel learning-to-rank framework that leverages human feedback to address this discrepancy and enhance the expressiveness of robotic faces. Specifically, we conduct pairwise comparison annotations to collect human preference data and develop the Human Affective Pairwise Impressions (HAPI) model, a Siamese RankNet-based approach that refines expression evaluation. Results obtained via Bayesian Optimization and an online expression survey on a 35-DOF android platform demonstrate that our approach produces significantly more realistic and socially resonant expressions of Anger, Happiness, and Surprise than those generated by baseline and expert-designed methods. This confirms that our framework effectively bridges the gap between human preferences and model predictions while robustly aligning robotic expression generation with human affective responses.
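The abstract does not give implementation details, but the core Siamese RankNet idea it describes can be sketched as below. This is a minimal illustration, not the authors' code: it assumes each facial expression is represented as a fixed-length feature vector (sized here to 35 purely to echo the android's 35 DOFs), and the class name `HAPIRankNet`, the hidden width, and the training loop are hypothetical.

```python
import torch
import torch.nn as nn

class HAPIRankNet(nn.Module):
    """Sketch of a Siamese RankNet: a single shared scorer maps each
    expression vector to a scalar preference score, and pairs are
    compared via the difference of their scores."""

    def __init__(self, input_dim: int = 35, hidden_dim: int = 64):
        super().__init__()
        # Shared scoring branch applied to both items of a pair (Siamese weights).
        self.scorer = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, expr_a: torch.Tensor, expr_b: torch.Tensor) -> torch.Tensor:
        # RankNet models P(a preferred over b) = sigmoid(s_a - s_b);
        # we return the raw score difference (the logit).
        return self.scorer(expr_a) - self.scorer(expr_b)


# One training step on a batch of pairwise annotations:
# label = 1 if annotators preferred expression A, 0 if they preferred B.
model = HAPIRankNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

expr_a = torch.randn(16, 35)          # placeholder batch of expression features
expr_b = torch.randn(16, 35)
label = torch.randint(0, 2, (16, 1)).float()

optimizer.zero_grad()
logit_diff = model(expr_a, expr_b)
loss = loss_fn(logit_diff, label)     # cross-entropy on the sigmoid of the score difference
loss.backward()
optimizer.step()
```

A trained scorer of this kind could then serve as the objective for the Bayesian Optimization step mentioned in the abstract, searching over the android's joint configurations for expressions that maximize the predicted human preference score.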
arXiv.org Artificial Intelligence
Mar-21-2025
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Health & Medicine (0.46)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks
- Deep Learning (0.47)
- Robots (1.00)
- Vision > Face Recognition (1.00)