Expectations, Explanations, and Embodiment: Attempts at Robot Failure Recovery

Elmira Yadollahi, Fethiye Irmak Dogan, Yujing Zhang, Beatriz Nogueira, Tiago Guerreiro, Shelly Levy-Tzedek, Iolanda Leite

arXiv.org Artificial Intelligence 

Expectations critically shape how people form judgments about robots, influencing whether they view failures as minor technical glitches or deal-breaking flaws. This work explores how high and low expectations, induced through brief video priming, affect user perceptions of robot failures and the utility of explanations in HRI. We conducted two online studies (N = 600 total participants), each replicated with two robots of different embodiments, Furhat and Pepper. In the first study, grounded in expectation theory, participants were divided into two groups, one primed with positive and the other with negative expectations regarding the robot's performance, establishing distinct expectation frameworks. This validation study aimed to verify whether the videos could reliably establish low- and high-expectation profiles. In the second study, participants were primed using the validated videos and then viewed a new scenario in which the robot failed at a task. Half viewed a version in which the robot explained its failure, while the other half received no explanation. We found that explanations significantly improved user perceptions of Furhat, especially when participants were primed to have lower expectations. Explanations boosted satisfaction and enhanced the robot's perceived expressiveness, indicating that effectively communicating failure can improve user attitudes. By contrast, Pepper's explanations had minimal impact on user attitudes, suggesting that a robot's embodiment and style of interaction may determine whether explanations can successfully offset negative impressions. Together, these findings underscore the need to consider users' expectations when tailoring explanation strategies in HRI. When expectations are initially low, a cogent explanation can make the difference between dismissing a failure and appreciating the robot's transparency and effort to communicate.

Keywords: Expectations, Explanations, Explainability, Human-Robot Interaction, Priming

1. Introduction

When robots operate in human environments, user expectations play a crucial role in shaping human-robot interaction (HRI) (Lohse, 2009; Horstmann and Krämer, 2020; Dogan et al., 2025). However, there is often a mismatch between these expectations and the actual capabilities of social robots (Rosén et al., 2022), which can lead to disappointment and, consequently, diminish the quality of interactions (Olson et al., 1996; Kruglanski and Sleeth-Keppler, 2007). For instance, a user might expect robots to function as proactive and autonomous assistants, yet when robots make mistakes due to their limited abilities, this mismatch can undermine the robot's perceived trustworthiness and competence (Salem et al., 2015; Cha et al., 2015).