Sense and Sensibility: What makes a social robot convincing to high-school students?

Gonzalez-Oliveras, Pablo, Engwall, Olov, Majlesi, Ali Reza

arXiv.org Artificial Intelligence 

Abstract -- This study with 40 high-school students demonstrates the strong influence of a social educational robot on students' decision-making for a set of eight true-false questions on electric circuits, for which the theory had been covered in the students' courses. The robot argued for the correct answer on six questions and the wrong answer on two, and 75% of the students were persuaded by the robot to perform beyond their expected capacity: positively when the robot was correct and negatively when it was wrong. Students with more experience of using large language models were even more likely to be influenced by the robot's stance, in particular for the two easiest questions, on which the robot was wrong, suggesting that familiarity with AI can increase susceptibility to misinformation from AI. We further examined how three levels of portrayed robot certainty, displayed through semantics, prosody and facial signals, affected how the students aligned with the robot's answer on specific questions and how convincing they perceived the robot to be on those questions. The students aligned with the robot's answers in 94.4% of the cases when the robot was portrayed as Certain, 82.6% when it was Neutral and 71.4% when it was Uncertain. Alignment was thus high across all conditions, highlighting students' general readiness to accept the robot's stance, but alignment in the Uncertain condition was significantly lower than in the Certain condition. Post-test questionnaire answers further show that students found the robot most convincing when it was portrayed as Certain. These findings highlight the need for educational robots to adjust their display of certainty to the reliability of the information they convey, in order to promote students' critical thinking and reduce undue influence.
Educational robots are becoming more common and hold significant potential in, e.g., STEM (science, technology, engineering and mathematics) education [46, 69, 17], offering students realistic and natural interactions, not least by employing Large Language Models (LLMs), as demonstrated in several recent studies [41, 68, 67]. However, it is also well known that while the linguistic proficiency of LLMs is often astonishing, their factual "knowledge" in STEM subjects is flawed, and incorrect statements occur frequently [34, 60]. Since robots can exert strong informational social influence [38, 24, 25, 55, 56] and students align with a robot's views to a large extent [27], both the positive and the negative effects of learning with a social robot need to be considered: students need to apply critical thinking to decide whether to accept the robot's propositions [63], and educators need to understand which students are most at risk of being misled by a robot presenting incorrect STEM facts, so that they can provide timely support.