Lohan, Katrin
Proceedings of the AI-HRI Symposium at AAAI-FSS 2020
Bagchi, Shelly, Wilson, Jason R., Ahmad, Muneeb I., Dondrup, Christian, Han, Zhao, Hart, Justin W., Leonetti, Matteo, Lohan, Katrin, Mead, Ross, Senft, Emmanuel, Sinapov, Jivko, Zimmerman, Megan L.
The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium has been a successful venue for discussion and collaboration since 2014. In that time, the related topic of trust in robotics has grown rapidly, with major research efforts at universities and laboratories across the world. Indeed, many past participants in AI-HRI have been or are now involved in research into trust in HRI. While trust has no consensus definition, it is regularly associated with predictability, reliability, inciting confidence, and meeting expectations. Furthermore, it is generally believed that trust is crucial for the adoption of both AI and robotics, particularly when transitioning technologies from the lab to industrial, social, and consumer applications. However, how does trust apply to the specific situations we encounter in the AI-HRI sphere? Is the notion of trust in AI the same as that in HRI? We see a growing need for research directly at the intersection of AI and HRI, a need this symposium serves. Over the course of the two-day meeting, we propose to create a collaborative forum for discussion of current efforts in trust for AI-HRI, with a sub-session focused on the related topic of explainable AI (XAI) for HRI.
Trust and Cognitive Load During Human-Robot Interaction
Ahmad, Muneeb Imtiaz, Bernotat, Jasmin, Lohan, Katrin, Eyssel, Friederike
This paper presents an exploratory study to understand the relationship between a human's cognitive load, trust, and anthropomorphism during human-robot interaction. To investigate this relationship, we created a "Matching the Pair" game that participants could play collaboratively with one of two robot types, Husky or Pepper. The goal was to understand whether humans would trust the robot as a teammate in a game-playing situation that demanded a high level of cognitive load. Using a humanoid versus a technical robot, we also investigated the impact of physical anthropomorphism, and we furthermore tested the impact of robot error rate on subsequent judgments and behavior. Our results showed an inversely proportional relationship between trust and cognitive load, suggesting that as participants' cognitive load increased, their ratings of trust decreased. We also found a three-way interaction between robot type, error rate, and participants' ratings of trust. Participants perceived Pepper to be more trustworthy than the Husky robot after playing the game with both robots under the high error-rate condition. On the contrary, Husky was perceived as more trustworthy than Pepper when it was depicted as featuring a low error rate. These results are interesting and call for further investigation of the impact of physical anthropomorphism in combination with variable robot error rates.
Spotting Social Interaction by Using the Robot Energy Consumption
Lohan, Katrin (Heriot-Watt University, Edinburgh) | Deshmukh, Amol (Heriot-Watt University, Edinburgh) | Lim, Mei Yii (Heriot-Watt University, Edinburgh) | Aylett, Ruth (Heriot-Watt University, Edinburgh)
A study of long-term interaction with the robot embodiment of the companion called Sarah was conducted during the summer of 2012. The aim of the study was to examine the long-term implications of placing the robot embodiment in a natural setting. The robot ran continuously and interacted with 5 participants over 3 weeks in an office environment.