Increasing Transparency of Reinforcement Learning using Shielding for Human Preferences and Explanations
Angelopoulos, Georgios, Mangiacapra, Luigi, Rossi, Alessandra, Di Napoli, Claudia, Rossi, Silvia
The adoption of Reinforcement Learning (RL) in several human-centred applications provides robots with autonomous decision-making capabilities and adaptability based on observations of the operating environment. In such scenarios, however, the learning process can make robots' behaviours unclear and unpredictable to humans, thus preventing a smooth and effective Human-Robot Interaction (HRI). As a consequence, it becomes crucial to avoid robots performing actions that are unclear to the user. In this work, we investigate whether including human preferences in RL (concerning the actions the robot performs during learning) improves the transparency of a robot's behaviours. For this purpose, a shielding mechanism is added to the RL algorithm to incorporate human preferences and to monitor the learning agent's decisions. We carried out a within-subjects study involving 26 participants to evaluate the robot's transparency in terms of Legibility, Predictability, and Expectability in different settings. Results indicate that considering human preferences during learning improves Legibility with respect to providing only Explanations, and that combining human preferences with explanations elucidating the rationale behind the robot's decisions further amplifies transparency. Results also confirm that an increase in transparency leads to an increase in the safety, comfort, and reliability of the robot. These findings show the importance of transparency during learning and suggest a paradigm for robotic applications with a human in the loop.
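As a rough illustration of the general idea (not the authors' implementation), the sketch below shows a preference-based shield wrapped around epsilon-greedy action selection in tabular Q-learning: exploration and exploitation are restricted to human-preferred actions when any are available. The names shielded_action, q_values, and preferred are hypothetical.

# Minimal, hypothetical sketch of preference-based shielding around
# epsilon-greedy action selection in tabular Q-learning; 'q_values' and
# 'preferred' are assumed inputs, not part of the paper's code.
import random

def shielded_action(q_values, actions, preferred, epsilon=0.1):
    # Restrict the choice to human-preferred actions when any are available;
    # otherwise fall back to the full action set.
    candidates = [a for a in actions if a in preferred] or list(actions)
    if random.random() < epsilon:
        return random.choice(candidates)  # explore, but only within the shield
    # Exploit: highest-valued action among the shielded candidates.
    return max(candidates, key=lambda a: q_values.get(a, 0.0))

# Toy usage example
q = {"wave": 0.4, "approach": 0.9, "spin": 0.7}
print(shielded_action(q, ["wave", "approach", "spin"], preferred={"wave", "approach"}))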
Common (good) practices measuring trust in HRI
Holthaus, Patrick, Rossi, Alessandra
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives. It is, therefore, understandable that the literature of the last few decades has focused on measuring how much people trust robots -- and, more generally, any agent -- to foster such trust in these technologies. Researchers have been exploring how people trust robots in different ways, such as measuring trust in human-robot interactions (HRI) based on textual descriptions or images without any physical contact, or during and after interacting with the technology. Nevertheless, trust is a complex behaviour that is affected by, and depends on, several factors, including those related to the interacting agents (e.g. humans, robots, pets), the agent itself (e.g. capabilities, reliability), the context (e.g. task), and the environment (e.g. public spaces vs private spaces vs working spaces). In general, most roboticists agree that insufficient levels of trust lead to a risk of disengagement, while over-trust in technology can cause over-reliance and inherent dangers, for example, in emergency situations. It is, therefore, very important that the research community has access to reliable methods to measure people's trust in robots and technology. In this position paper, we outline current methods and their strengths, identify (some) weakly covered aspects, and discuss the potential for covering a more comprehensive set of factors influencing trust in HRI.
SCRITA 2023: Trust, Acceptance and Social Cues in Human-Robot Interaction
Rossi, Alessandra, Holthaus, Patrick, Lakatos, Gabriella, Moros, Sílvia, Riches, Lewis
This workshop focused on identifying the challenges and dynamics between people and robots to foster short interactions and long-lasting relationships in different fields, from educational, service, collaborative, companion, care-home and medical robotics. To this end, the workshop facilitated a discussion about people's trust towards robots "in the field", inviting participants to contribute their past experiences and lessons learnt.
Current and Future Challenges in Humanoid Robotics -- An Empirical Investigation
Paetzel-Prüsmann, Maike, Rossi, Alessandra, Keijsers, Merel
The goal of RoboCup is to make research in the area of robotics measurable over time, and to grow a community that works together to solve increasingly difficult challenges over the years. The most ambitious of these challenges is to be able to play against the human world champions in soccer in 2050. To better understand what members of the RoboCup community believe to be the state of the art and the main challenges in the next decade and towards the 2050 game, we developed a survey and distributed it to community members of different experience levels and backgrounds. We present data from 39 responses. Results highlight that locomotion, awareness and decision-making, and robustness of robots are among the topics considered of high importance for the community, while human-robot interaction and natural language processing and generation are rated low in importance and difficulty.
The Road to a Successful HRI: AI, Trust and ethicS-TRAITS
Andriella, Antonio, Rossi, Alessandra, Rossi, Silvia, van Maris, Anouk
The aim of this workshop is to give researchers from academia and industry the possibility to discuss the inter- and multi-disciplinary nature of the relationships between people and robots towards effective and long-lasting collaborations. This workshop will provide a forum for the HRI and robotics communities to explore successful human-robot interaction (HRI) and to analyse the different aspects of HRI that impact its success. Particular focus is given to the AI algorithms required to implement autonomous interactions, and to the factors that enhance, undermine, or recover humans' trust in robots. Finally, potential ethical and legal concerns, and how they can be addressed, will be considered. Website: https://sites.google.com/view/traits-hri
Bold Hearts Team Description for RoboCup 2019 (Humanoid Kid Size League)
Scheunemann, Marcus M., van Dijk, Sander G., Miko, Rebecca, Barry, Daniel, Evans, George M., Rossi, Alessandra, Polani, Daniel
We participated in the RoboCup 2018 competition in Montreal with our newly developed BoldBot, based on the Darwin-OP and mostly self-printed custom parts. This paper covers the lessons learnt from that competition and further developments for the RoboCup 2019 competition. Firstly, we briefly introduce the team along with an overview of past achievements. We then present a simple, standalone 2D simulator we use to simplify the entry for new members by making basic RoboCup concepts quickly accessible. We describe our semantic-segmentation approach for vision used in the 2018 competition, which replaced the lookup-table (LUT) implementation we had before. We also discuss the extra structural support we plan to add to the printed parts of the BoldBot and our transition to ROS 2 as our new middleware. Lastly, we present a collection of our team's open-source contributions.