Trash Talk Hurts Performance, Even When It Comes From a Robot

#artificialintelligence

Trash talking has a long and colorful history of flustering game opponents, and researchers at Carnegie Mellon University have now demonstrated that discouraging words can be perturbing even when uttered by a robot. The trash talk in the study was decidedly mild, with utterances such as "I have to say you are a terrible player" and "Over the course of the game your playing has become confused." Even so, people who played a game with the robot, a commercially available humanoid known as Pepper, performed worse when the robot discouraged them and better when it encouraged them. "This is one of the first studies of human-robot interaction in an environment where they are not cooperating," said Fei Fang, an assistant professor in CMU's Institute for Software Research.


A trash talking robot hurling 'mild insults' was able to put humans off their stride

Daily Mail - Science & tech

Trash talk has been part of sport and human competition for as long as people have been competitive, but now robots are getting in on the game. Researchers from Carnegie Mellon University, in Pittsburgh, Pennsylvania, programmed a robot called Pepper to use mild insults such as 'you are a terrible player' and 'your playing has become confused'. It would then use these insults while challenging a human to a game called 'Guards and Treasures' that is designed to test rationality. Even though the robot used very mild language, the human players' performance got worse while they were being insulted, according to lead author Aaron M. Roth. The team say tests like this could help work out how humans will respond in future if a robot assistant disagrees with a command, such as over whether to buy healthy or unhealthy food.


The Impact of Humanoid Affect Expression on Human Behavior in a Game-Theoretic Setting

arXiv.org Artificial Intelligence

With the rapid development of robots and other intelligent and autonomous agents, how a human can be influenced by a robot's expressed mood when making decisions becomes a crucial question in human-robot interaction. In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human's decision-making behavior; (2) how and to what extent the human will be influenced in a game-theoretic setting. More specifically, we create an NLP model to generate sentences that adhere to a specific affective expression profile. We use these sentences for a humanoid robot as it plays a Stackelberg security game against a human, and we investigate the behavioral model of the human player.
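The Stackelberg security game the abstract mentions has a simple structure: a defender publicly commits to a randomized coverage strategy, and an attacker observes that commitment and best-responds. Below is a minimal two-target sketch of that idea; the target names, the payoff numbers, and the coarse grid search over the commitment are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of a two-target Stackelberg security game, loosely in the
# spirit of the "Guards and Treasures" setup described above. All payoff
# values here are made up for illustration.

# Per-target payoffs:
# (defender if covered, defender if uncovered,
#  attacker if covered, attacker if uncovered)
TARGETS = {
    "treasure_A": (2.0, -3.0, -2.0, 4.0),
    "treasure_B": (1.0, -1.0, -1.0, 2.0),
}

def best_defender_commitment(steps: int = 100):
    """Grid-search the defender's coverage probability on treasure_A.

    The defender commits to covering treasure_A with probability p (and
    treasure_B with 1 - p). The attacker observes p and attacks the target
    with the highest expected attacker payoff; the function returns the p
    that maximizes the defender's expected payoff against that response.
    """
    best_p, best_value = 0.0, float("-inf")
    for i in range(steps + 1):
        p = i / steps
        coverage = {"treasure_A": p, "treasure_B": 1.0 - p}

        def attacker_payoff(target):
            _, _, a_cov, a_unc = TARGETS[target]
            c = coverage[target]
            return c * a_cov + (1.0 - c) * a_unc

        # Attacker best-responds to the announced coverage.
        attacked = max(TARGETS, key=attacker_payoff)
        d_cov, d_unc, _, _ = TARGETS[attacked]
        c = coverage[attacked]
        value = c * d_cov + (1.0 - c) * d_unc
        if value > best_value:
            best_p, best_value = p, value
    return best_p, best_value

print(best_defender_commitment())
```

The distinctly Stackelberg feature is the order of play: the defender's randomized commitment comes first and is observable, so the human attacker's rationality can be measured by how well their target choice tracks the best response.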


Sticks and stones may break your bones but robot taunts will hurt you – in games at least

#artificialintelligence

People need no help doing violence to machines; reports of humans abusing machines have become a common occurrence. But it turns out machines can make matters worse for us too. With insults, they can get under our skin and rattle us, making us behave irrationally – not that humans really need much help going off the rails. A group of computer boffins from Carnegie Mellon University recently found that when a robot playing a game against a human opponent offers discouraging comments, the bot's words influence how the person performs. In a paper titled "A Robot's Expressive Language Affects Human Strategy and Perceptions in a Competitive Game," distributed through arXiv, CMU researchers Aaron Roth, Samantha Reig, Umang Bhatt, Jonathan Shulgach, Tamara Amin, Afsaneh Doryab, Fei Fang, and Manuela Veloso explore how comments from a Pepper humanoid robot affected human opponents in a Stackelberg security game called The Guards and Treasures.


Trust and Cooperation in Human-Robot Decision Making

AAAI Conferences

Trust plays a key role in social interactions, particularly when the decisions we make depend on the people we face. In this paper, we use game theory to explore whether a person's decisions are influenced by the type of agent they interact with: human or robot. By adopting a coin entrustment game, we quantitatively measure trust and cooperation to see if such phenomena emerge differently when a person believes they are playing against a robot rather than another human. We found that while people cooperate with humans and robots at a similar rate, they grow to trust robots more completely than humans. As a possible explanation for these differences, our survey results suggest that participants perceive humans as having a faculty for feelings and sympathy, whereas they perceive robots as more precise and reliable.
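The coin entrustment game gives a quantitative handle on trust and cooperation. As a rough illustration, here is a minimal sketch of one round under standard investment-game assumptions; the multiplier of 3, the function name, and the payoff bookkeeping are assumptions for illustration, not the paper's exact protocol.

```python
# One round of a generic coin entrustment (trust) game. Trust is read off as
# the fraction of the endowment entrusted; cooperation as the fraction of the
# multiplied pot returned. Rules here follow the common investment-game
# convention and may differ from the paper's.

def entrustment_round(endowment: int, entrusted: int, returned: int,
                      multiplier: int = 3):
    """Score one round: the truster sends `entrusted` coins, which are
    multiplied before reaching the trustee, who sends `returned` back."""
    assert 0 <= entrusted <= endowment
    pot = entrusted * multiplier
    assert 0 <= returned <= pot
    truster_payoff = endowment - entrusted + returned
    trustee_payoff = pot - returned
    trust = entrusted / endowment
    cooperation = returned / pot if pot else 0.0
    return truster_payoff, trustee_payoff, trust, cooperation

# Example: entrust 6 of 10 coins; the trustee returns half the tripled pot.
print(entrustment_round(endowment=10, entrusted=6, returned=9))
# -> (13, 9, 0.6, 0.5)
```

Under this reading, the paper's finding that people "trust robots more completely" would show up as a higher entrusted fraction when the counterpart is believed to be a robot, even while the returned fraction (cooperation) stays similar.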