Do we trust artificial intelligence agents to mediate conflict? Not entirely: New study says we'll listen to virtual agents except when the going gets tough

Researchers from USC and the University of Denver created a simulation in which a three-person team was supported by a virtual agent avatar on screen during a mission designed to ensure failure and elicit conflict. The study examined virtual agents as potential mediators for improving team collaboration during conflict. But in the heat of the moment, will we listen to virtual agents?

Some of the researchers who contributed to this study (Gale Lucas and Jonathan Gratch of the USC Viterbi School of Engineering and the USC Institute for Creative Technologies) had previously found that one-on-one human interactions with a virtual agent therapist yielded more confessions. In this study, however, "Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing," team members were less likely to engage with a male virtual agent named "Chris" when conflict arose. Participants did not physically accost the device (as humans have been seen attacking robots in viral social media posts), but they became less engaged and less likely to listen to the virtual agent's input once failure set in and conflict arose among team members.

The study was conducted in a military academy environment, where 27 scenarios were engineered to test how a team that included a virtual agent would react to failure and the ensuing conflict.