
Collaborating Authors: Gratch


Emotionally-Aware Agents for Dispute Resolution

Sushrita Rakshit, James Hale, Kushal Chawla, Jeanne M. Brett, Jonathan Gratch

arXiv.org Artificial Intelligence

In conflict, people use emotional expressions to shape their counterparts' thoughts, feelings, and actions. This paper explores whether automatic text emotion recognition offers insight into this influence in the context of dispute resolution. Prior work has shown the promise of such methods in negotiations; however, disputes evoke stronger emotions and different social processes. We use a large corpus of buyer-seller dispute dialogues to investigate how emotional expressions shape subjective and objective outcomes. We further demonstrate that large language models yield considerably greater explanatory power than previous methods for emotion intensity annotation and better match the decisions of human annotators. Findings support existing theoretical models of how emotional expressions contribute to conflict escalation and resolution, and suggest that agent-based systems could be useful in managing disputes by recognizing and potentially mitigating emotional escalation.

Emotional expressions serve essential social functions in human relationships. They convey one's beliefs, desires, and intentions, shaping the beliefs, desires, and intentions of interaction partners [1], [2]. People high in emotional intelligence are more successful at navigating emotional relationships [3], and there is growing interest in creating AI agents that understand and enact these social functions [4], [5]. Prior work suggests that emotionally-aware agents are suitable for a growing list of applications, including teaching people to convey emotions effectively [6], improving human-agent interaction [7], detecting and moderating toxic communication [8], and serving as methodological tools for studying human emotion [9]. This paper examines the capacity of agents to understand human emotional expressions in the context of text-based dispute resolution. Disputes arise when one party in a relationship (an individual, group, or nation) levies a claim that another party refuses to accept, thus threatening the future of the relationship [10].
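As a rough illustration of what LLM-based emotion intensity annotation of dispute dialogue might look like in practice, the sketch below asks a chat model to rate the intensity of several emotions in a single buyer-seller utterance. It assumes an OpenAI-style chat API via the `openai` Python package; the model name, prompt wording, emotion categories, and 0-4 scale are illustrative placeholders, not the paper's actual annotation protocol.

```python
# Minimal sketch (not the authors' protocol): using an LLM to annotate
# emotion intensity in a dispute-dialogue utterance. Assumes the `openai`
# package with OPENAI_API_KEY set; prompt, emotion set, model choice,
# and 0-4 scale are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EMOTIONS = ["anger", "sadness", "fear", "joy", "surprise"]

def annotate_intensity(utterance: str) -> dict[str, int]:
    """Ask the model to rate each emotion's intensity on a 0-4 scale."""
    prompt = (
        "Rate the intensity of each emotion expressed in the buyer-seller "
        "dispute message below on a scale of 0 (none) to 4 (very strong). "
        f"Emotions: {', '.join(EMOTIONS)}. Reply with one 'emotion: score' "
        f"pair per line.\n\nMessage: {utterance}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep annotations as deterministic as possible
    )
    # Parse 'emotion: score' lines, ignoring anything malformed.
    scores: dict[str, int] = {}
    for line in response.choices[0].message.content.splitlines():
        emotion, _, score = line.partition(":")
        emotion, score = emotion.strip().lower(), score.strip()
        if emotion in EMOTIONS and score.isdigit():
            scores[emotion] = int(score)
    return scores

print(annotate_intensity("I've waited three weeks and you still haven't refunded me!"))
```

In a corpus study like the one described above, per-utterance scores of this kind would then serve as predictors of subjective and objective dispute outcomes; validating them against human annotators, as the paper does, is what licenses their use.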


How Can Driverless Cars Take Account of Human Selfishness?

#artificialintelligence

Psychologists have long found that people behave differently when they learn of their peers' actions. A new study by computer scientists found that when participants in an experiment about autonomous vehicles were told that their peers were more likely to sacrifice their own safety, programming their vehicle to hit a wall rather than hit pedestrians at risk, the percentage of participants willing to sacrifice their own safety increased by approximately two-thirds. As computer scientists train machines to act as people's agents in all sorts of situations, the study's authors note that the social component of decision-making is often overlooked. This could be of great consequence: the authors show that the trolley problem, long the go-to scenario for moral psychologists, is problematic because it fails to capture the complexity of how humans actually make decisions.


What does the future of artificial intelligence mean for humans? - ScienceBlog.com

#artificialintelligence

The first question many people ask about artificial intelligence (AI) is, "Will it be good or bad?" The answer is … yes. Canadian company BlueDot used AI technology to detect the novel coronavirus outbreak in Wuhan, China, just hours after the first cases were diagnosed. Compiling data from local news reports, social media accounts and government documents, the infectious disease data analytics firm warned of the emerging crisis a week before the World Health Organization made any official announcement. While predictive algorithms could help us stave off pandemics or other global threats as well as manage many of our day-to-day challenges, AI's ultimate impact is impossible to predict.


AI Isn't Good at Detecting Liars through Their Facial Expressions

#artificialintelligence

Technologies are increasingly being used to shape public policy, business, and people's lives. AI court judges are helping to decide criminals' sentences, and AI is being used to catch murder suspects and even shape your insurance policy. That's why the fact that computers aren't great at detecting lies should be a worry. Researchers from the USC Institute for Creative Technologies recently put AI's capability for lie detection to the test, running algorithms through basic tests used for truth detection, and the results left a lot to be desired: the AIs failed these tests.


When do Robots Look Too Creepy?

#artificialintelligence

An increasing number of robots are being created and designed to work side by side with humans, in human environments. That means some robots have to be structured like a person, because they have to walk and sit like one; some are even being designed to look human. But seeing an android, a robot that looks human, can make some people uneasy. The unsettling feeling that grows as robots begin to look more like human beings is a phenomenon called the "uncanny valley." Even researchers who work on robots are not immune to it.


Lessons Learned from Virtual Humans

AI Magazine

Over the past decade, we have been engaged in an extensive research effort to build virtual humans and applications that use them. Building a virtual human might be considered the quintessential AI problem, because it brings together many of the key features, such as autonomy, natural communication, and sophisticated reasoning and behavior, that distinguish AI systems. This article describes major virtual human systems we have built and important lessons we have learned along the way. Early on, we decided to focus on training human-oriented skills, such as leadership, negotiation, and cultural awareness. These skills are based on what is sometimes called tacit knowledge (Sternberg 2000), that is, knowledge that is not easily explicated or taught in a classroom setting but instead is best learned through experience.


Smiling during victory could hurt future chances of cooperation

#artificialintelligence

In a winning scenario, smiling can decrease your odds of success against the same opponent in subsequent matches, according to new research presented by the USC Institute for Creative Technologies and sponsored by the U.S. Army Research Laboratory. People who smiled during victory increased the odds of their opponent acting aggressively to steal a pot of money rather than share it in future gameplay, according to a paper presented in May at the International Conference on Autonomous Agents and Multiagent Systems by USC ICT research assistant Rens Hoegen, USC ICT research programmer Giota Stratou, and Jonathan Gratch, director of virtual humans research at USC ICT and a professor of computer science at the USC Viterbi School of Engineering. Conversely, researchers found that smiling during a loss tended to improve the odds of success in the game going forward. The study is in line with previous research published by senior author Gratch, whose main interest lies both in how people express these "tells" (unconscious actions that betray deception) and in using this data to create artificial intelligence that can discern, and even express, the same emotional cues as a person. "We think that emotion is the enemy of reason. But the truth is that emotion is our way of assigning value to things," said Gratch.