
Collaborating Authors

Oudah, Mayada


How AI Wins Friends and Influences People in Repeated Games With Cheap Talk

AAAI Conferences

Research has shown that a person's financial success is more dependent on the ability to deal with people than on professional knowledge. Sage advice, such as "if you can't say something nice, don't say anything at all" and principles articulated in Carnegie's classic "How to Win Friends and Influence People," offers trusted rules of thumb for how people can successfully deal with each other. However, alternative philosophies for dealing with people have also emerged. The success of an AI system is likewise contingent on its ability to win friends and influence people. In this paper, we study how AI systems should be designed to win friends and influence people in repeated games with cheap talk (RGCTs). We create several algorithms for playing RGCTs by combining existing behavioral strategies (what the AI does) with signaling strategies (what the AI says) derived from several competing philosophies. Via a user study, we evaluate these algorithms in four RGCTs. Our results suggest sufficient properties for AIs to win friends and influence people in RGCTs.
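
The decomposition described above can be pictured with a minimal sketch: an agent that pairs a behavioral strategy (what the AI does) with a signaling strategy (what the AI says) in a repeated game with cheap talk. The tit-for-tat behavior and the plan-announcing signaler below are illustrative placeholders, not the strategies or philosophies evaluated in the paper.

```python
# A minimal sketch of an RGCT agent: behavioral strategy + signaling strategy.
# The concrete strategies are illustrative assumptions, not the paper's algorithms.

COOPERATE, DEFECT = "C", "D"

class TitForTat:
    """Behavioral strategy (what the AI does): repeat the partner's last move."""
    def next_action(self, history):
        if not history:
            return COOPERATE
        _, partner_last = history[-1]
        return partner_last

class PlanAnnouncer:
    """Signaling strategy (what the AI says): announce the plan; protest after a betrayal."""
    def next_messages(self, history, planned_action):
        messages = []
        if history and history[-1][1] == DEFECT:
            messages.append("You betrayed me last round.")
        messages.append(f"I plan to play {planned_action} this round.")
        return messages

class RGCTAgent:
    """Pairs a behavioral strategy with a signaling strategy for one RGCT."""
    def __init__(self, behavior, signaler):
        self.behavior = behavior
        self.signaler = signaler
        self.history = []                      # list of (my_action, partner_action)

    def act(self):
        action = self.behavior.next_action(self.history)
        messages = self.signaler.next_messages(self.history, action)
        return action, messages

    def observe(self, my_action, partner_action):
        self.history.append((my_action, partner_action))

# Usage: one round of a repeated prisoner's-dilemma-style game with cheap talk.
agent = RGCTAgent(TitForTat(), PlanAnnouncer())
action, talk = agent.act()
agent.observe(action, DEFECT)                  # suppose the partner defected
```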


Cooperating with Machines

arXiv.org Artificial Intelligence

Since Alan Turing envisioned Artificial Intelligence (AI) [1], a major driving force behind technical progress has been competition with human cognition. Historical milestones have frequently been associated with computers matching or outperforming humans in difficult cognitive tasks (e.g. face recognition [2], personality classification [3], driving cars [4], or playing video games [5]), or defeating humans in strategic zero-sum encounters (e.g. Chess [6], Checkers [7], Jeopardy! [8], Poker [9], or Go [10]). In contrast, less attention has been given to developing autonomous machines that establish mutually cooperative relationships with people who may not share the machine's preferences. A main challenge has been that human cooperation does not require sheer computational power, but rather relies on intuition [11], cultural norms [12], emotions and signals [13, 14, 15, 16], and pre-evolved dispositions toward cooperation [17]; these common-sense mechanisms are difficult to encode in machines for arbitrary contexts. Here, we combine a state-of-the-art machine-learning algorithm with novel mechanisms for generating and acting on signals to produce a new learning algorithm that cooperates with people and other machines at levels that rival human cooperation in a variety of two-player repeated stochastic games. This is the first general-purpose algorithm that is capable, given a description of a previously unseen game environment, of learning to cooperate with people within short timescales in scenarios previously unanticipated by algorithm designers. This is achieved without complex opponent modeling or higher-order theories of mind, thus showing that flexible, fast, and general human-machine cooperation is computationally achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.
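
The abstract leaves the learner unspecified. As a hedged illustration of "learning over a small set of strategies, coupled with signals, without opponent modeling," the sketch below uses exponential-weights (Hedge) selection among expert strategies and announces each chosen plan as cheap talk; the experts, payoff matrix, and learning rate are assumptions made for this example, not the paper's actual algorithm.

```python
# Illustrative expert-selection learner with cheap talk (assumptions only).
import math, random

# Prisoner's-dilemma payoffs for (my_action, partner_action); illustrative values.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(history): return "C"
def always_defect(history):    return "D"
def tit_for_tat(history):      return history[-1][1] if history else "C"

EXPERTS = [always_cooperate, always_defect, tit_for_tat]

class HedgeWithTalk:
    """Selects among expert strategies and pairs each chosen plan with a signal."""
    def __init__(self, eta=0.2):
        self.eta = eta
        self.weights = [1.0] * len(EXPERTS)
        self.history = []                      # list of (my_action, partner_action)

    def choose(self):
        idx = random.choices(range(len(EXPERTS)), weights=self.weights)[0]
        action = EXPERTS[idx](self.history)
        signal = f"I intend to play {action}."  # cheap talk tied to the chosen plan
        return action, signal

    def update(self, my_action, partner_action):
        prior_history = list(self.history)
        self.history.append((my_action, partner_action))
        # Full-information Hedge update: each expert's counterfactual payoff this
        # round is computable from the known payoff matrix and the partner's move.
        for i, expert in enumerate(EXPERTS):
            counterfactual = PAYOFF[(expert(prior_history), partner_action)]
            self.weights[i] *= math.exp(self.eta * counterfactual / 5.0)

# Usage: play one round against a partner who cooperated.
learner = HedgeWithTalk()
action, talk = learner.choose()
learner.update(action, "C")
```

Choosing at the level of whole strategies, rather than individual actions, is one way such a learner can adapt quickly while keeping its behavior legible enough to explain through signals.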


Online Learning in Repeated Human-Robot Interactions

AAAI Conferences

Adaptation is a critical component of collaboration. Nevertheless, online learning is not yet used in most successful human-robot interactions, especially when the human's and the robot's goals are not fully aligned. There are at least two barriers to the successful application of online learning in human-robot interaction (HRI). First, typical machine-learning algorithms do not learn at time scales that support effective interactions with people, while algorithms that do learn at sufficiently fast time scales often produce myopic strategies that do not lead to good long-term collaborations. Second, random exploration, a core component of most online-learning algorithms, can be problematic for developing collaborative relationships with a human partner. We anticipate that a new genre of online-learning algorithms can overcome these two barriers when paired with (cheap-talk) communication. In this paper, we review our efforts in these two areas to produce a situation-independent learning system that quickly learns to collaborate with a human partner.
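
To see why the second barrier matters, a back-of-the-envelope calculation (with hypothetical numbers, not taken from the paper) shows how quickly per-action random exploration accumulates over a short interaction, where each exploratory move risks being read by the human partner as unreliability.

```python
# Hypothetical illustration: epsilon-greedy exploration over a short session.
rounds = 50
epsilon = 0.1
expected_random_moves = rounds * epsilon            # about 5 apparently arbitrary actions
prob_at_least_one = 1 - (1 - epsilon) ** rounds     # roughly 0.995

print(f"Expected exploratory moves in {rounds} rounds: {expected_random_moves:.1f}")
print(f"Probability of at least one exploratory move: {prob_at_least_one:.3f}")
```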