Bitan, Moshe (Bar-Ilan University, Israel) | Gal, Ya’akov (Ben-Gurion University, Israel) | Kraus, Sarit (Bar-Ilan University, Israel) | Dokow, Elad (Bar-Ilan University, Israel) | Azaria, Amos (Bar-Ilan University, Israel)
Despite committees and elections being widespread in the real world, the design of agents for operating in human-computer committees has received far less attention than the theoretical analysis of voting strategies. We address this gap by providing an agent design that outperforms other voters in groups comprising both people and computer agents. In our setting, participants vote by simultaneously submitting a ranking over a set of candidates, and the election system uses a social welfare rule to select a ranking that minimizes disagreements with participants’ votes. We ran an extensive study in which hundreds of people participated in repeated voting rounds with other people as well as computer agents that differed in how they employ strategic reasoning in their voting behavior. Our results show that over time, people learn to deviate from truthful voting strategies and use heuristics to guide their play, such as repeating their vote from the previous round. We show that a computer agent using a best-response voting strategy was able to outperform people in the game. Our study has implications for agent designers, highlighting the types of strategies that enable agents to succeed in committees comprising both human and computer participants. This is the first work to study the role of computer agents in voting settings involving both human and agent participants.
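The abstract does not name the social welfare rule, but "a ranking that minimizes disagreements with participants' votes" describes a Kemeny-style aggregation. A brute-force sketch of that idea (function names and example votes are ours, not the paper's):

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Count candidate pairs ranked in opposite order by the two rankings."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    cands = list(pos1)
    return sum(
        1
        for i in range(len(cands))
        for j in range(i + 1, len(cands))
        if (pos1[cands[i]] - pos1[cands[j]]) * (pos2[cands[i]] - pos2[cands[j]]) < 0
    )

def kemeny_ranking(votes):
    """Return the ranking minimizing total pairwise disagreement with all votes."""
    candidates = votes[0]
    return min(
        permutations(candidates),
        key=lambda r: sum(kendall_tau(list(r), v) for v in votes),
    )

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(kemeny_ranking(votes))  # -> ('a', 'b', 'c')
```

Enumerating all permutations is exponential in the number of candidates, which is workable only for the small candidate sets typical of such lab studies.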
Microsoft researchers have created an artificial intelligence-based system that learned how to get the maximum score on the addictive 1980s video game Ms. Pac-Man, using a divide-and-conquer method that could have broad implications for teaching AI agents to do complex tasks that augment human capabilities. The team from Maluuba, a Canadian deep learning startup acquired by Microsoft earlier this year, used a branch of AI called reinforcement learning to play the Atari 2600 version of Ms. Pac-Man perfectly. Using that method, the team achieved the maximum score possible of 999,990. Doina Precup, an associate professor of computer science at McGill University in Montreal, said that's a significant achievement among AI researchers, who have been using various video games to test their systems but have found Ms. Pac-Man among the most difficult to crack. But Precup said she was impressed not just with what the researchers achieved but with how they achieved it.
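The article does not detail the method, but one reading consistent with "divide-and-conquer" reinforcement learning is to train a separate value estimator per reward source (e.g. per pellet or ghost) and pick actions by aggregating their estimates. A toy, tabular sketch of that aggregation idea (all sizes and values invented; this is not Maluuba's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

N_COMPONENTS = 4   # one small learner per reward source (toy scale)
N_STATES = 10
N_ACTIONS = 4

# One Q-table per reward component; each would be trained only on its own
# reward signal (here filled with random values for illustration).
q_tables = rng.random((N_COMPONENTS, N_STATES, N_ACTIONS))

def aggregate_action(state):
    """Each component contributes its Q-values; act greedily on the sum."""
    combined = q_tables[:, state, :].sum(axis=0)
    return int(np.argmax(combined))
```

The appeal of the decomposition is that each component faces a much simpler learning problem than a single learner chasing the full game score.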
In this paper, we investigate the hypothesis that plan recognition can significantly improve the performance of a case-based reinforcement learner in an adversarial action selection task. Our environment is a simplification of an American football game. The performance task is to control the behavior of a quarterback in a pass play, where the goal is to maximize yardage gained. Plan recognition focuses on predicting the play of the defensive team. We modeled plan recognition as an unsupervised learning task, and conducted a lesion study. We found that plan recognition was accurate, and that it significantly improved performance. More generally, our studies show that plan recognition reduced the dimensionality of the state space, which allowed learning to be conducted more effectively. We describe the algorithms, explain the reasons for performance improvement, and also describe a further empirical comparison that highlights the utility of plan recognition for this task.
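The abstract frames plan recognition as an unsupervised learning task that shrinks the learner's state space. A minimal sketch of that framing, clustering observed defensive trajectories so the recognized "play" label replaces the raw observation as state (data, dimensions, and cluster count are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for observed defender positions early in a play:
# each row is a flattened trajectory (e.g. 4 defenders x (x, y)).
observations = rng.normal(size=(60, 8))

def kmeans(data, k, iters=20):
    """Minimal k-means; the cluster id serves as the recognized play."""
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for c in range(k):
            members = data[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(observations, k=3)
# The learner can now condition on a 3-valued play label instead of an
# 8-dimensional raw trajectory -- the dimensionality reduction the paper
# credits for more effective learning.
```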
Modeling player engagement is a key challenge in games. However, the gameplay signatures of engaged players can be highly context-sensitive, varying based on where the game is used or what population of players is using it. Traditionally, models of player engagement are investigated in a particular context, and it is unclear how effectively these models generalize to other settings and populations. In this work, we investigate a Bayesian hierarchical linear model for multi-task learning to devise a model of player engagement from a pair of datasets that were gathered in two complementary contexts: a Classroom Study with middle school students and a Laboratory Study with undergraduate students. Both groups of players used similar versions of Crystal Island, an educational interactive narrative game for science learning. Results indicate that the Bayesian hierarchical model outperforms both pooled and context-specific models in cross-validation measures of predicting player motivation from in-game behaviors, particularly for the smaller Classroom Study group. Further, we find that the posterior distributions of model parameters indicate that the coefficient for a measure of gameplay performance significantly differs between groups. Drawing upon their capacity to share information across groups, hierarchical Bayesian methods provide an effective approach for modeling player engagement with data from similar, but different, contexts.
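The paper fits a full Bayesian hierarchical linear model; the partial-pooling intuition behind it can be sketched with a crude shrinkage estimator, where each group's coefficient is pulled toward the across-group mean in proportion to how little data the group has (all data and the prior strength `tau` are invented; this is not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    x = x - x.mean()
    return float((x @ (y - y.mean())) / (x @ x))

# Toy data: a large "laboratory" group and a small "classroom" group whose
# true slopes differ (mirroring the two study contexts, values invented).
groups = {"laboratory": (200, 1.0), "classroom": (30, 0.4)}
slopes, sizes = {}, {}
for name, (n, true_slope) in groups.items():
    x = rng.normal(size=n)
    y = true_slope * x + rng.normal(scale=1.0, size=n)
    slopes[name] = ols_slope(x, y)
    sizes[name] = n

# Partial pooling: a precision-weighted compromise between each group's own
# estimate and the shared mean -- small groups (classroom) are shrunk more.
grand_mean = float(np.mean(list(slopes.values())))
tau = 50.0  # prior strength (invented)
pooled = {
    name: (sizes[name] * slopes[name] + tau * grand_mean) / (sizes[name] + tau)
    for name in slopes
}
```

This sharing of information across groups is what lets the hierarchical model outperform both the fully pooled and the context-specific baselines, especially for the smaller Classroom group.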
Ever since the launch of Siri in its fully integrated form on the iPhone 4s in 2011, digital assistants have become standard features on most modern smartphones. With growing competition from Microsoft, Google, and Amazon in the form of Cortana, Google Now, and Echo respectively, Apple continues providing updates to Siri in an attempt to find a semblance of functional advantage. As Google recently updated its digital assistant, Google Now, adding smart intonation and expression to its speech patterns to sound less robotic, Apple has followed with its own updates looking to target Siri toward specific audiences; in this case, sports fans.