Collaborating Authors

 kleiman-weiner


AI Social Media Users Are Not Always a Totally Dumb Idea

WIRED

Meta caused a stir last week when it let slip that it intends to populate its platform with a significant number of entirely artificial users in the not-too-distant future. "We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do," Connor Hayes, vice-president of product for generative AI at Meta, told the Financial Times. "They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform ... that's where we see all of this going." The fact that Meta seems happy to fill its platform with AI slop and accelerate the "enshittification" of the internet as we know it is concerning. Some people then noticed that Facebook was in fact already awash with strange AI-generated accounts, most of which stopped posting a while ago.


Modeling Communication to Coordinate Perspectives in Cooperation

Stacy, Stephanie, Li, Chenfei, Zhao, Minglu, Yun, Yiling, Zhao, Qingyi, Kleiman-Weiner, Max, Gao, Tao

arXiv.org Artificial Intelligence

Communication is highly overloaded. Despite this, even young children are good at leveraging context to understand ambiguous signals. We propose a computational account of overloaded signaling from a shared agency perspective, which we call the Imagined We for Communication. Under this framework, communication helps cooperators coordinate their perspectives, allowing them to act together to achieve shared goals. We assume agents are rational cooperators, which puts constraints on how signals can be sent and interpreted. We implement this model in a set of simulations demonstrating its success under increasing ambiguity as well as increasing layers of reasoning. Our model is capable of improving performance with deeper recursive reasoning; however, it outperforms comparison baselines at even the shallowest level, highlighting how shared knowledge and cooperative logic can do much of the heavy lifting in language.


Bot can beat humans in multiplayer hidden-role games

#artificialintelligence

MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret. Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon University developed the world's first bot that can beat professionals in multiplayer poker. DeepMind's AlphaGo made headlines in 2016 for besting a professional Go player. Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag.


Deep Tractable Probabilistic Models for Moral Responsibility

Hammond, Lewis, Belle, Vaishak

arXiv.org Artificial Intelligence

Moral responsibility is a major concern in automated decision-making, with applications ranging from self-driving cars to kidney exchanges. From the viewpoint of automated systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed tractably, given the split-second decision points faced by the system? Building on deep tractable probabilistic learning, we propose a learning regime for inducing models of such scenarios automatically from data and for reasoning tractably from them. We report on experiments that compare our system with human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.


The Limits of Morality in Strategic Games

Cao, Rui, Naumov, Pavel

arXiv.org Artificial Intelligence

A coalition is blameable for an outcome if the coalition had a strategy to prevent it. It has been previously suggested that the cost of prevention, or the cost of sacrifice, can be used to measure the degree of blameworthiness. The paper adopts this approach and proposes a modal logical system for reasoning about the degree of blameworthiness. The main technical result is a completeness theorem for the proposed system.