
Collaborating Authors: Touri, Behrouz


Multi-Agent Fact Checking

arXiv.org Artificial Intelligence

We formulate the problem of fake news detection using distributed fact-checkers (agents) with unknown reliability. The stream of news/statements is modeled as an independent and identically distributed binary source (representing true and false statements). Upon observing a news item, agent $i$ labels it as true or false; this label reflects the true validity of the statement with probability $1-\pi_i$. In other words, agent $i$ misclassifies each statement with error probability $\pi_i\in (0,1)$, where the parameter $\pi_i$ models the (un)trustworthiness of agent $i$. We present an algorithm to learn the unreliability parameters, resulting in a distributed fact-checking algorithm. Furthermore, we extensively analyze the discrete-time limit of our algorithm.
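The observation model in this abstract can be sketched in a few lines. The estimator below (disagreement with the majority vote) is a simple illustrative stand-in, not the paper's learning algorithm; the function names and the majority-vote heuristic are assumptions for illustration only:

```python
import random

def simulate(num_agents, num_statements, error_probs, seed=0):
    """Simulate the observation model: each statement is true/false
    uniformly at random; agent i flips the label with prob. pi_i."""
    rng = random.Random(seed)
    truths = [rng.random() < 0.5 for _ in range(num_statements)]
    labels = [[(t != (rng.random() < error_probs[i])) for t in truths]
              for i in range(num_agents)]
    return truths, labels

def estimate_errors(labels):
    """Crudely estimate each agent's error rate as its disagreement
    frequency with the majority vote over all agents."""
    num_agents = len(labels)
    num_statements = len(labels[0])
    estimates = []
    for i in range(num_agents):
        disagree = 0
        for t in range(num_statements):
            votes = sum(labels[j][t] for j in range(num_agents))
            majority = votes > num_agents / 2
            disagree += labels[i][t] != majority
        estimates.append(disagree / num_statements)
    return estimates
```

Even this crude heuristic separates reliable from unreliable agents when most agents are accurate, which conveys why the unknown $\pi_i$ are learnable from labels alone.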


On the coordination efficiency of strategic multi-agent robotic teams

arXiv.org Artificial Intelligence

We study the problem of achieving decentralized coordination by a group of strategic decision makers choosing whether to engage in a task in a stochastic setting. First, we define a class of symmetric utility games that encompasses a broad class of coordination games, including the popular framework known as \textit{global games}. With the goal of studying the extent to which agents engaging in a stochastic coordination game indeed coordinate, we propose a new probabilistic measure of coordination efficiency. Then, we provide a universal information-theoretic upper bound on the coordination efficiency as a function of the amount of noise in the observation channels. Finally, we revisit a large class of global games and illustrate that their Nash equilibrium policies may be less coordination-efficient than certainty-equivalent policies, despite providing better expected utility. This counterintuitive result establishes the existence of a nontrivial trade-off between coordination efficiency and expected utility in coordination games.
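A minimal Monte-Carlo sketch can convey how observation noise degrades coordination. The proxy below (probability that two threshold-policy agents choose the same action given noisy signals of a common state) is an assumption for illustration; the paper's measure and its information-theoretic bound are more general:

```python
import random

def coordination_efficiency(threshold, noise, trials=20000, seed=0):
    """Monte-Carlo estimate of a simple coordination-efficiency proxy:
    the probability that two agents applying the same threshold policy
    to noisy observations of a common state pick the same action."""
    rng = random.Random(seed)
    same = 0
    for _ in range(trials):
        theta = rng.gauss(0.0, 1.0)          # common underlying state
        x1 = theta + rng.gauss(0.0, noise)   # agent 1's private signal
        x2 = theta + rng.gauss(0.0, noise)   # agent 2's private signal
        a1 = x1 > threshold                  # engage iff signal exceeds threshold
        a2 = x2 > threshold
        same += a1 == a2
    return same / trials
```

Running this with increasing `noise` shows the monotone loss of coordination that the abstract's upper bound captures as a function of channel noise.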


A Signaling Game Approach to Databases Querying and Interaction

arXiv.org Artificial Intelligence

As most database users cannot precisely express their information needs, it is challenging for database management systems to understand them. We propose a novel formal framework for representing and understanding information needs in database querying and exploration. Our framework treats querying as a collaboration between the user and the database management system to establish a \textit{mutual language} for representing information needs. We formalize this collaboration as a signaling game, where each mutual language is an equilibrium of the game. A query interface is more effective if it establishes a less ambiguous mutual language faster. We discuss some equilibria, strategies, and the convergence properties of this game. In particular, we propose a reinforcement learning mechanism and analyze it within our framework. We prove that this adaptation mechanism for the query interface stochastically improves the effectiveness of answering queries and converges almost surely. We extend our results to the case where the user also modifies her strategy during the interaction.
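The reinforcement idea can be sketched with a simple weight-based update in the style of Roth-Erev reinforcement: the interface samples an interpretation for each query in proportion to accumulated weight and reinforces interpretations that earn positive user feedback. The class name, the flat reward scheme, and the two-query setup are assumptions for illustration, not the paper's exact mechanism:

```python
import random

class ReinforcementInterface:
    """Weight-based reinforcement sketch: keep a weight per
    (query, interpretation) pair, sample interpretations in
    proportion to weight, and add the user's reward to the
    weight of whichever interpretation was shown."""
    def __init__(self, queries, intents, seed=0):
        self.rng = random.Random(seed)
        self.weights = {q: {m: 1.0 for m in intents} for q in queries}

    def interpret(self, query):
        # Sample an interpretation with probability proportional to weight.
        opts = self.weights[query]
        r = self.rng.random() * sum(opts.values())
        for intent, w in opts.items():
            r -= w
            if r <= 0:
                return intent
        return intent  # guard against floating-point residue

    def reinforce(self, query, intent, reward):
        self.weights[query][intent] += reward
```

Repeated interaction with a user who rewards correct interpretations concentrates the weights on the user's intended meaning for each query, which is the "mutual language" emerging as an equilibrium of the signaling game.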