Antos, Dimitrios, Pfeffer, Avi

We present an algorithm that identifies the reasoning patterns of agents in a game by iteratively examining the graph structure of its Multi-Agent Influence Diagram (MAID) representation. If the decision of an agent participates in no reasoning patterns, then we can effectively ignore that decision for the purpose of calculating a Nash equilibrium for the game. In some cases, this can lead to exponential time savings in the process of equilibrium calculation. Moreover, our algorithm can be used to enumerate the reasoning patterns in a game, which can be useful for constructing more effective computerized agents that interact with humans.

Aziz, Haris, Chan, Hau, Lee, Barton E., Li, Bo, Walsh, Toby

We consider the facility location problem in the one-dimensional setting, where each facility can serve a limited number of agents, from the algorithmic and mechanism design perspectives. From the algorithmic perspective, we prove that the corresponding optimization problem, where the goal is to locate facilities to minimize either the total cost to all agents or the maximum cost of any agent, is NP-hard. However, we show that the problem is fixed-parameter tractable, and the optimal solution can be computed in polynomial time whenever the number of facilities is bounded, or when all facilities have identical capacities. We then consider the problem from a mechanism design perspective, where the agents are strategic and need not reveal their true locations. We show that several natural mechanisms studied in the uncapacitated setting lose either strategyproofness or their bound on the solution quality for the total or maximum cost objective. We then propose new mechanisms that are strategyproof and achieve approximation guarantees that almost match the lower bounds.
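As a small illustration of the uncapacitated single-facility special case that the mechanisms above are contrasted against (this is not the paper's capacitated setting or its new mechanisms, just a standard textbook fact): on a line, the median of the agent locations minimizes the total cost, while the midpoint of the two extreme locations minimizes the maximum cost. A minimal sketch, with illustrative data:

```python
# Sketch: 1-D facility location with one uncapacitated facility.
# All names and the sample locations below are illustrative.

def total_cost(facility, locations):
    """Sum of distances from every agent to the facility."""
    return sum(abs(x - facility) for x in locations)

def max_cost(facility, locations):
    """Largest distance any single agent must travel."""
    return max(abs(x - facility) for x in locations)

locations = [0.0, 0.2, 0.5, 0.9, 1.0]

median = sorted(locations)[len(locations) // 2]   # minimizes total cost
midpoint = (min(locations) + max(locations)) / 2  # minimizes maximum cost

# Sanity check: no agent's own location beats the median on total cost.
assert all(total_cost(median, locations) <= total_cost(c, locations)
           for c in locations)
```

The median rule is also strategyproof in this uncapacitated setting, which is why capacity constraints breaking such guarantees (as the abstract notes) is significant.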

This position paper discusses the problem of locally evaluating and comparing candidate schedules, in the context of a distributed scheduling task operating in unbounded environments in which each agent selfishly serves the desires and preferences of its own user. Distributed scheduling systems have used a variety of mechanisms to maximize the quality of the global solution. Techniques include negotiation frameworks based on market economies [Kis et al. 1996], game theoretic algorithms, and global or shared evaluation functions [Modi et al. 2005]. Our position is that these techniques do not adequately address the situation where self-interested cooperative agents maintain schedules on behalf of users operating in unbounded environments. Here, satisfaction is more important than optimality [Palen 1999], and personalized preferences are paramount to users [Berry et al. 2005a; Faulring and Myers 2005]. In our approach, global utility is secondary; the agent aims to maximize the utility of its user, but folds the positive utility of cooperating with others into that utility. Thus, we treat the utility of others as a single component of a personalized and context-dependent multi-criteria evaluation function, which the agent learns through interactions with its user.
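One way to picture the personalized multi-criteria evaluation function described above is as a weighted combination of per-criterion scores, in which the utility of other agents is folded in as a single component. A hypothetical sketch follows; the criteria names and weights are illustrative only (in the authors' approach, the function is learned through interactions with the user, not hand-set):

```python
# Hypothetical sketch of a personalized multi-criteria schedule evaluator.
# The agent scores a candidate schedule by a weighted sum of criteria;
# benefit to other agents is just one component among the user's own
# preferences. All names, weights, and scores here are illustrative.

def evaluate(schedule, weights, criteria):
    """Weighted sum of per-criterion scores for a candidate schedule."""
    return sum(weights[name] * fn(schedule) for name, fn in criteria.items())

# Example criteria: each maps a candidate schedule (a dict) to a score in [0, 1].
criteria = {
    "user_preference": lambda s: s["pref_satisfaction"],
    "cooperation":     lambda s: s["others_utility"],  # others' utility, folded in
}

# Personalized, context-dependent weights (learned, in the authors' setting).
weights = {"user_preference": 0.8, "cooperation": 0.2}

candidate = {"pref_satisfaction": 0.9, "others_utility": 0.5}
score = evaluate(candidate, weights, criteria)  # 0.8*0.9 + 0.2*0.5 = 0.82
```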

In this paper, we consider a class of finite-sum convex optimization problems defined over a distributed multiagent network with $m$ agents connected to a central server. In particular, the objective function consists of the average of $m$ ($\ge 1$) smooth components associated with each network agent together with a strongly convex term. Our major contribution is to develop a new randomized incremental gradient algorithm, namely the random gradient extrapolation method (RGEM), which does not require any exact gradient evaluation even for the initial point, but can achieve the optimal ${\cal O}(\log(1/\epsilon))$ complexity bound in terms of the total number of gradient evaluations of component functions to solve the finite-sum problems. Furthermore, we demonstrate that for stochastic finite-sum optimization problems, RGEM maintains the optimal ${\cal O}(1/\epsilon)$ complexity (up to a certain logarithmic factor) in terms of the number of stochastic gradient computations, but attains an ${\cal O}(\log(1/\epsilon))$ complexity in terms of communication rounds (each round involves only one agent). It is worth noting that the former bound is independent of the number of agents $m$, while the latter depends only linearly on $m$, or even on $\sqrt m$ for ill-conditioned problems. To the best of our knowledge, this is the first time that these complexity bounds have been obtained for distributed and stochastic optimization problems. Moreover, our algorithm is developed based on a novel dual perspective of Nesterov's accelerated gradient method.
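To make the finite-sum structure concrete, here is a plain randomized incremental gradient baseline on a toy instance of the problem class. This is explicitly not the paper's RGEM (it has no gradient extrapolation, dual updates, or acceleration); it only shows the defining feature shared by such methods, namely that each iteration touches a single randomly chosen component. All data and step sizes are illustrative:

```python
import random

# Plain randomized incremental gradient descent on a toy finite-sum problem:
#   min_x (1/m) * sum_i f_i(x) + (mu/2) * x^2,  with f_i(x) = (x - a_i)^2 / 2.
# NOT the paper's RGEM; each step queries the gradient of one component only.

random.seed(0)
a = [1.0, 2.0, 3.0, 4.0]   # data defining the m components f_i
m, mu = len(a), 0.1        # number of agents, strong convexity parameter
x, step = 0.0, 0.01

for _ in range(5000):
    i = random.randrange(m)        # pick one agent/component at random
    grad = (x - a[i]) + mu * x     # gradient of f_i plus regularizer
    x -= step * grad

# Closed-form minimizer of the full objective: x* = mean(a) / (1 + mu).
x_star = (sum(a) / m) / (1 + mu)
# With a constant step size, the iterate hovers in a noise ball around x*.
```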

Ramdas, Aaditya (Carnegie Mellon University)

My research goal involves simultaneously addressing statistical and computational tradeoffs encountered in modern data analysis and high-dimensional machine learning (e.g., hypothesis testing, regression, classification). My future interests include incorporating additional constraints like privacy or communication, and settings involving hidden utilities of multiple cooperative agents or competitive adversaries.