Much of the economic value of electronic commerce comes from the automation of interactions between businesses and individuals. Game theory offers a useful set of tools for designers of electronic-commerce applications in the analysis and engineering of automated agents and communication protocols. The central theoretical concept in game theory is the Nash equilibrium. In this article, I show how automated negotiation can enlarge the set of outcomes supported by a Nash equilibrium.
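To make the central concept concrete, here is a minimal sketch of checking a bimatrix game for pure-strategy Nash equilibria by enumerating mutual best responses; the function name and the prisoner's-dilemma payoffs are illustrative assumptions, not part of the article.

```python
from itertools import product

def pure_nash_equilibria(payoffs_a, payoffs_b):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game.

    payoffs_a[i][j] / payoffs_b[i][j] are the row and column players'
    payoffs when row plays i and column plays j.
    """
    rows = range(len(payoffs_a))
    cols = range(len(payoffs_a[0]))
    equilibria = []
    for i, j in product(rows, cols):
        # i must be a best response to j for the row player...
        row_best = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in rows)
        # ...and j a best response to i for the column player.
        col_best = all(payoffs_b[i][j] >= payoffs_b[i][k] for k in cols)
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma with strategies (0=Cooperate, 1=Defect):
# defection dominates, so (1, 1) is the unique pure equilibrium.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(pure_nash_equilibria(A, B))  # → [(1, 1)]
```

The mutually defecting outcome illustrates why enlarging the set of equilibrium-supported outcomes, here via automated negotiation, is valuable: the unique equilibrium need not be the mutually preferred outcome.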
We describe how entangled quantum states can aid coordination, cooperation, and resource allocation in multi-agent systems. These protocols provide alternatives to conventional methods, with different tradeoffs between capabilities and information privacy. We also present results of human-subject experiments with simulated versions of some of these methods, showing that people can learn to use entangled states effectively without training in quantum mechanics. These quantum protocols are therefore suitable for mixed systems consisting of human and software agents. The techniques are beneficial even with a few quantum bits and operations, making their physical implementation much easier than quantum applications to hard computational problems such as factoring or search.
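A classical simulation in the spirit of the simulated protocols used in the experiments: two agents share a maximally entangled pair and each measures their half, obtaining perfectly anti-correlated random bits without communicating, which resolves a two-resource anti-coordination problem. The function names and round counts are our illustrative assumptions; this simulation captures only the outcome correlations, not the privacy properties of a genuine quantum implementation.

```python
import random

def measure_shared_pair():
    """Classically simulate both agents measuring a shared entangled
    pair in the same basis: each outcome is uniformly random, but the
    two outcomes are perfectly anti-correlated."""
    a = random.randrange(2)
    return a, 1 - a

def allocate(n_rounds=1000):
    """Two agents each claim one of two resources per round; a
    conflict occurs when both claim the same one. The shared pair
    never produces a conflict, while independent coin flips
    conflict in roughly half the rounds."""
    conflicts_entangled = conflicts_classical = 0
    for _ in range(n_rounds):
        a, b = measure_shared_pair()
        if a == b:
            conflicts_entangled += 1
        if random.randrange(2) == random.randrange(2):
            conflicts_classical += 1
    return conflicts_entangled, conflicts_classical
```

Note that the agents need no quantum-mechanics background to use such a protocol: each simply reads off a bit and acts on it, consistent with the human-subject results above.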
We develop a model for analyzing complex games with repeated interactions, for which a full game-theoretic analysis is intractable. Our approach treats exogenously specified heuristic strategies, rather than atomic actions, as primitive, and computes a heuristic-payoff table specifying the expected payoffs over the joint heuristic-strategy space. We analyze two games based on (i) automated dynamic pricing and (ii) the continuous double auction. For each game we compute Nash equilibria of previously published heuristic strategies. To determine the most plausible equilibria, we study the replicator dynamics of a large population playing the strategies. To account for errors in payoff estimation or improvements in the strategies, we also analyze the dynamics and equilibria under perturbed payoffs.
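The population analysis can be sketched as follows: given a heuristic-payoff matrix, iterate the standard replicator equation, in which a strategy's share grows in proportion to its fitness advantage over the population mean. The two-heuristic payoff matrix below is a hypothetical Hawk-Dove table (resource value 2, fight cost 4), not one of the published strategy tables from the paper; its mixed equilibrium has a Hawk share of 0.5.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator dynamics
    dx_i/dt = x_i * (f_i - f_bar), with fitness f = payoff @ x
    and mean fitness f_bar = x . f."""
    f = payoff @ x
    x = x + dt * x * (f - x @ f)
    return x / x.sum()  # renormalize against numerical drift

# Hypothetical heuristic-payoff table in Hawk-Dove form.
P = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])

x = np.array([0.2, 0.8])  # initial population mixture
for _ in range(5000):
    x = replicator_step(x, P)
# x converges to the mixed equilibrium [0.5, 0.5]
```

Perturbed-payoff analysis then amounts to rerunning the same dynamics with noise added to `P` and checking which equilibria persist.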
Providing agents with strategies that will be robust against deviations by coalitions is central to the design of multi-agent systems. However, such strategies, captured by the notion of strong equilibrium, rarely exist. This paper suggests the use of mediators in order to enrich the set of situations where we can obtain stability against deviations by coalitions. A mediator is a reliable entity that can ask the agents for the right to play on their behalf, and is guaranteed to behave in a prespecified way based on messages received from the agents. However, a mediator cannot enforce behavior; that is, agents can play in the game directly without the mediator's help. We prove some general results about mediators, and concentrate on the notion of strong mediated equilibrium; we show that desired behaviors, which are stable against deviations by coalitions, can be obtained using mediators in a rich class of settings.
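A minimal sketch of the mediator idea, in a one-shot prisoner's dilemma with standard illustrative payoffs (the function name and payoff values are our assumptions): the mediator cooperates on behalf of every agent who opts in if all agents opt in, and defects on their behalf otherwise, which makes opting in stable even against joint deviations.

```python
# Prisoner's-dilemma payoffs (row, column); C=cooperate, D=defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 4),
          ('D', 'C'): (4, 0), ('D', 'D'): (1, 1)}

def mediated_game(opt_in_1, opt_in_2, direct_1=None, direct_2=None):
    """A simple mediator: play C for every opted-in agent if all
    agents opt in, otherwise play D for every opted-in agent
    (punishing unilateral deviators). Agents who stay out choose
    their own action direct_i; the mediator cannot force them in."""
    both = opt_in_1 and opt_in_2
    a1 = ('C' if both else 'D') if opt_in_1 else direct_1
    a2 = ('C' if both else 'D') if opt_in_2 else direct_2
    return PAYOFF[(a1, a2)]

print(mediated_game(True, True))                 # → (3, 3)
print(mediated_game(False, True, direct_1='D'))  # → (1, 1)
```

Opting in yields 3 for each agent; a unilateral deviator faces the mediator's defection and gets at most 1, and even the coalition of both agents cannot jointly deviate to an outcome strictly better than (3, 3). This is the flavor of stability that strong mediated equilibrium formalizes.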
Multiagent learning is an important tool for long-lasting human-machine systems (HMS). Most multiagent learning algorithms to date have focused on learning a best response to the strategies of the other agents in the system. While such an approach is acceptable in some domains, it fails in others, such as when humans and machines interact in social dilemmas, for example those arising when human attention is a scarce resource shared by multiple agents. In this paper, we discuss and show (through a user study) how multiagent learning algorithms must be aware of reputational equilibria in order to establish neglect-tolerant interactions.