Inertial Coordination Games
Andrew Koh, Ricky Li, Kei Uzui
arXiv.org Artificial Intelligence
We analyze inertial coordination games: dynamic coordination games with an endogenously changing state that depends on (i) a persistent fundamental that players privately learn about, and (ii) past play. We give a tight characterization of how the speed of learning shapes equilibrium dynamics: the risk-dominant action is selected in the limit if and only if learning is slow, in the sense that posterior precisions grow sub-quadratically. This generalizes results from static global games and endows them with an alternative learning foundation. Conversely, when learning is fast, equilibrium dynamics exhibit persistence and limit play is shaped by initial play. Whenever the risk-dominant equilibrium is selected, the path of play undergoes a sudden transition when signals are precise, and a gradual transition when signals are noisy.
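The sub-quadratic-precision condition can be made concrete with a standard Gaussian learning benchmark. This sketch is an illustration under assumed primitives, not the paper's model: with i.i.d. Gaussian signals of fixed precision, posterior precision grows linearly in time (sub-quadratic, hence "slow" learning), whereas if per-period signal precision itself grows linearly, cumulative posterior precision grows quadratically ("fast" learning).

```python
# Illustrative sketch (assumption: textbook Gaussian updating, not the paper's setup).
# With i.i.d. signals x_s ~ N(theta, 1/tau_eps), the posterior precision after t
# signals is tau_t = tau_0 + t * tau_eps: linear in t, hence sub-quadratic.
# If the s-th signal instead has precision c*s, cumulative precision is
# tau_t = tau_0 + c * t*(t+1)/2, which grows quadratically.

def posterior_precision_iid(tau0, tau_eps, t):
    """Posterior precision after t i.i.d. Gaussian signals of precision tau_eps."""
    return tau0 + t * tau_eps

def posterior_precision_growing(tau0, c, t):
    """Posterior precision when the s-th signal has precision c*s (s = 1..t)."""
    return tau0 + c * t * (t + 1) / 2  # closed form of sum_{s=1}^{t} c*s

t = 1000
slow = posterior_precision_iid(1.0, 1.0, t)      # grows like t: sub-quadratic
fast = posterior_precision_growing(1.0, 1.0, t)  # grows like t^2 / 2: quadratic

print(slow / t**2)  # vanishes as t grows: the "slow learning" regime
print(fast / t**2)  # stays bounded away from 0: the "fast learning" regime
```

The ratio `tau_t / t**2` is one simple diagnostic for the sub-quadratic condition: it tends to zero exactly in the slow-learning regime where, per the abstract, the risk-dominant action is selected in the limit.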
Sep-12-2024
- Country:
- Europe > United Kingdom
- England > Cambridgeshire > Cambridge (0.04)
- North America > United States
- Massachusetts > Middlesex County > Cambridge (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Banking & Finance (0.46)