Computational Intractability of Strategizing against Online Learners
Angelos Assos, Yuval Dagan, Nived Rajaraman
arXiv.org Artificial Intelligence
Online learning algorithms are widely used in strategic multi-agent settings, including repeated auctions, contract design, and pricing competitions, where agents adapt their strategies over time. A key question in such environments is how an optimizing agent can best respond to a learning agent to improve its own long-term outcomes. While prior work has developed efficient algorithms for the optimizer in special cases, such as structured auction settings or contract design, no general efficient algorithm is known. In this paper, we establish a strong computational hardness result: unless $\mathsf{P} = \mathsf{NP}$, no polynomial-time optimizer can compute a near-optimal strategy against a learner using a standard no-regret algorithm, specifically Multiplicative Weights Update (MWU). Our result proves an $\Omega(T)$ hardness bound, significantly strengthening previous work that showed only an additive $\Theta(1)$ impossibility result. Furthermore, while the prior hardness result concerned learners using fictitious play, an algorithm that is not no-regret, we prove intractability for a widely used no-regret learning algorithm. This establishes a fundamental computational barrier to finding optimal strategies in general game-theoretic settings.
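For readers unfamiliar with the learner's algorithm named in the abstract, Multiplicative Weights Update maintains a distribution over actions and exponentially downweights actions that incur loss. The following is a minimal illustrative sketch, not the paper's code; the learning rate `eta` and the loss setup are assumed for the example:

```python
import math

def mwu_update(weights, losses, eta=0.1):
    """One MWU step: scale each action's weight down exponentially
    in its observed loss, then renormalize to a distribution."""
    new_w = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(new_w)
    return [w / total for w in new_w]

def run_mwu(loss_matrix, eta=0.1):
    """Run MWU over T rounds; loss_matrix[t][i] is action i's loss
    in round t. Returns the final distribution over actions."""
    n = len(loss_matrix[0])
    weights = [1.0 / n] * n  # start uniform
    for losses in loss_matrix:
        weights = mwu_update(weights, losses, eta)
    return weights

# Action 0 consistently incurs lower loss, so MWU shifts nearly
# all probability mass onto it over 50 rounds.
losses = [[0.0, 1.0]] * 50
dist = run_mwu(losses, eta=0.2)
```

In the paper's setting, the optimizer plays against a learner running exactly this kind of update over $T$ rounds, and the hardness result says that choosing the optimizer's moves near-optimally against such a learner is computationally intractable in general.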
Mar-6-2025