Learning to Negotiate via Voluntary Commitment
Zhu, Shuhui, Wang, Baoxiang, Subramanian, Sriram Ganapathi, Poupart, Pascal
arXiv.org Artificial Intelligence
Partial alignment and conflict among autonomous agents give rise to mixed-motive scenarios in many real-world applications. However, agents may fail to cooperate in practice even when cooperation yields a better outcome. One well-known reason for this failure is non-credible commitments. To facilitate commitments among agents for better cooperation, we define Markov Commitment Games (MCGs), a variant of commitment games in which agents can voluntarily commit to their proposed future plans. Based on MCGs, we propose a learnable commitment protocol via policy gradients. We further propose incentive-compatible learning to accelerate convergence to equilibria with better social welfare. Experimental results on challenging mixed-motive tasks demonstrate faster empirical convergence and higher returns for our method compared with its counterparts. Our code is available at https://github.com/shuhui-zhu/DCL.
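To make the commitment idea concrete, the following is a minimal hypothetical sketch of one round of a commitment-style interaction: each agent proposes an action, then voluntarily decides whether to commit; committed agents are bound to their proposals, while uncommitted agents act via a fallback policy. The payoff matrix, function names, and round structure are illustrative assumptions, not the paper's actual MCG definition or learning algorithm.

```python
# Hypothetical sketch of one round of a voluntary-commitment interaction.
# The payoff table below is a standard prisoner's-dilemma stage game used
# purely as an example of a mixed-motive setting; it is not from the paper.

ACTIONS = ["cooperate", "defect"]

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 4),
    ("defect", "cooperate"): (4, 0),
    ("defect", "defect"): (1, 1),
}

def play_round(proposals, commits, fallback_policy):
    """Resolve one round: agents that committed follow their proposals;
    agents that declined to commit choose via their fallback policy."""
    actions = tuple(
        proposal if committed else fallback_policy()
        for proposal, committed in zip(proposals, commits)
    )
    return actions, PAYOFFS[actions]

# If both agents propose cooperation and both commit, the proposals bind
# and cooperation is realized even though each would defect unilaterally.
actions, rewards = play_round(
    proposals=["cooperate", "cooperate"],
    commits=[True, True],
    fallback_policy=lambda: "defect",
)
print(actions, rewards)  # ('cooperate', 'cooperate') (3, 3)
```

Note how commitment changes the outcome: if either agent withholds commitment, its fallback policy (here, always defect) takes over, illustrating why non-credible commitments undermine cooperation.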
Mar-19-2025