Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social Dilemmas
Hao Guo, Chunjiang Mu, Yang Chen, Chen Shen, Shuyue Hu, Zhen Wang
arXiv.org Artificial Intelligence, Feb-27-2024
Social dilemmas (SDs, e.g., the prisoner's dilemma), spanning domains such as environmental pollution, public health crises, and resource management, present a fundamental conflict between personal interests and the common good [Nowak, 2006]. While cooperation benefits the collective, individuals are tempted to exploit or free-ride on others' efforts, potentially leading to a tragedy of the commons. Historically rooted in the study of biological altruism [Smith, 1982], traditional research on cooperation in SDs has unveiled the pivotal roles of reciprocity and social preferences in fostering cooperative behavior in human societies [Fehr et al., 2002; Rand and Nowak, 2013]. Recently, propelled by advances in artificial intelligence (AI), this field has been undergoing a profound transformation: as AI agents increasingly represent and engage with humans, our understanding of how cooperation emerges, evolves, and is sustained in SDs is being significantly reshaped. This is particularly evident in two lines of research: multi-agent cooperation, where AI agents interact with each other in SDs, and human-agent cooperation, which examines the intricacies of human interactions with AI agents in SDs.
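To make the tension concrete, consider the payoff structure of the one-shot prisoner's dilemma mentioned above. The Python sketch below uses illustrative payoff values (T=5, R=3, P=1, S=0, chosen only to satisfy the canonical ordering T > R > P > S; they are not taken from the survey) to show that defection is each player's best response to either action, even though mutual cooperation yields the highest collective payoff.

```python
# Minimal sketch of the one-shot prisoner's dilemma payoff structure.
# The numeric values are illustrative, chosen only to satisfy T > R > P > S.

from itertools import product

# PAYOFF[(my_action, other_action)] -> my payoff
PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff when exploited
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_response(other_action: str) -> str:
    """Return the action that maximizes my payoff against a fixed opponent action."""
    return max(("C", "D"), key=lambda a: PAYOFF[(a, other_action)])

if __name__ == "__main__":
    # Defection dominates for each individual...
    for other in ("C", "D"):
        print(f"Best response to {other}: {best_response(other)}")
    # ...yet mutual defection yields less collective payoff than mutual cooperation.
    for profile in product("CD", repeat=2):
        total = PAYOFF[profile] + PAYOFF[profile[::-1]]
        print(f"Profile {profile}: collective payoff = {total}")
```

Running the sketch prints D as the best response to both C and D, while the collective payoff peaks at 3 + 3 under mutual cooperation and drops to 1 + 1 under mutual defection, which is precisely the free-riding tension the survey studies.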