Harnessing the power of LLMs for normative reasoning in MASs
Savarimuthu, Bastin Tony Roy, Ranathunga, Surangika, Cranefield, Stephen
arXiv.org Artificial Intelligence
Agents, whether human or computational, do not exist in isolation and often need to collaborate or coordinate with others to achieve their goals. In human society, social mechanisms such as norms ensure efficient functioning, and such mechanisms have been adopted by researchers in multi-agent systems (MAS) to create socially aware agents. However, traditional techniques have limitations: they operate in restricted environments and often rely on brittle symbolic reasoning. The advent of Large Language Models (LLMs) offers a promising solution, providing a rich and expressive vocabulary for norms and enabling norm-capable agents that can perform a range of tasks such as norm discovery, normative reasoning and decision-making. This paper examines the potential of LLM-based agents to acquire normative capabilities, drawing on recent Natural Language Processing (NLP) and LLM research. We present our vision for creating normative LLM agents. In particular, we discuss how recently proposed "LLM agent" approaches can be extended to implement such normative LLM agents. We also highlight challenges in this emerging field. This paper thus aims to foster collaboration between MAS, NLP and LLM researchers in order to advance the field of normative agents.
Mar-25-2024