Alignment of Language Agents
Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, Geoffrey Irving
arXiv.org Artificial Intelligence
For artificial intelligence to be beneficial to humans, the behaviour of AI agents needs to be aligned with what humans want. In this paper, we discuss behavioural issues for language agents that arise from accidental misspecification by the system designer. We highlight ways in which misspecification can occur, discuss the behavioural issues that could result, including deceptive or manipulative language, and review approaches for avoiding these issues.
Mar-26-2021