Relational Norms for Human-AI Cooperation
Earp, Brian D., Mann, Sebastian Porsdam, Aboy, Mateo, Awad, Edmond, Betzler, Monika, Botes, Marietjie, Calcott, Rachel, Caraccio, Mina, Chater, Nick, Coeckelbergh, Mark, Constantinescu, Mihaela, Dabbagh, Hossein, Devlin, Kate, Ding, Xiaojun, Dranseika, Vilius, Everett, Jim A. C., Fan, Ruiping, Feroz, Faisal, Francis, Kathryn B., Friedman, Cindy, Friedrich, Orsolya, Gabriel, Iason, Hannikainen, Ivar, Hellmann, Julie, Jahrome, Arasj Khodadade, Janardhanan, Niranjan S., Jurcys, Paul, Kappes, Andreas, Khan, Maryam Ali, Kraft-Todd, Gordon, Dale, Maximilian Kroner, Laham, Simon M., Lange, Benjamin, Leuenberger, Muriel, Lewis, Jonathan, Liu, Peng, Lyreskog, David M., Maas, Matthijs, McMillan, John, Mihailov, Emilian, Minssen, Timo, Monrad, Joshua Teperowski, Muyskens, Kathryn, Myers, Simon, Nyholm, Sven, Owen, Alexa M., Puzio, Anna, Register, Christopher, Reinecke, Madeline G., Safron, Adam, Shevlin, Henry, Shimizu, Hayate, Treit, Peter V., Voinea, Cristina, Yan, Karen, Zahiu, Anda, Zhang, Renwen, Zohny, Hazem, Sinnott-Armstrong, Walter, Singh, Ilina, Savulescu, Julian, Clark, Margaret S.
How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions - such as assistant, mental health provider, tutor, or romantic partner - it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI's capacity to fulfill relationship-specific functions and to adhere to the corresponding norms. This analysis, a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI system design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and conducive to human well-being.
arXiv.org Artificial Intelligence
Feb-17-2025
- Country:
  - Asia (1.00)
  - Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
  - North America > United States (1.00)
- Genre:
  - Research Report > Experimental Study (1.00)
- Industry:
  - Education > Educational Setting (1.00)
  - Government
    - Military (1.00)
    - Regional Government (1.00)
  - Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
  - Information Technology > Security & Privacy (1.00)
  - Law > Criminal Law (0.92)
  - Media (1.00)
- Technology:
  - Information Technology > Artificial Intelligence
    - Cognitive Science (1.00)
    - Issues > Social & Ethical Issues (1.00)
    - Machine Learning > Neural Networks > Deep Learning (1.00)
    - Natural Language
      - Chatbot (1.00)
      - Large Language Model (1.00)
    - Representation & Reasoning > Agents (1.00)
    - Personal Assistant Systems (1.00)
    - Robots (1.00)