An Anthropologist LLM to Elicit Users' Moral Preferences through Role-Play

Gianluca De Ninno, Paola Inverardi, Francesca Belotti

arXiv.org Artificial Intelligence 

GPT can predict users' future decisions by analyzing narrative tables, and accuracy improves further when the model is guided by an anthropological framework. Moreover, by integrating contextual knowledge and an interpretative lens into LLMs, the approach enhances AI explainability while preserving a human-centric perspective in requirements elicitation. Asking GPT to generate a user profile makes it possible to directly assess what the model has understood about the user and how it represents them. Furthermore, since the model is not only tasked with predicting users' responses in new scenarios but also with justifying its choices, one can, on the one hand, understand the rationale behind the model's output and, on the other, identify potential misalignments between the model's prediction and the user's actual values and preferences. This enables targeted interventions to improve alignment between the LLM and the user profile, creating a continuous feedback loop involving both the user and the LLM that interprets data through an anthropological lens. The process strengthens the model's interpretability, ethical alignment, and predictive adaptability, thereby making AI systems more transparent and attuned to real-world human values. Ultimately, the approach lays the groundwork for AI assistants capable of recognizing and adapting to individuals' soft ethics and ethical decision-making processes.

B. Threats to Validity

We discuss threats to validity following the qualitative research framework proposed in [72], namely credibility, transferability, dependability, and confirmability.
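As an illustration of the profile-generation and prediction-with-justification loop summarized above, the following is a minimal sketch, not the authors' implementation. It assumes the OpenAI Python client with an API key in the OPENAI_API_KEY environment variable; the model name, prompt wording, and function names are illustrative assumptions rather than the paper's actual prompts.

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model; the paper refers generically to GPT

# Assumed framing: instruct the model to act as an anthropologist interpreting
# the user's past choices through values, norms, and context.
ANTHRO_FRAME = (
    "You are an anthropologist. Interpret the user's past decisions through an "
    "anthropological lens (values, norms, context), not surface patterns."
)

def generate_profile(narrative_table: str) -> str:
    """Ask the model to state what it has understood about the user."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": ANTHRO_FRAME},
            {"role": "user", "content": (
                "Past decisions:\n" + narrative_table +
                "\n\nWrite a concise moral-preference profile of this user."
            )},
        ],
    )
    return resp.choices[0].message.content

def predict_with_justification(profile: str, scenario: str) -> str:
    """Predict the user's choice in a new scenario and justify it, so that
    misalignments with the user's actual values can be spotted and fed back."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": ANTHRO_FRAME},
            {"role": "user", "content": (
                "User profile:\n" + profile +
                "\n\nNew scenario:\n" + scenario +
                "\n\nPredict the user's decision and explain the reasoning behind it."
            )},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical narrative-table excerpt and scenario, for illustration only.
    profile = generate_profile("Scenario A: chose privacy over convenience. ...")
    print(profile)
    print(predict_with_justification(
        profile, "A new app asks for location data in exchange for discounts."))

In a feedback loop of this kind, the user would review the justification, flag mispredictions, and the corrected information would be appended to the narrative input before the profile is regenerated.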
