The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach

Coleman, Chad, Neuman, W. Russell, Dasdan, Ali, Ali, Safinah, Shah, Manan

arXiv.org Artificial Intelligence 

As large language models (LLMs) are increasingly deployed in consequential decision-making contexts, systematically assessing their ethical reasoning capabilities becomes a critical imperative. This paper introduces the Priorities in Reasoning and Intrinsic Moral Evaluation (PRIME) framework -- a comprehensive methodology for analyzing moral priorities across foundational ethical dimensions including consequentialist-deontological reasoning, moral foundations theory, and Kohlberg's developmental stages. We apply this framework to six leading LLMs through a dual-protocol approach combining direct questioning and response analysis to established ethical dilemmas. Our analysis reveals striking patterns of convergence: all evaluated models demonstrate strong prioritization of care/harm and fairness/cheating foundations while consistently underweighting authority, loyalty, and sanctity dimensions. Through detailed examination of confidence metrics, response reluctance patterns, and reasoning consistency, we establish that contemporary LLMs (1) produce decisive ethical judgments, (2) demonstrate notable cross-model alignment in moral decision-making, and (3) generally correspond with empirically established human moral preferences. This research contributes a scalable, extensible methodology for ethical benchmarking while highlighting both the promising capabilities and systematic limitations in current AI moral reasoning architectures -- insights critical for responsible development as these systems assume increasingly significant societal roles.

The rapid evolution of generative large language models (LLMs) has brought the alignment issue to the forefront of AI ethics discussions -- specifically, whether these models are appropriately aligned with human values (Bostrom, 2014; Tegmark, 2017; Russell, 2019; Kosinski, 2024).
As these powerful models are increasingly integrated into decision-making processes across various societal domains (Salazar & Kunc, 2025), understanding whether and how their operational logic aligns with fundamental human values becomes not just an academic question but a critical societal imperative. In this paper we present an analytical framework and findings addressing the first two questions, and a preliminary exploratory analysis of the third. We make the case that the answers to these questions are: yes, yes, and yes. There are caveats and exceptions, of course, but the broad pattern, we believe, is clear. Our methodology permits us to explore not just what choices these models make, but the chain-of-thought reasoning that leads to those decisions.