An Affective-Taxis Hypothesis for Alignment and Interpretability
Eli Sennesh, Maxwell Ramstead
arXiv.org Artificial Intelligence
AI alignment is a field of research that aims to develop methods ensuring that agents always behave in a manner consistent with the goals and values of their human operators, no matter their level of capability. This paper proposes an affectivist approach to the alignment problem, re-framing the concepts of goals and values in terms of affective taxis, and explaining the emergence of affective valence by appealing to recent work in evolutionary-developmental and computational neuroscience. We review the state of the art and, building on this work, propose a computational model of affect based on taxis navigation. We discuss evidence from a tractable model organism that our model reflects aspects of biological taxis navigation, and we conclude with a discussion of the role of affective taxis in AI alignment.
May-26-2025