A Statistical Case Against Empirical Human-AI Alignment

Julian Rodemann, Esteban Garces Arias, Christoph Luther, Christoph Jansen, Thomas Augustin

arXiv.org Artificial Intelligence

Empirical human-AI alignment aims to make AI systems act in line with observed human behavior. While noble in its goals, we argue that empirical alignment can inadvertently introduce statistical biases that warrant caution. This position paper thus advocates against naive empirical alignment, offering prescriptive alignment and a posteriori empirical alignment as alternatives. We substantiate our principled argument with tangible examples, such as human-centric decoding of language models.