A Statistical Case Against Empirical Human-AI Alignment
Julian Rodemann, Esteban Garces Arias, Christoph Luther, Christoph Jansen, Thomas Augustin
arXiv.org Artificial Intelligence
Empirical human-AI alignment aims to make AI systems act in line with observed human behavior. While noble in its goals, we argue that empirical alignment can inadvertently introduce statistical biases that warrant caution. This position paper thus advocates against naive empirical alignment, offering prescriptive alignment and a posteriori empirical alignment as alternatives. We substantiate our principled argument with tangible examples such as human-centric decoding of language models.
Feb-20-2025