SPRI: Aligning Large Language Models with Context-Situated Principles
Hongli Zhan, Muneeza Azmat, Raya Horesh, Junyi Jessy Li, Mikhail Yurochkin
Aligning Large Language Models to integrate and reflect human values, especially for tasks that demand intricate human oversight, is arduous, since depending on human expertise for context-specific guidance is resource-intensive and time-consuming. Prior work has utilized predefined sets of rules or principles to steer the behavior of models (Bai et al., 2022; Sun et al., 2023). However, these principles tend to be generic, making them challenging to adapt to each individual input query or context. In this work, we present Situated-PRInciples (SPRI), a framework requiring minimal or no human effort that automatically generates guiding principles in real time for each input query and uses them to align each response. We evaluate SPRI on three tasks and show that 1) SPRI can derive principles in a complex domain-specific task that lead to performance on par with expert-crafted ones; 2) SPRI-generated principles lead to instance-specific rubrics that outperform prior LLM-as-a-judge frameworks; 3) using SPRI to generate synthetic SFT data leads to substantial improvement in truthfulness. We release our code and model generations at https://github.com/honglizhan/SPRI-public.
arXiv.org Artificial Intelligence
Feb-5-2025
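
At a high level, the abstract describes a two-stage pipeline: first derive principles situated to the specific input query, then generate a response conditioned on those principles. The sketch below illustrates that idea only; the prompts, helper functions, and the use of an OpenAI-compatible client are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
"""
Minimal sketch of the two-stage idea described in the abstract:
(1) generate guiding principles situated to the input query, then
(2) generate a response conditioned on those principles.
All prompts and helper names here are illustrative assumptions.
"""
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def chat(system: str, user: str, model: str = "gpt-4o-mini") -> str:
    """Small helper around a chat-completions API call."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content


def generate_principles(query: str) -> str:
    """Stage 1: derive principles tailored to this specific query."""
    return chat(
        system="You write concise guiding principles for answering the user's query.",
        user=f"Query:\n{query}\n\nList the principles a good response should follow.",
    )


def aligned_response(query: str) -> str:
    """Stage 2: answer the query while adhering to the situated principles."""
    principles = generate_principles(query)
    return chat(
        system=f"Follow these principles when responding:\n{principles}",
        user=query,
    )


if __name__ == "__main__":
    print(aligned_response("My friend just lost their job. How should I support them?"))
```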