Comparing Few to Rank Many: Active Human Preference Learning using Randomized Frank-Wolfe
Kiran Koshy Thekumparampil, Gaurush Hiranandani, Kousha Kalantari, Shoham Sabach, Branislav Kveton
We study learning of human preferences from limited comparison feedback. This task is ubiquitous in machine learning, and its applications, such as reinforcement learning from human feedback (RLHF), have been transformational. We formulate the problem as learning a Plackett-Luce model over a universe of $N$ choices from $K$-way comparison feedback, where typically $K \ll N$. Our solution is the D-optimal design for the Plackett-Luce objective. The design defines a data logging policy that elicits comparison feedback for a small collection of optimally chosen points from all ${N \choose K}$ feasible subsets. The main algorithmic challenge is that even fast methods for solving D-optimal designs would have $O({N \choose K})$ time complexity. To address this, we propose a randomized Frank-Wolfe (FW) algorithm that solves the linear maximization sub-problems in the FW method over randomly chosen variables. We analyze the algorithm and evaluate it empirically on synthetic and open-source NLP datasets.
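To make the randomized FW idea concrete, here is a minimal sketch (not the authors' implementation). It assumes a linearized surrogate in which each $K$-subset $S$ contributes an information matrix built from pairwise feature differences, and it replaces the exact linear maximization oracle, which would scan all ${N \choose K}$ subsets, with a scan over a small random sample of subsets per iteration. The function name `randomized_fw_design` and the `sample_size` parameter are illustrative choices, not from the paper.

```python
import itertools
import numpy as np

def randomized_fw_design(X, K, n_iters=200, sample_size=50, reg=1e-6, seed=0):
    """Sketch: randomized Frank-Wolfe for a D-optimal design over K-subsets.

    X: (N, d) item features. Each K-subset S contributes an information
    matrix M_S = sum over pairs (i, j) in S of outer(x_i - x_j, x_i - x_j),
    a simplified stand-in for the Plackett-Luce Fisher information.
    The randomized LMO scores only `sample_size` random subsets per step
    instead of all C(N, K) of them.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape

    def info_matrix(S):
        M = np.zeros((d, d))
        for i, j in itertools.combinations(S, 2):
            v = X[i] - X[j]
            M += np.outer(v, v)
        return M

    # Design: weights over subsets, initialized at one random subset.
    S0 = tuple(sorted(rng.choice(N, size=K, replace=False)))
    design = {S0: 1.0}
    V = info_matrix(S0) + reg * np.eye(d)  # regularized design matrix

    for t in range(n_iters):
        Vinv = np.linalg.inv(V)
        # Randomized LMO: the gradient of log det(V) in direction M_S
        # is trace(V^{-1} M_S); maximize it over sampled subsets only.
        best_S, best_val = None, -np.inf
        for _ in range(sample_size):
            S = tuple(sorted(rng.choice(N, size=K, replace=False)))
            val = np.trace(Vinv @ info_matrix(S))
            if val > best_val:
                best_S, best_val = S, val
        gamma = 2.0 / (t + 3)  # standard FW step size
        # FW update: shrink all weights, move mass onto the chosen subset.
        design = {S: (1 - gamma) * w for S, w in design.items()}
        design[best_S] = design.get(best_S, 0.0) + gamma
        V = (1 - gamma) * V + gamma * (info_matrix(best_S) + reg * np.eye(d))

    return design, V
```

The returned `design` is a sparse distribution over subsets (the logging policy), with at most one new support point added per iteration; the per-step cost scales with `sample_size` rather than ${N \choose K}$.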
Dec-26-2024