Jump Starting Bandits with LLM-Generated Prior Knowledge
Alamdari, Parand A., Cao, Yanshuai, Wilson, Kevin H.
arXiv.org Artificial Intelligence
We present substantial evidence demonstrating the benefits of integrating Large Language Models (LLMs) with a contextual multi-armed bandit framework. Contextual bandits are widely used in recommendation systems to generate personalized suggestions based on user-specific contexts. We show that LLMs, pre-trained on extensive corpora rich in human knowledge and preferences, can simulate human behaviour well enough to jump-start contextual multi-armed bandits and reduce online learning regret. We propose an initialization algorithm for contextual bandits that prompts an LLM to produce a pre-training dataset of approximate human preferences, which significantly reduces both online learning regret and the cost of gathering training data. We validate our approach empirically through two sets of experiments with different bandit setups: one in which an LLM serves as the oracle, and a real-world experiment using data from a conjoint survey.
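The abstract describes warm-starting a contextual bandit from an offline, LLM-generated preference dataset. The sketch below illustrates the general idea for a LinUCB-style linear bandit: offline (context, arm, reward) triples initialize each arm's ridge-regression statistics before any online interaction. This is a minimal illustration under assumed conventions, not the paper's actual algorithm; in particular, `pretrain_X`, `pretrain_arms`, and `pretrain_rewards` stand in for preferences that the paper elicits by prompting an LLM.

```python
import numpy as np

def warm_start_linucb(d, n_arms, pretrain_X, pretrain_arms, pretrain_rewards, lam=1.0):
    """Initialize per-arm ridge-regression statistics (A, b) from an
    offline dataset, instead of starting from the uninformative prior.

    Hypothetical stand-in for an LLM-generated pre-training dataset:
    pretrain_X       -- array of shape (n, d), one context per row
    pretrain_arms    -- length-n sequence of arm indices
    pretrain_rewards -- length-n sequence of simulated rewards
    """
    A = [lam * np.eye(d) for _ in range(n_arms)]   # Gram matrices
    b = [np.zeros(d) for _ in range(n_arms)]       # reward-weighted context sums
    for x, a, r in zip(pretrain_X, pretrain_arms, pretrain_rewards):
        A[a] += np.outer(x, x)
        b[a] += r * x
    return A, b

def select_arm(A, b, x, alpha=1.0):
    """Standard LinUCB selection: estimated reward plus an exploration bonus."""
    scores = []
    for A_a, b_a in zip(A, b):
        A_inv = np.linalg.inv(A_a)
        theta = A_inv @ b_a                        # ridge estimate for this arm
        scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
    return int(np.argmax(scores))
```

With informative pre-training data, the bandit's first online choices already reflect the simulated preferences, which is the mechanism by which early regret is reduced.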
Jun-27-2024