CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models
Aitor Ormazabal, Mikel Artetxe, Eneko Agirre
–arXiv.org Artificial Intelligence
Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in the field, where the highest-quality models are only available as black boxes through inference APIs. Even when the model weights are available, the computational cost of fine-tuning large LMs can be prohibitive for most practitioners. In this work, we present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations. Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network, learned on a small validation set. We validate our approach by adapting a large LM (OPT-30B) to several domains and a downstream task (machine translation), observing improved performance in all cases (up to 9%), while using a domain expert 23x smaller.
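The core idea, combining two LMs at the probability level, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature choice (per-token entropies), the logistic combiner, and all names here are assumptions; the paper learns its small combination network on a validation set.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the vocabulary axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p, eps=1e-12):
    """Shannon entropy of a distribution (a simple per-token feature)."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def combine_probs(p_large, p_small, w, b):
    """Mix two next-token distributions at the probability level.

    The mixing weight lambda is produced by a tiny learned function
    (here a logistic over the two entropies; `w` and `b` would be fit
    on a small validation set, e.g. by minimizing perplexity).
    """
    features = np.array([entropy(p_large), entropy(p_small)])
    lam = 1.0 / (1.0 + np.exp(-(w @ features + b)))  # scalar in (0, 1)
    return lam * p_large + (1.0 - lam) * p_small

# Toy example over a 5-token vocabulary.
rng = np.random.default_rng(0)
p_big = softmax(rng.normal(size=5))  # stand-in for the black-box LM's output
p_dom = softmax(rng.normal(size=5))  # stand-in for the small domain expert
p_mix = combine_probs(p_big, p_dom, w=np.array([0.5, -0.5]), b=0.0)
print(round(p_mix.sum(), 6))  # a convex mixture is still a distribution: 1.0
```

Because the combination is a convex mixture of two valid distributions, the result is always a valid distribution, and only next-token probabilities are needed from the large model, which is what makes the approach compatible with black-box inference APIs.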
May-23-2023