Controllable Text Generation in the Instruction-Tuning Era
Dhananjay Ashok, Barnabas Poczos
arXiv.org Artificial Intelligence
While most research on controllable text generation has focused on steering base Language Models, the emerging instruction-tuning and prompting paradigm offers an alternative approach to controllability. We compile and release ConGenBench, a testbed of 17 different controllable generation tasks, and use a subset of it to benchmark the performance of 9 different baselines and methods on Instruction-tuned Language Models. To our surprise, we find that prompting-based approaches outperform controllable text generation methods on most datasets and tasks, highlighting a need for research on controllable text generation specifically with Instruction-tuned Language Models. Prompt-based approaches match human performance on most stylistic tasks while lagging on structural tasks, foregrounding a need to study more varied constraints and more challenging stylistic tasks. To facilitate such research, we provide an algorithm that uses only a task dataset and a Large Language Model with in-context capabilities to automatically generate a constraint dataset. This method eliminates the field's dependence on pre-curated constraint datasets, vastly expanding the range of constraints that can be studied in the future.
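The constraint-dataset idea from the abstract can be illustrated with a minimal sketch, assuming a few-shot labeling prompt, a "formality" constraint, and an `llm_generate` text-in/text-out wrapper; all of these names and the prompt wording are hypothetical stand-ins, not the authors' actual algorithm. The sketch pairs each task example with a constraint label produced by an LLM using in-context demonstrations.

```python
# Hedged sketch: building a constraint dataset from a task dataset with an
# in-context LLM. The constraint ("formality"), few-shot prompt, and the
# llm_generate callable are illustrative assumptions.

from typing import Callable, Dict, List

FEW_SHOT = (
    "Label the formality of each sentence as 'formal' or 'informal'.\n"
    "Sentence: We hereby request a prompt response.\nLabel: formal\n"
    "Sentence: gonna grab lunch, brb\nLabel: informal\n"
)

def build_constraint_dataset(
    task_texts: List[str],
    llm_generate: Callable[[str], str],  # any text-in/text-out LLM wrapper
) -> List[Dict[str, str]]:
    """Pair each task example with an LLM-produced constraint label."""
    dataset = []
    for text in task_texts:
        prompt = f"{FEW_SHOT}Sentence: {text}\nLabel:"
        reply = llm_generate(prompt).strip()
        # Keep only the first token of the reply as the label.
        label = reply.split()[0].lower() if reply else "unknown"
        dataset.append({"text": text, "constraint": label})
    return dataset

# Usage: pairs = build_constraint_dataset(["See you soon!"], my_llm)
```

Because the only task-specific ingredients are the prompt and the examples themselves, swapping in a different few-shot prompt yields a dataset for a different constraint without any pre-curated labels.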
May 2, 2024