Language Model Sentence Completion with a Parser-Driven Rhetorical Control Method

Joshua Zingale, Jugal Kalita

arXiv.org Artificial Intelligence 

Controlled text generation (CTG) seeks to guide large language model (LLM) output to produce text that conforms to desired criteria. The current study presents a novel CTG algorithm that enforces adherence to specific rhetorical relations in an LLM sentence-completion setting via a parser-driven decoding scheme that requires no model fine-tuning. The method is validated with both automatic and human evaluation. The code is accessible on GitHub.