PromptSum: Parameter-Efficient Controllable Abstractive Summarization
Mathieu Ravaut, Hailin Chen, Ruochen Zhao, Chengwei Qin, Shafiq Joty, Nancy Chen
Prompt tuning (PT), a parameter-efficient technique that tunes only additional prompt embeddings while keeping the backbone pre-trained language model (PLM) frozen, has shown promising results in language understanding tasks, especially in low-resource scenarios. However, effective prompt design methods suitable for generation tasks such as summarization are still lacking. At the same time, summarization guided through instructions (discrete prompts) can achieve a desirable double objective of high quality and controllability in summary generation. Toward the goal of strong summarization performance under the triple conditions of parameter efficiency, data efficiency, and controllability, we introduce PromptSum, a method combining PT with a multi-task objective and discrete entity prompts for abstractive summarization. Our model achieves competitive ROUGE results on popular abstractive summarization benchmarks coupled with a strong level of controllability through entities, all while tuning several orders of magnitude fewer parameters.
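
To make the abstract's setup concrete, below is a minimal sketch of the two ingredients it describes: soft prompt embeddings prepended to the encoder input of a frozen seq2seq PLM (prompt tuning), and desired entities injected as a discrete prompt in the source text (entity control). The checkpoint name, prompt length, and entity-separator format are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of prompt tuning for seq2seq summarization with Hugging Face
# transformers. Assumptions: any encoder-decoder PLM checkpoint works;
# prompt_len and the "entity => text" format are hypothetical choices.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-base"  # illustrative backbone, not the paper's
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Freeze every backbone parameter; only the soft prompt is trained.
for p in model.parameters():
    p.requires_grad = False

prompt_len = 100  # number of soft prompt tokens (hyperparameter)
embed_dim = model.config.d_model
soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

def forward_with_prompt(input_ids, attention_mask, labels):
    # Embed the source tokens, then prepend the tunable prompt embeddings.
    inputs_embeds = model.get_input_embeddings()(input_ids)
    batch = inputs_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    inputs_embeds = torch.cat([prompt, inputs_embeds], dim=1)
    prompt_mask = torch.ones(batch, prompt_len, dtype=attention_mask.dtype)
    attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds,
                 attention_mask=attention_mask, labels=labels)

# Entity control via a discrete prompt: prepend target entities to the source.
src = "EntityA | EntityB => " + "Article text to summarize ..."
enc = tokenizer(src, return_tensors="pt")
tgt = tokenizer("Reference summary.", return_tensors="pt").input_ids
out = forward_with_prompt(enc.input_ids, enc.attention_mask, tgt)
out.loss.backward()  # gradients reach only soft_prompt; the PLM stays frozen
```

Under this setup, the trainable state is just the prompt_len x embed_dim prompt matrix, which is how the method tunes orders of magnitude fewer parameters than full fine-tuning.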
arXiv.org Artificial Intelligence
Aug-6-2023