Multi-Document Summarization with Centroid-Based Pretraining
Ratish Puduppully, Parag Jain, Nancy F. Chen, Mark Steedman
arXiv.org Artificial Intelligence
In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is a summary of that set. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human-written summaries and can be utilized for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model, Centrum, is better than or comparable to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community at https://github.com/ratishsp/centrum.
May-31-2023
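To make the centroid-based objective concrete, here is a minimal sketch of how such a proxy summary could be selected for a cluster: each document is scored by its average ROUGE F1 against the other documents in the cluster, and the top-scoring document is used as the pretraining target. The helper name `select_rouge_centroid`, the use of the `rouge_score` package, and the choice of ROUGE variants are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of ROUGE-based centroid selection for pretraining pairs.
# Assumptions: a cluster is a list of document strings; the centroid is the
# document with the highest mean ROUGE F1 against the other documents.
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)


def select_rouge_centroid(cluster):
    """Return the document that best 'summarizes' its cluster.

    Each candidate is scored by its mean ROUGE F1 against every other
    document in the cluster; the highest-scoring document serves as the
    proxy summary, so no human-written reference is needed.
    """
    best_doc, best_score = None, float("-inf")
    for i, candidate in enumerate(cluster):
        others = [doc for j, doc in enumerate(cluster) if j != i]
        if not others:  # single-document cluster: it is its own centroid
            return candidate
        score = 0.0
        for reference in others:
            rouge = _scorer.score(reference, candidate)
            score += sum(s.fmeasure for s in rouge.values()) / len(rouge)
        score /= len(others)
        if score > best_score:
            best_doc, best_score = candidate, score
    return best_doc


# Example: build (input, proxy-summary) pretraining pairs from raw clusters.
clusters = [["doc A text ...", "doc B text ...", "doc C text ..."]]
pairs = [(" ".join(cluster), select_rouge_centroid(cluster)) for cluster in clusters]
```

Because the target is drawn from the cluster itself, this construction scales to any corpus of document sets without reference summaries, which is the property the pretraining objective relies on.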