Multi-Document Summarization with Centroid-Based Pretraining
Ratish Puduppully, Parag Jain, Nancy F. Chen, Mark Steedman
arXiv.org Artificial Intelligence
In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is its summary. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human-written summaries and can be utilized for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model, Centrum, is better than or comparable to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community at https://github.com/ratishsp/centrum.
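The centroid selection the abstract describes can be sketched in a few lines: pick the document in a cluster that maximizes average ROUGE overlap with the other documents, and use it as the proxy summary. The sketch below uses a simple ROUGE-1 F1 implemented from scratch; the paper may combine other ROUGE variants, and the function names (`rouge1_f1`, `select_centroid`) are illustrative, not from the released code.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between two texts (illustrative)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def select_centroid(cluster: list[str]) -> str:
    """Return the document with the highest average ROUGE-1 F1
    against the other documents in the cluster (proxy summary)."""
    best_doc, best_score = cluster[0], -1.0
    for i, doc in enumerate(cluster):
        others = [d for j, d in enumerate(cluster) if j != i]
        score = sum(rouge1_f1(doc, o) for o in others) / len(others)
        if score > best_score:
            best_doc, best_score = doc, score
    return best_doc
```

During pretraining, the selected centroid serves as the target, and the remaining documents in the cluster form the input, so no human-written summaries are needed.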
May-31-2023