Self-supervised Graph Masking Pre-training for Graph-to-Text Generation
Large-scale pre-trained language models (PLMs) have advanced Graph-to-Text (G2T) generation by processing a linearised version of the graph. However, linearisation is known to discard the graph's structural information. Additionally, PLMs are typically pre-trained on free text, which introduces a domain mismatch between pre-training and downstream G2T generation tasks. To address these shortcomings, we propose graph masking pre-training strategies that neither require supervision signals nor modify the architecture of the underlying pre-trained encoder-decoder model. When used with a pre-trained T5, our approach achieves new state-of-the-art results on the WebNLG+2020 and EventNarrative G2T generation datasets. Our method also proves highly effective in the low-resource setting.
arXiv.org Artificial Intelligence
Oct-19-2022
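The abstract does not spell out the masking procedure, so the following is only a rough sketch of what self-supervised graph masking over a linearised graph might look like. It assumes a common `<H>/<R>/<T>` triple linearisation and T5-style span corruption with sentinel tokens; the triples, the masking probability, and all function names are hypothetical, not the paper's actual method.

import random

# Hypothetical example graph: a set of (head, relation, tail) triples.
triples = [
    ("Alan Bean", "occupation", "Test pilot"),
    ("Alan Bean", "mission", "Apollo 12"),
]

def linearise(triples):
    """Linearise a graph as a token sequence (one common convention;
    the paper's exact format is not given in this abstract)."""
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

def mask_components(triples, p=0.3, seed=0):
    """Self-supervised graph masking sketch: randomly replace graph
    components (entities or relations) with T5 sentinel tokens; the
    masked spans become the decoding target, as in T5 span corruption.
    No supervision signal or architecture change is needed."""
    rng = random.Random(seed)
    source, target, sid = [], [], 0
    for h, r, t in triples:
        for tag, span in (("<H>", h), ("<R>", r), ("<T>", t)):
            if rng.random() < p:
                sentinel = f"<extra_id_{sid}>"  # real T5 sentinel format
                source.append(f"{tag} {sentinel}")
                target.append(f"{sentinel} {span}")
                sid += 1
            else:
                source.append(f"{tag} {span}")
    return " ".join(source), " ".join(target)

src, tgt = mask_components(triples)
print("input :", src)   # linearised graph with masked components
print("target:", tgt)   # sentinel-delimited masked spans

Because the input/target pairs are derived from the graph itself, such a scheme requires no labelled text, which is consistent with the abstract's claim that the pre-training needs no supervision signals.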