Adapting Prompt for Few-shot Table-to-Text Generation
Zhixin Guo, Mingxuan Yan, Jiexing Qi, Jianping Zhou, Ziwei He, Zhouhan Lin, Guanjie Zheng, Xinbing Wang
arXiv.org Artificial Intelligence
Pretrained language models (PLMs) have made remarkable progress on table-to-text generation tasks. However, the lack of domain-specific knowledge makes it challenging to bridge the topological gap between tabular data and text, especially in real-world applications with limited resources. To mitigate the limitation of insufficient labeled data, we propose a novel framework, Adapt-Prompt-to-Generate (AdaPTGen). The core insight of AdaPTGen is to adapt prompt templates of domain-specific knowledge into the model, which brings at least three benefits: (1) it injects representations of typical table-related descriptions to bridge the topological gap between tabular data and text; (2) it fully exploits large amounts of unlabeled domain-specific knowledge, alleviating the PLMs' inherent shortage of domain knowledge; (3) it allows us to design various tasks to explore that domain-specific knowledge. Extensive experiments and analyses are conducted on three open-domain few-shot natural language generation (NLG) datasets: Humans, Songs, and Books. Compared with previous state-of-the-art approaches, our model achieves superior performance in terms of both fluency and accuracy.
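To make the core idea concrete, the sketch below shows the general prompt-template approach the abstract describes: linearizing an attribute-value table into a textual prompt and feeding it to a PLM for generation. This is a minimal illustration, not the authors' released implementation; the template wording, the choice of T5, and the example table values are all assumptions for demonstration.

```python
# Minimal sketch (not the authors' code): linearize an attribute-value
# table into a prompt template and generate a description with a
# pretrained seq2seq PLM. Template wording and model choice are assumed.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def table_to_prompt(table):
    """Flatten (attribute, value) pairs into a single textual prompt."""
    slots = " ; ".join(f"{attr} : {val}" for attr, val in table)
    return f"Describe the following table. {slots}"

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical table in the style of the Humans dataset.
table = [("name", "Ada Lovelace"),
         ("occupation", "mathematician"),
         ("nationality", "British")]

inputs = tokenizer(table_to_prompt(table), return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the few-shot setting the paper targets, such templates would additionally encode domain-specific descriptions drawn from unlabeled data; the flat "attribute : value" serialization here stands in for that richer adaptation step.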
Aug-2-2023