Template-free Prompt Tuning for Few-shot NER
Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, Xuanjing Huang
arXiv.org Artificial Intelligence
Prompt-based methods have been successfully applied to sentence-level few-shot learning tasks, largely owing to the sophisticated design of templates and label words. However, when applied to token-level labeling tasks such as NER, enumerating template queries over all potential entity spans is prohibitively time-consuming. In this work, we propose a more elegant method that reformulates NER as an LM problem without any templates. Specifically, we discard the template construction process while retaining the word-prediction paradigm of pre-trained models, predicting a class-related pivot word (or label word) at each entity position. We also explore principled ways to automatically search for appropriate label words that pre-trained models can easily adapt to. Besides avoiding the complicated template-based procedure, the proposed LM objective reduces the gap between the pre-training and fine-tuning objectives, which in turn benefits few-shot performance. Experimental results demonstrate the effectiveness of the proposed method over BERT-tagger and the template-based method under the few-shot setting. Moreover, decoding with the proposed method is up to 1930.12 times faster than with the template-based method.
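The core idea described in the abstract, training a masked LM so that entity positions predict a class-related label word while non-entity positions predict their original token, can be sketched roughly as follows. This is an illustrative sketch, not the authors' released code: the toy sentence, the tag scheme, and the label words ("person", "location") are assumptions made here for clarity, whereas the paper searches for suitable label words automatically.

```python
# Sketch of template-free prompt tuning for NER (LM-as-NER):
# fine-tune a masked LM on the unmasked sentence so that entity
# positions target a class label word and all other positions
# target their original token.
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")

# Hypothetical training example: "John lives in Berlin" with PER/LOC tags.
tokens = ["John", "lives", "in", "Berlin"]
tags = ["PER", "O", "O", "LOC"]
label_words = {"PER": "person", "LOC": "location"}  # assumed; the paper searches for these

enc = tokenizer(tokens, is_split_into_words=True, return_tensors="pt")
labels = enc["input_ids"].clone()

for pos, widx in enumerate(enc.word_ids()):
    if widx is None:
        labels[0, pos] = -100  # ignore special tokens ([CLS], [SEP])
    elif tags[widx] != "O":
        # Entity position: the target is the class's label word
        # instead of the original token.
        labels[0, pos] = tokenizer.convert_tokens_to_ids(label_words[tags[widx]])
    # Non-entity positions keep their original token id as the target.

loss = model(**enc, labels=labels).loss  # standard MLM cross-entropy
loss.backward()
```

Because decoding under this formulation is a single forward pass per sentence rather than one template query per candidate entity span, it avoids the enumeration cost that makes template-based NER slow, which is consistent with the speedup reported above.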
Nov-23-2022