Serial Contrastive Knowledge Distillation for Continual Few-shot Relation Extraction

Xinyi Wang, Zitao Wang, Wei Hu

arXiv.org Artificial Intelligence 

Conventional RE methods (Heist and Paulheim, 2017; Zhang et al., 2018) mainly assume a fixed pre-defined relation set and train on a fixed dataset. However, they cannot work well with the new relations that continue emerging in some real-world scenarios of RE. Continual RE (Wang et al., 2019; Han et al., 2020; Wu et al., 2021) was proposed as a new paradigm to handle this situation, applying the idea of continual learning (Parisi et al., 2019) to the field of RE. Compared with conventional RE, continual RE is more challenging: it requires the model to learn emerging relations while maintaining a stable and accurate classification of old relations, i.e., to overcome the so-called catastrophic forgetting problem (Thrun and Mitchell, 1995).

Therefore, the continual few-shot RE paradigm (Qin and Joty, 2022) was proposed to simulate real human learning scenarios, where new knowledge can be acquired from a small number of new samples. As illustrated in Figure 1, this paradigm expects the model to continuously learn new relations through abundant training data only for the first task, but through sparse training data for all subsequent tasks. Thus, the model needs to identify the growing set of relations well with only few labeled samples for the new ones, while retaining its knowledge of old relations without re-training from scratch. As the relations grow, confusion among relation representations leads to catastrophic forgetting.
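To make the task setup concrete, below is a minimal, hypothetical sketch of the continual few-shot RE protocol described above: the model trains on a sequence of tasks, only the first of which provides abundant labeled data, and after each task it is evaluated on all relations seen so far, which is where catastrophic forgetting becomes visible. The `Task` class, `run_stream` function, and the `model.fit`/`model.evaluate` interface are illustrative assumptions, not the authors' actual code.

```python
from typing import Dict, List, Tuple

# A sample is (sentence with a marked entity pair, relation label).
Sample = Tuple[str, str]

class Task:
    """One step in the task stream: a set of new relations plus training data.
    For task 1 the data is abundant; for later tasks it is N-way K-shot."""
    def __init__(self, relations: List[str], train: List[Sample]):
        self.relations = relations
        self.train = train

def run_stream(model, tasks: List[Task], test: Dict[str, List[Sample]]) -> None:
    """Train on tasks sequentially (no access to earlier tasks' full data),
    then evaluate after each task on ALL relations observed so far."""
    seen: List[str] = []
    for k, task in enumerate(tasks, start=1):
        model.fit(task.train)                  # few-shot update for k >= 2
        seen.extend(task.relations)
        eval_set = [s for r in seen for s in test[r]]
        acc = model.evaluate(eval_set)         # accuracy over old + new relations
        print(f"After task {k}: {len(seen)} relations, accuracy {acc:.3f}")
```

The key constraint captured here is that each `fit` call sees only the current task's few samples, so resistance to forgetting must come from the model itself (e.g., via knowledge distillation, as the paper's title suggests) rather than from re-training on all old data.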
