LLM-DA
Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning
Wang, Jiapu; Sun, Kai; Luo, Linhao; Wei, Wei; Hu, Yongli; Liew, Alan Wee-Chung; Pan, Shirui; Yin, Baocai
Temporal Knowledge Graph Reasoning (TKGR) is the process of utilizing temporal information to capture complex relations within a Temporal Knowledge Graph (TKG) to infer new knowledge. Conventional methods in TKGR typically depend on deep learning algorithms or temporal logical rules. However, deep learning-based TKGRs often lack interpretability, whereas rule-based TKGRs struggle to effectively learn temporal rules that capture temporal patterns. Recently, Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning. Consequently, the employment of LLMs for TKGR has sparked increasing interest among researchers. Nonetheless, LLMs are known to function as black boxes, making it challenging to comprehend their reasoning process. Additionally, due to the resource-intensive nature of fine-tuning, promptly updating LLMs to integrate evolving knowledge within TKGs for reasoning is impractical. To address these challenges, in this paper, we propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on TKGs. Specifically, LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules. These rules unveil temporal patterns and facilitate interpretable reasoning. To account for the evolving nature of TKGs, a dynamic adaptation strategy is proposed to update the LLM-generated rules with the latest events. This ensures that the extracted rules always incorporate the most recent knowledge and generalize better to predictions on future events. Experimental results show that, without the need for fine-tuning, LLM-DA significantly improves the accuracy of reasoning over several common datasets, providing a robust framework for TKGR tasks.
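The two core ideas in the abstract, scoring candidate facts with temporal logical rules and then re-weighting those rules against the most recent events, can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the rule format (body relation, head relation, confidence), the strictly-earlier-history filter, and the `alpha`-blended confidence update are all simplifying assumptions.

```python
from collections import defaultdict

def apply_rules(facts, rules, query_subject, query_time):
    """Score candidates for a query (query_subject, ?, ?, query_time).

    facts: iterable of (subject, relation, object, time) quadruples.
    rules: iterable of (body_relation, head_relation, confidence), meaning
    "if (s, body_relation, o) held earlier, (s, head_relation, o) is likely".
    """
    scores = defaultdict(float)
    for s, r, o, t in facts:
        if s != query_subject or t >= query_time:
            continue  # only strictly earlier history may support a prediction
        for body_rel, head_rel, conf in rules:
            if r == body_rel:
                scores[(head_rel, o)] += conf
    return dict(scores)

def update_rule_confidence(rules, recent_facts, alpha=0.5):
    """Dynamic-adaptation sketch: blend each rule's old confidence with its
    support measured on the latest events (hypothetical weighting scheme)."""
    rels_by_pair = defaultdict(set)
    for s, r, o, t in recent_facts:
        rels_by_pair[(s, o)].add(r)
    updated = []
    for body_rel, head_rel, conf in rules:
        hits = sum(1 for rels in rels_by_pair.values()
                   if body_rel in rels and head_rel in rels)
        n = sum(1 for rels in rels_by_pair.values() if body_rel in rels)
        recent_conf = hits / n if n else conf  # fall back when no evidence
        updated.append((body_rel, head_rel,
                        alpha * conf + (1 - alpha) * recent_conf))
    return updated
```

Because the rules are explicit (relation, relation, weight) triples, every prediction can be traced back to the rule and the historical fact that produced it, which is the interpretability argument the abstract makes against purely deep-learning-based TKGR.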
LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition
Ye, Junjie; Xu, Nuo; Wang, Yikun; Zhou, Jie; Zhang, Qi; Gui, Tao; Huang, Xuanjing
Despite the impressive capabilities of large language models (LLMs), their performance on information extraction tasks is still not entirely satisfactory. However, their remarkable rewriting capabilities and extensive world knowledge offer valuable insights to improve these tasks. In this paper, we propose LLM-DA, a novel data augmentation technique based on LLMs for the few-shot NER task. To overcome the limitations of existing data augmentation methods that compromise semantic integrity and address the uncertainty inherent in LLM-generated text, we leverage the distinctive characteristics of the NER task by augmenting the original data at both the contextual and entity levels. Our approach involves employing 14 contextual rewriting strategies, designing entity replacements of the same type, and incorporating noise injection to enhance robustness. Extensive experiments demonstrate the effectiveness of our approach in enhancing NER model performance with limited data. Furthermore, additional analyses provide further evidence supporting the assertion that the quality of the data we generate surpasses that of other existing methods.
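The entity-level half of the augmentation described above, replacing each labeled entity with another entity of the same type while keeping the label spans consistent, can be sketched as follows. This is a minimal illustration, not the paper's code: the BIO tagging scheme, the `entity_pool` mapping, and the seeded RNG are assumptions for the example.

```python
import random

def augment_entity_level(tokens, labels, entity_pool, rng=None):
    """Replace each BIO-tagged entity with a same-type entity.

    tokens/labels: parallel lists in BIO format (e.g. "B-PER", "I-PER", "O").
    entity_pool: dict mapping entity type -> list of surface-form strings.
    """
    rng = rng or random.Random(0)
    out_tokens, out_labels = [], []
    i = 0
    while i < len(tokens):
        label = labels[i]
        if label.startswith("B-"):
            etype = label[2:]
            j = i + 1  # advance past the full entity span
            while j < len(tokens) and labels[j] == f"I-{etype}":
                j += 1
            replacement = rng.choice(entity_pool[etype]).split()
            out_tokens.extend(replacement)
            # re-emit BIO labels sized to the (possibly multi-word) replacement
            out_labels.extend([f"B-{etype}"] +
                              [f"I-{etype}"] * (len(replacement) - 1))
            i = j
        else:
            out_tokens.append(tokens[i])
            out_labels.append(label)
            i += 1
    return out_tokens, out_labels
```

Because only entity spans change and the surrounding context is left intact, the augmented sentence keeps its semantics, which is the semantic-integrity property the abstract contrasts with cruder augmentation methods.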