Instruct and Extract: Instruction Tuning for On-Demand Information Extraction
Yizhu Jiao, Ming Zhong, Sha Li, Ruining Zhao, Siru Ouyang, Heng Ji, Jiawei Han
arXiv.org Artificial Intelligence
Large language models with instruction-following capabilities open the door to a wider group of users. However, when it comes to information extraction, a classic task in natural language processing, most task-specific systems cannot align well with long-tail, ad hoc extraction use cases for non-expert users. To address this, we propose a novel paradigm, termed On-Demand Information Extraction, to fulfill the personalized demands of real-world users. Our task aims to follow the instructions to extract the desired content from the associated text and present it in a structured tabular format. The table headers can either be user-specified or inferred contextually by the model. To facilitate research in this emerging area, we present a benchmark named InstructIE, including both automatically generated training data and a human-annotated test set. Building on InstructIE, we further develop an On-Demand Information Extractor, ODIE. Comprehensive evaluations on our benchmark reveal that ODIE substantially outperforms existing open-source models of similar size. Our code and dataset are released at https://github.com/yzjiao/On-Demand-IE.
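To make the task format concrete, the sketch below illustrates the input/output shape the abstract describes: a user instruction plus source text go in, and a structured table comes out. The prompt template, the markdown table syntax, and the example content are illustrative assumptions, not ODIE's actual format.

```python
# Hypothetical sketch of the On-Demand IE interface: instruction + text -> table.
# The template and table syntax here are assumptions for illustration only.

def build_prompt(instruction: str, text: str) -> str:
    """Combine a user instruction and the associated text into one prompt."""
    return f"Instruction: {instruction}\nText: {text}\nTable:"

def parse_table(markdown: str) -> list[dict[str, str]]:
    """Parse a pipe-delimited markdown table into a list of row dicts."""
    lines = [ln.strip() for ln in markdown.strip().splitlines() if ln.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for ln in lines[2:]:  # skip the |---|---| separator row
        cells = [c.strip() for c in ln.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

# Example: headers may be user-specified (as here) or left for the model to infer.
prompt = build_prompt(
    "Extract each medication and its dosage into a table with columns Drug and Dosage.",
    "The patient takes 81 mg of aspirin daily.",
)
model_output = """
| Drug | Dosage |
| --- | --- |
| Aspirin | 81 mg |
"""
table = parse_table(model_output)
```

A model fine-tuned on InstructIE-style pairs would produce the tabular response itself; the parser here only shows how such output maps back to structured records.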
Oct-24-2023