TaxoLLaMA: WordNet-based Model for Solving Multiple Lexical Semantic Tasks
Moskvoretskii, Viktor, Neminova, Ekaterina, Lobanova, Alina, Panchenko, Alexander, Nikishina, Irina
–arXiv.org Artificial Intelligence
In this paper, we explore the capabilities of LLMs in capturing lexical-semantic knowledge from WordNet, using the LLaMA-2-7b model as an example, and test it on multiple lexical semantic tasks. As the outcome of our experiments, we present TaxoLLaMA, an everything-in-one model that is lightweight due to 4-bit quantization and LoRA. It achieves 11 SotA results and 4 top-2 results out of 16 tasks on Taxonomy Enrichment, Hypernym Discovery, Taxonomy Construction, and Lexical Entailment. Moreover, it demonstrates very strong zero-shot performance on Lexical Entailment and Taxonomy Construction with no fine-tuning. We also explore its hidden multilingual and domain adaptation capabilities with a little tuning or few-shot learning. All datasets, code, and the model are available online at https://github.com/VityaVitalich/TaxoLLaMA
Jun-17-2024