Leveraging the Power of Large Language Models in Entity Linking via Adaptive Routing and Targeted Reasoning
Yajie Li, Albert Galimov, Mitra Datta Ganapaneni, Pujitha Thejaswi, De Meng, Priyanshu Kumar, Saloni Potdar
–arXiv.org Artificial Intelligence
Entity Linking (EL) has traditionally relied on large annotated datasets and extensive model fine-tuning. While recent few-shot methods leverage large language models (LLMs) through prompting to reduce training requirements, they often suffer from inefficiencies due to expensive LLM-based reasoning. ARTER (Adaptive Routing and Targeted Entity Reasoning) presents a structured pipeline that achieves high performance without deep fine-tuning by strategically combining candidate generation, context-based scoring, adaptive routing, and selective reasoning. ARTER computes a small set of complementary signals (both embedding- and LLM-based) over the retrieved candidates to categorize contextual mentions into easy and hard cases, which are then handled by a low-cost entity linker (e.g., ReFinED) and by more expensive targeted LLM-based reasoning, respectively. On standard benchmarks, ARTER outperforms ReFinED by up to +4.47%, with an average gain of +2.53% on 5 out of 6 datasets, and performs comparably to pipelines that apply LLM-based reasoning to all mentions, while being twice as efficient in terms of the number of LLM tokens.
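The adaptive-routing idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual rule: it assumes the "complementary signals" are collapsed into a single confidence score per candidate, and that a mention is "easy" when the top candidate clearly separates from the runner-up by some margin. The `Candidate` type, the `route` function, and the `margin_threshold` value are all invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """A retrieved candidate entity with a combined confidence score
    (hypothetical aggregation of embedding- and LLM-based signals)."""
    entity_id: str
    score: float


def route(candidates: list[Candidate], margin_threshold: float = 0.15) -> str:
    """Classify a mention as 'easy' or 'hard' (illustrative margin rule).

    'easy' mentions would go to a cheap entity linker such as ReFinED;
    'hard' (ambiguous) mentions would be escalated to targeted
    LLM-based reasoning, as in the ARTER pipeline description.
    """
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    if len(ranked) == 1 or ranked[0].score - ranked[1].score >= margin_threshold:
        return "easy"
    return "hard"
```

Routing only the ambiguous mentions to the LLM is what yields the token savings the abstract reports: the expensive model sees a fraction of the mentions, while clear-cut cases are resolved by the lightweight linker.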
Nov-20-2025