Boosting the Potential of Large Language Models with an Intelligent Information Assistant
Yujia Zhou
–Neural Information Processing Systems
The emergence of Large Language Models (LLMs) has significantly advanced natural language processing, but these models often generate factually incorrect information, a phenomenon known as "hallucination". Initial retrieval-augmented generation (RAG) methods, such as the "Retrieve-Read" framework, were inadequate for complex reasoning tasks. Subsequent prompt-based RAG strategies and Supervised Fine-Tuning (SFT) methods improved performance but required frequent retraining and risked altering foundational LLM capabilities.
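To make the single-pass "Retrieve-Read" pattern mentioned above concrete, here is a minimal sketch of that pipeline. The toy corpus, the keyword-overlap retriever, and the generate() stub are illustrative assumptions, not the paper's implementation; a real system would use a trained retriever and an actual LLM call.

```python
# Minimal sketch of the "Retrieve-Read" RAG pattern: retrieve evidence once,
# then answer in a single generation pass. All names here are illustrative.

from typing import List


def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap and return the top-k."""
    q_terms = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )[:k]


def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a model here."""
    return f"[LLM answer conditioned on a prompt of {len(prompt)} characters]"


def retrieve_read(query: str, corpus: List[str]) -> str:
    """Retrieve-Read: fetch evidence, build one prompt, generate one answer."""
    evidence = retrieve(query, corpus)
    prompt = "Context:\n" + "\n".join(evidence) + f"\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)


if __name__ == "__main__":
    docs = [
        "Retrieval-augmented generation grounds LLM outputs in external documents.",
        "Supervised fine-tuning adapts model weights to new tasks.",
        "Hallucination refers to fluent but factually incorrect model output.",
    ]
    print(retrieve_read("What is hallucination in LLMs?", docs))
```

Because retrieval happens only once and the reader answers in a single pass, this pattern struggles with the multi-step reasoning tasks the abstract highlights, which is what motivated the later prompt-based and fine-tuned RAG variants.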
May-28-2025, 20:59:00 GMT