Ingest-And-Ground: Dispelling Hallucinations from Continually-Pretrained LLMs with RAG
Fang, Chenhao, Larson, Derek, Zhu, Shitong, Zeng, Sophie, Summer, Wendy, Peng, Yanqing, Hulovatyy, Yuriy, Rao, Rajeev, Forgues, Gabriel, Pudota, Arya, Goncalves, Alex, Robert, Hervé
–arXiv.org Artificial Intelligence
This paper presents methods that can improve the efficiency of privacy processes using LLMs and RAG. To reduce hallucination, we continually pre-train the base LLM on a privacy-specific knowledge base and then augment it with a semantic RAG layer. Our evaluations show that this approach improves model performance on privacy-related queries (up to doubling some metrics relative to an out-of-the-box LLM) by grounding responses in factual information, which reduces inaccuracies.
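The grounding step described above can be illustrated with a minimal sketch: retrieve the knowledge-base passages most similar to the query, then prepend them to the prompt so the model answers from the provided context rather than parametric memory. The paper's actual embedding model and knowledge base are not specified here; the bag-of-words similarity, the `kb` entries, and all function names below are illustrative assumptions.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real semantic RAG layer would use a
    # learned dense encoder (assumption: any text embedder fits this slot).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    # Rank knowledge-base passages by similarity to the query; keep top-k.
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, knowledge_base: list[str]) -> str:
    # Prepend retrieved passages so the LLM is steered to answer from the
    # supplied factual context, which is what reduces hallucination.
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

# Hypothetical privacy-domain knowledge base for illustration.
kb = [
    "Data subject access requests must be fulfilled within 30 days.",
    "Personal data includes any information relating to an identified person.",
    "The model was trained on public web text.",
]
print(grounded_prompt("How long to fulfill a data subject access request?", kb))
```

In the paper's pipeline this retrieval layer sits on top of a continually pre-trained base model, so the retrieved context and the model's domain knowledge reinforce each other.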
Oct-11-2024