Can Structured Data Reduce Epistemic Uncertainty?
Shriram M S, Sushmitha S, Gayathri K S, Shahina A
–arXiv.org Artificial Intelligence
In the current era of Large Language Models (LLMs), with an abundance of data, there is always a tricky question to be addressed: Is providing an abundance of data enough to solve complex tasks? The majority of modern-day models are fundamentally probabilistic, which, though highly powerful in its way, gives the model only an uncertain output that cannot be reasoned out. This uncertainty is of 2 types, epistemic (EU) and aleatoric (AU), where the former is also called reducible uncertainty, caused due to the lack of […]

One of the main issues with the current retrieval approaches using Retrieval-Augmented Generation is hallucination, where the model gives out irrelevant, incorrect, and unreal responses. By incorporating subsumptions in the prompt, we ensure hallucination is minimized and the response of the Language Model is more contextually and factually intact. Section 4 presents key insights from our experimentation with ontologies in the medical domain, demonstrating how our methodology could be used for quicker training and reducing hallucinations in LLMs.
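The idea of "incorporating subsumptions in the prompt" can be sketched as serializing ontology is-a (subclass) relations into the context given to the LLM. The function name, prompt wording, and medical-domain axioms below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: prepending ontology subsumption (is-a) axioms to an
# LLM prompt so the model's answer is grounded in the structured data.
# build_prompt and the axiom list are assumed names/examples, not the
# paper's actual implementation.

def build_prompt(question: str, subsumptions: list[tuple[str, str]]) -> str:
    """Serialize (subclass, superclass) pairs as context ahead of the question."""
    context = "\n".join(f"- Every {sub} is a {sup}." for sub, sup in subsumptions)
    return (
        "Use only the ontology facts below when answering.\n"
        f"Ontology facts:\n{context}\n\n"
        f"Question: {question}"
    )

# Illustrative medical-domain subsumptions (assumed, for demonstration only).
axioms = [
    ("myocardial infarction", "heart disease"),
    ("heart disease", "cardiovascular disorder"),
]
prompt = build_prompt(
    "Is myocardial infarction a cardiovascular disorder?", axioms
)
print(prompt)
```

Chaining the two axioms lets the model answer by transitivity over stated facts rather than by unconstrained generation, which is the mechanism the abstract credits with reducing hallucination.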
Oct-14-2024