Enhancing Multi-Agent Consensus through Third-Party LLM Integration: Analyzing Uncertainty and Mitigating Hallucinations in Large Language Models
Abstract: Large Language Models (LLMs) still struggle with complex reasoning tasks, often producing hallucinations that limit their practical application. To alleviate this issue, this paper proposes a new method that integrates different LLMs to expand the knowledge boundary, reduce dependence on a single model, and promote in-depth debate among agents. The main contributions are: 1) introducing third-party LLMs that adjust the attention weights of agents through uncertainty estimation and confidence analysis, optimizing consensus formation in multi-agent systems; 2) experiments on arithmetic datasets validating the effectiveness of the method, which surpasses traditional multi-agent baselines. This research provides a new perspective on alleviating hallucination in large models when dealing with complex tasks.

In these systems, multiple agents articulate their arguments while a neutral moderator oversees the debate process to facilitate the attainment of a final resolution [2]. By employing multiple instances of language models and engaging in several rounds of proposal and debate regarding … Therefore, this paper raises a question: can a third-party model be introduced to complete tasks through the collaboration of multiple Large Language Models?
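The abstract sketches a mechanism in which a third-party model scores each debating agent's confidence and those scores steer the consensus. The excerpt gives no formulas, so the following is only a minimal illustrative sketch: the softmax weighting, the function names, and the scalar confidence scores are all assumptions, not the paper's actual method.

```python
import math
from collections import defaultdict

def third_party_weights(confidences, temperature=1.0):
    """Turn hypothetical third-party confidence scores into softmax weights."""
    exps = [math.exp(c / temperature) for c in confidences]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_consensus(answers, weights):
    """Return the answer accumulating the largest total weight."""
    score = defaultdict(float)
    for answer, weight in zip(answers, weights):
        score[answer] += weight
    return max(score, key=score.get)

# Three debating agents propose answers to an arithmetic task;
# a third-party model (simulated here) assigns confidence scores.
answers = ["42", "42", "41"]
confidences = [0.9, 0.8, 0.3]  # hypothetical third-party scores
weights = third_party_weights(confidences)
print(weighted_consensus(answers, weights))  # -> 42
```

In a real multi-round debate the weights would presumably be recomputed each round as agents revise their arguments; this sketch only shows a single aggregation step.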
arXiv.org Artificial Intelligence
Nov-25-2024