Beyond Majority Voting: LLM Aggregation by Leveraging Higher-Order Information
Rui Ai, Yuqi Pan, David Simchi-Levi, Milind Tambe, Haifeng Xu
arXiv.org Artificial Intelligence
The aggregation of responses from multiple large language models has been widely used in practice. For example, popular applications include improving reasoning via multi-agent LLM debate [Khan et al., 2024, Subramaniam et al., 2025, Choi et al., 2025] and LLM councils [Zhao et al., 2024]. Previous works have mostly employed the simple majority voting (MV) rule as a natural first instinct for aggregating different LLMs' responses into a single answer. Intuitively, MV can be viewed as a zero-order aggregation method: it depends only on the observed answers and fails to account for heterogeneity and correlation among models, which are often captured by higher-order information such as LLMs' expected accuracies (first-order information) and answer correlations (second-order information). This raises a natural question: is it possible to leverage such higher-order information to develop better methods for aggregating LLMs' responses?
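To illustrate the distinction the abstract draws, the sketch below contrasts zero-order majority voting with a simple first-order scheme that weights each model's answer by the log-odds of its expected accuracy. This is a minimal illustration of why first-order information can change the aggregate answer, not the paper's actual method; the models' accuracies here are made-up values.

```python
from collections import Counter
import math

def majority_vote(answers):
    """Zero-order aggregation: pick the most common observed answer."""
    return Counter(answers).most_common(1)[0][0]

def weighted_vote(answers, accuracies):
    """First-order aggregation (illustrative): weight each model's answer
    by the log-odds of its expected accuracy, then pick the answer with
    the highest total weight."""
    scores = {}
    for ans, acc in zip(answers, accuracies):
        weight = math.log(acc / (1 - acc))  # log-odds of being correct
        scores[ans] = scores.get(ans, 0.0) + weight
    return max(scores, key=scores.get)

# Two weak models agree on "A"; one strong model says "B".
answers = ["A", "A", "B"]
accuracies = [0.55, 0.55, 0.90]  # hypothetical expected accuracies

print(majority_vote(answers))              # "A": MV ignores reliability
print(weighted_vote(answers, accuracies))  # "B": the reliable model prevails
```

With these numbers, the two weak models contribute a combined weight of about 0.40 to "A", while the strong model alone contributes about 2.20 to "B", so the first-order rule overturns the majority. Second-order information (answer correlation) would further discount models whose votes are redundant copies of each other.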
Oct-3-2025