Beyond Majority Voting: LLM Aggregation by Leveraging Higher-Order Information

Rui Ai, Yuqi Pan, David Simchi-Levi, Milind Tambe, Haifeng Xu

arXiv.org Artificial Intelligence 

The aggregation of responses from multiple large language models has been widely used in practice. For example, a popular application is to improve reasoning via multi-agent LLM debate [Khan et al., 2024, Subramaniam et al., 2025, Choi et al., 2025] and LLM council [Zhao et al., 2024]. Previous works have mostly employed the simple majority voting (MV) rule as a natural first instinct to aggregate different LLMs' responses into a single answer. Intuitively, MV can be viewed as a zero-order aggregation method: it depends only on the observed answers and fails to account for heterogeneity and correlation among models, which are often captured by higher-order information such as LLMs' expected accuracies (first-order information) and answer correlation (second-order information). This raises a natural question: is it possible to leverage such higher-order information to develop better methods for aggregating LLMs' responses?
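To make the zero-order vs. first-order distinction concrete, here is a minimal sketch (not the paper's method): plain majority voting, which uses only the observed answers, versus a log-odds weighted vote that also uses each model's expected accuracy. The function names, the log-odds weighting scheme, and the toy numbers are illustrative assumptions.

```python
from collections import Counter
import math

def majority_vote(answers):
    """Zero-order aggregation: return the most common answer."""
    return Counter(answers).most_common(1)[0][0]

def weighted_vote(answers, accuracies):
    """First-order aggregation: weight each model's answer by the
    log-odds of its expected accuracy (an illustrative choice)."""
    scores = {}
    for ans, acc in zip(answers, accuracies):
        weight = math.log(acc / (1 - acc))  # more accurate model -> larger weight
        scores[ans] = scores.get(ans, 0.0) + weight
    return max(scores, key=scores.get)

answers = ["A", "A", "B"]
accuracies = [0.55, 0.55, 0.95]  # hypothetical expected accuracies

print(majority_vote(answers))              # A
print(weighted_vote(answers, accuracies))  # B
```

With these toy numbers, majority voting picks "A" (two votes to one), while the accuracy-weighted vote picks "B" because the single highly accurate model carries more log-odds weight than the two near-random ones combined. Second-order information (answer correlation) would refine this further, e.g. by discounting votes from models that tend to err together.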
