(When) Is Truth-telling Favored in AI Debate?
–arXiv.org Artificial Intelligence
For some problems, humans may not be able to accurately judge the goodness of AI-proposed solutions. Irving, Christiano, and Amodei (2018) propose that in such cases, we may use a debate between two AI systems to amplify the problem-solving capabilities of a human judge. We introduce a mathematical framework that can model debates of this type and propose that the quality of debate designs should be measured by the accuracy of the most persuasive answer. We describe a simple instance of the debate framework called feature debate and analyze the degree to which such debates track the truth. We argue that despite being very simple, feature debates nonetheless capture many aspects of practical debates, such as the incentives to confuse the judge or to stall to prevent losing. We then outline how these models should be generalized to analyze a wider range of debate phenomena.
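The abstract's proposed quality measure — the accuracy of the most persuasive answer — can be illustrated with a toy simulation. The sketch below is purely illustrative and does not reproduce the paper's formalism: the "world" is a hypothetical vector of binary features, the true answer is a majority vote over all features, each debater reveals one favorable feature per round, and the judge decides from revealed features alone. All names and mechanics here are assumptions for illustration.

```python
import random

def feature_debate(n_features=10, n_rounds=4, seed=0):
    """Toy 'feature debate' (illustrative, not the paper's exact model).
    The world is a vector of binary features; the true answer is 1 iff
    a majority of features are 1.  Each round, the debater arguing for
    1 reveals an unrevealed feature equal to 1, and the debater arguing
    for 0 reveals one equal to 0.  The judge then takes a majority vote
    over the revealed features only."""
    rng = random.Random(seed)
    world = [rng.randint(0, 1) for _ in range(n_features)]
    truth = int(sum(world) > n_features / 2)

    revealed = {}
    for _ in range(n_rounds):
        for claim in (1, 0):  # debater for "1", then debater for "0"
            supporting = [i for i, v in enumerate(world)
                          if v == claim and i not in revealed]
            if supporting:
                i = supporting[0]
                revealed[i] = world[i]

    verdict = int(sum(revealed.values()) > len(revealed) / 2)
    return verdict, truth

def debate_accuracy(trials=1000):
    """Debate quality as the abstract defines it: how often the answer
    that wins the debate (the 'most persuasive' one) is actually true."""
    wins = sum(v == t for v, t in
               (feature_debate(seed=s) for s in range(trials)))
    return wins / trials
```

In this toy setup, `debate_accuracy` returns a fraction in [0, 1]; a well-designed debate protocol would push this number toward 1, while a protocol that lets the losing debater confuse the judge or stall would pull it down.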
Nov-11-2019
- Country:
- Europe > United Kingdom
- England
- Cambridgeshire > Cambridge (0.04)
- Oxfordshire > Oxford (0.04)
- North America
- Canada > Rocky Mountains (0.04)
- United States > Rocky Mountains (0.04)
- Genre:
- Research Report (0.50)
- Industry:
- Leisure & Entertainment > Games (0.93)