Tseng, Michael
A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains
Jacovi, Alon | Bitton, Yonatan | Bohnet, Bernd | Herzig, Jonathan | Honovich, Or | Tseng, Michael | Collins, Michael | Aharoni, Roee | Geva, Mor
Prompting language models to provide step-by-step answers (e.g., "Chain-of-Thought") is the prominent approach for complex reasoning tasks, where more accurate reasoning chains typically improve downstream task performance. Recent literature discusses automatic methods to verify reasoning steps to evaluate and improve their correctness. However, no fine-grained step-level datasets are available to enable thorough evaluation of such verification methods, hindering progress in this direction. We introduce Reveal: Reasoning Verification Evaluation, a new dataset to benchmark automatic verifiers of complex Chain-of-Thought reasoning in open-domain question answering settings. Reveal includes comprehensive labels for the relevance, attribution to evidence passages, and logical correctness of each reasoning step in a language model's answer, across a wide variety of datasets and state-of-the-art language models.
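To make the step-level labeling concrete, here is a minimal sketch of how a Reveal-style annotation might be modeled in code. The class and field names are hypothetical illustrations, not the dataset's actual schema; the only assumption taken from the abstract is that each reasoning step carries labels for relevance, attribution to evidence, and logical correctness, and that a chain is only as reliable as its weakest step.

```python
from dataclasses import dataclass

@dataclass
class StepVerification:
    """Hypothetical step-level labels, in the style the abstract describes."""
    step_text: str
    relevant: bool            # is the step relevant to the question?
    attributable: bool        # is it supported by the evidence passages?
    logically_correct: bool   # does it follow from the preceding steps?

    def is_correct(self) -> bool:
        # A step passes only if all three labels hold.
        return self.relevant and self.attributable and self.logically_correct

def chain_is_correct(steps: list[StepVerification]) -> bool:
    # A chain is as strong as its weakest link: one faulty step
    # invalidates the whole reasoning chain.
    return all(step.is_correct() for step in steps)

steps = [
    StepVerification("Paris is the capital of France.", True, True, True),
    StepVerification("So France's capital has the largest metro area.", True, False, True),
]
print(chain_is_correct(steps))  # -> False: step 2 is not attributable
```

Under this framing, a verifier benchmark can score predictions per step (relevance, attribution, logic separately) as well as per chain, which is what makes fine-grained datasets like Reveal useful for diagnosing where verifiers fail.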
Linguistic Wisdom from the Crowd
Chang, Nancy (Google) | Lee-Goldman, Russell (Google) | Tseng, Michael (Google)
Crowdsourcing for linguistic data typically aims to replicate expert annotations using simplified tasks. But an alternative goal — one that is especially relevant for research in the domains of language meaning and use — is to tap into people's rich experience as everyday users of language. Research in these areas has the potential to tell us a great deal about how language works, but designing annotation frameworks for crowdsourcing of this kind poses special challenges. In this paper we define and exemplify two approaches to linguistic data collection corresponding to these differing goals (model-driven and user-driven) and discuss some hybrid cases in which they overlap. We also describe some design principles and resolution techniques helpful for eliciting linguistic wisdom from the crowd.