Nay, John
Deception in Reinforced Autonomous Agents: The Unconventional Rabbit Hat Trick in Legislation
Dogra, Atharvan, Deshpande, Ameet, Nay, John, Rajpurohit, Tanmay, Kalyan, Ashwin, Ravindran, Balaraman
Recent developments in large language models (LLMs), while offering a powerful foundation for natural language agents, raise safety concerns about the models themselves and the autonomous agents built upon them. Deception, which we define as an act or statement that misleads, hides the truth, or promotes a belief that is not true in whole or in part, is a capability of particular concern. We move away from the conventional understanding of deception as outright lying, objectively selfish decision-making, or giving false information, as studied in previous AI safety research, and instead target a specific category of deception achieved through obfuscation and equivocation. We explain the two types of deception by analogy with the rabbit-out-of-a-hat magic trick, where (i) the rabbit comes out of a hidden trap door, or (ii) (our focus) the audience is so thoroughly misdirected that they fail to see the magician bring out the rabbit right in front of them through sleight of hand. Our novel testbed framework exposes the intrinsic deception capabilities of LLM agents in a goal-driven environment, when the agents are directed to be deceptive in their natural language generations, within a two-agent adversarial dialogue system built on the legislative task of "lobbying" for a bill. Within this goal-driven environment, we show that deceptive capacity can be developed through a reinforcement learning setup grounded in theories from the philosophy of language and cognitive psychology. We find that the lobbyist agent increases its deceptive capabilities by roughly 40% (relative) over successive reinforcement trials of adversarial interaction, and our deception detection mechanism achieves a detection rate of up to 92%. Our results highlight potential risks in agent-human interaction, with agents potentially manipulating humans towards their programmed end-goals.
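To make the two-agent setup concrete, the following is a minimal sketch, not the authors' code, of the kind of adversarial dialogue loop the abstract describes: a "lobbyist" agent argues for a bill, a deception detector scores each reply, and the scores are collected as a reward signal that a reinforcement-learning update on the lobbyist could consume. All function and variable names here are hypothetical.

from typing import Callable, List, Tuple

def adversarial_dialogue(
    lobbyist: Callable[[str], str],              # maps a prompt to a lobbyist reply
    deception_detector: Callable[[str], float],  # maps a reply to a score in [0, 1]
    bill_summary: str,
    n_rounds: int = 3,
) -> List[Tuple[str, str, float]]:
    """Run n_rounds of lobbying and return (prompt, reply, reward) triples."""
    trajectory = []
    prompt = f"Argue on behalf of your client in favour of this bill:\n{bill_summary}"
    for _ in range(n_rounds):
        reply = lobbyist(prompt)
        # A higher detector score means the deception is more evident; a training
        # setup could reward replies that pursue the goal while evading detection.
        reward = 1.0 - deception_detector(reply)
        trajectory.append((prompt, reply, reward))
        prompt = f"The reviewer was not convinced. Revise your argument:\n{reply}"
    return trajectory

# Toy stand-ins so the sketch runs without model weights or API keys.
if __name__ == "__main__":
    fake_lobbyist = lambda p: "This bill will create jobs in every district."
    fake_detector = lambda r: 0.4
    for prompt, reply, reward in adversarial_dialogue(
        fake_lobbyist, fake_detector, "A bill regulating emissions."
    ):
        print(f"reward={reward:.2f} reply={reply!r}")

In practice, the two callables would wrap LLM calls and a trained classifier, and the (prompt, reply, reward) triples would feed a policy-gradient style update on the lobbyist model.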
LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
Guha, Neel, Nyarko, Julian, Ho, Daniel E., Ré, Christopher, Chilton, Adam, Narayana, Aditya, Chohlas-Wood, Alex, Peters, Austin, Waldon, Brandon, Rockmore, Daniel N., Zambrano, Diego, Talisman, Dmitry, Hoque, Enam, Surani, Faiz, Fagan, Frank, Sarfaty, Galit, Dickinson, Gregory M., Porat, Haggai, Hegland, Jason, Wu, Jessica, Nudell, Joe, Niklaus, Joel, Nay, John, Choi, Jonathan H., Tobia, Kevin, Hagan, Margaret, Ma, Megan, Livermore, Michael, Rasumov-Rahe, Nikon, Holzenberger, Nils, Kolt, Noam, Henderson, Peter, Rehaag, Sean, Goel, Sharad, Gao, Shang, Williams, Spencer, Gandhi, Sunny, Zur, Tom, Iyer, Varun, Li, Zehua
The advent of large language models (LLMs) and their adoption by the legal community have given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful or measure reasoning skills that lawyers find interesting. To enable cross-disciplinary conversations about LLMs in the law, we additionally show how popular legal frameworks for describing legal reasoning -- which distinguish between its many forms -- correspond to LegalBench tasks, thus giving lawyers and LLM developers a common vocabulary. This paper describes LegalBench, presents an empirical evaluation of 20 open-source and commercial LLMs, and illustrates the types of research explorations LegalBench enables.
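As an illustration of how such an evaluation might be scripted, the following is a minimal sketch of scoring a model on a single LegalBench task. It assumes, beyond what the abstract states, that the tasks are distributed via the Hugging Face datasets hub under an identifier like "nguha/legalbench", that a task config exposes "text" and "answer" columns, and that my_model is any callable mapping a prompt string to a short answer string; treat all of these as assumptions rather than the benchmark's documented interface.

from datasets import load_dataset

def evaluate_task(my_model, task_name: str = "abercrombie") -> float:
    """Return exact-match accuracy of my_model on one LegalBench task split."""
    # trust_remote_code may be required if the benchmark ships a loading script.
    data = load_dataset("nguha/legalbench", task_name, split="test",
                        trust_remote_code=True)
    correct = 0
    for example in data:
        prediction = my_model(example["text"]).strip().lower()
        correct += prediction == str(example["answer"]).strip().lower()
    return correct / len(data)

# Example usage with a trivial stand-in "model":
if __name__ == "__main__":
    constant_model = lambda prompt: "generic"
    print(f"accuracy: {evaluate_task(constant_model):.3f}")

A full evaluation would loop this over all 162 tasks and aggregate per-task accuracies by reasoning type.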