SAMVAD: A Multi-Agent System for Simulating Judicial Deliberation Dynamics in India

Devadiga, Prathamesh, Shetty, Omkaar Jayadev, Agarwal, Pooja

arXiv.org Artificial Intelligence

Understanding the complexities of judicial deliberation is crucial for assessing the efficacy and fairness of a justice system. However, empirical studies of judicial panels are constrained by significant ethical and practical barriers. This paper introduces SAMVAD, an innovative Multi-Agent System (MAS) designed to simulate the deliberation process within the framework of the Indian justice system. Our system comprises agents representing key judicial roles: a Judge, a Prosecution Counsel, a Defense Counsel, and multiple Adjudicators (simulating a judicial bench), all powered by large language models (LLMs). A primary contribution of this work is the integration of Retrieval-Augmented Generation (RAG), grounded in a domain-specific knowledge base of landmark Indian legal documents, including the Indian Penal Code and the Constitution of India. This RAG functionality enables the Judge and Counsel agents to generate legally sound instructions and arguments, complete with source citations, thereby enhancing both the fidelity and transparency of the simulation. The Adjudicator agents engage in iterative deliberation rounds, processing case facts, legal instructions, and arguments to reach a consensus-based verdict. We detail the system architecture, agent communication protocols, the RAG pipeline, the simulation workflow, and a comprehensive evaluation plan designed to assess performance, deliberation quality, and outcome consistency. This work provides a configurable and explainable MAS platform for exploring legal reasoning and group decision-making dynamics in judicial simulations, specifically tailored to the Indian legal context and augmented with verifiable legal grounding via RAG.
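The abstract describes a retrieval-grounded bench that deliberates over rounds until consensus. The following is a minimal, hypothetical sketch of that loop, not the authors' implementation: the retriever, the `Adjudicator` class, and the numeric "leaning" stand-in for an LLM's judgment are all illustrative assumptions.

```python
# Hypothetical sketch of a SAMVAD-style deliberation loop.
# All names and the consensus rule below are assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # e.g. a statute citation used for RAG grounding
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.text.lower().split())))
    return scored[:k]

@dataclass
class Adjudicator:
    name: str
    leaning: float  # prior belief in [0, 1]; stands in for an LLM's judgment

    def vote(self, argument_strength: float) -> str:
        # Stand-in for an LLM call: combine the agent's prior with the arguments.
        return "guilty" if (self.leaning + argument_strength) / 2 > 0.5 else "not guilty"

def deliberate(adjudicators, argument_strength, max_rounds=3):
    """Iterate until the bench is unanimous or the round budget is exhausted."""
    votes = []
    for _ in range(max_rounds):
        votes = [a.vote(argument_strength) for a in adjudicators]
        if len(set(votes)) == 1:
            return votes[0], votes
        # Toy consensus rule: dissenters drift toward the majority between rounds.
        majority = max(set(votes), key=votes.count)
        for a, v in zip(adjudicators, votes):
            if v != majority:
                a.leaning += 0.15 if majority == "guilty" else -0.15
    return "hung", votes
```

In use, retrieved sources would be attached to each instruction or argument as citations, which is what the abstract means by verifiable legal grounding.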


US lawyer sanctioned after caught using ChatGPT for court brief

The Guardian

The Utah court of appeals has sanctioned a lawyer after he was discovered to have used ChatGPT for a filing in which he referenced a nonexistent court case. Earlier this week, the Utah court of appeals decided to sanction Richard Bednar over claims that he filed a brief that included false citations. According to court documents reviewed by ABC4, Bednar and Douglas Durbano, another Utah-based lawyer who was serving as the petitioner's counsel, filed a "timely petition for interlocutory appeal". Upon reviewing the brief, which was written by a law clerk, the respondent's counsel found several false citations of cases. "It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT) and references to cases that are wholly unrelated to the referenced subject matter," the respondent's counsel said in documents reviewed by ABC4.


Lawsuit says Mark Zuckerberg approved Meta's use of pirated materials to train Llama AI

Engadget

As TechCrunch reports, the plaintiffs in the Kadrey v. Meta case submitted court documents detailing the company's use of the LibGen dataset for AI training. LibGen is generally described as a "shadow library" that provides file-sharing access to academic and general-interest books, journals, images and other materials. Counsel for the plaintiffs, who include writers Sarah Silverman and Ta-Nehisi Coates, accused Zuckerberg of approving the use of LibGen for training despite concerns raised by company executives and employees who described it as a "dataset [they] know to be pirated." In addition, counsel noted that Meta admitted to torrenting LibGen materials, even though its engineers felt uneasy about sharing them "from a [Meta-owned] corporate laptop." The plaintiffs accused the company of using pirated materials from shadow libraries to train its AI models.


What Scarlett Johansson v. OpenAI Could Look Like in Court

WIRED

In a product demo last week, OpenAI showcased a synthetic but expressive voice for ChatGPT called "Sky" that reminded many viewers of the flirty AI girlfriend Samantha played by Scarlett Johansson in the 2013 film Her. One of those viewers was Johansson herself, who promptly hired legal counsel and sent letters to OpenAI demanding an explanation, according to a statement released later. In response, the company on Sunday halted use of Sky and published a blog post insisting that it "is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice." Johansson's statement, released Monday, said she was "shocked, angered, and in disbelief" by OpenAI's demo using a voice she called "so eerily similar to mine that my closest friends and news outlets could not tell the difference." Johansson revealed that she had turned down a request last year from the company's CEO, Sam Altman, to voice ChatGPT and that he had reached out again two days before last week's demo in an attempt to change her mind.


Scarlett Johansson 'Angered' By ChatGPT Voice That Sounded 'Eerily' Like Her

TIME - Tech

Scarlett Johansson said Monday that she was "shocked, angered and in disbelief" when she heard that OpenAI used a voice "eerily similar" to hers for its new ChatGPT 4.0 chatbot, even after she had declined to provide her voice. Earlier on Monday, OpenAI announced on X that it would pause the AI voice, known as "Sky," while it addresses "questions about how we chose the voices in ChatGPT." The company said in a blog post that the "Sky" voice was "not an imitation" of Johansson's voice, but that it was recorded by a different professional actress, whose identity the company would not reveal to protect her privacy. But Johansson said in a statement to NPR on Monday that OpenAI's Chief Executive Officer Sam Altman had asked her in September to voice the ChatGPT 4.0 system because he thought her "voice would be comforting to people." She declined, but nine months later, her friends, family and the public noticed how the "Sky" voice resembled hers.


A Computational Analysis of Oral Argument in the Supreme Court

Dickinson, Gregory M.

arXiv.org Artificial Intelligence

As the most public component of the Supreme Court's decision-making process, oral argument receives an out-sized share of attention in the popular media. Despite its prominence, however, the basic function and operation of oral argument as an institution remains poorly understood, as political scientists and legal scholars continue to debate even the most fundamental questions about its role. Past study of oral argument has tended to focus on discrete, quantifiable attributes of oral argument, such as the number of questions asked to each advocate, the party of the Justices' appointing president, or the ideological implications of the case on appeal. Such studies allow broad generalizations about oral argument and judicial decision making: Justices tend to vote in accordance with their ideological preferences, and they tend to ask more questions when they are skeptical of a party's position. But they tell us little about the actual goings on at oral argument -- the running dialog between Justice and advocate that is the heart of the institution. This Article fills that void, using machine learning techniques to, for the first time, construct predictive models of judicial decision making based not on oral argument's superficial features or on factors external to oral argument, such as where the case falls on a liberal-conservative spectrum, but on the actual content of the oral argument itself -- the Justices' questions to each side. The resultant models offer an important new window into aspects of oral argument that have long resisted empirical study, including the Justices' individual questioning styles, how each expresses skepticism, and which of the Justices' questions are most central to oral argument dialog.
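The abstract's core empirical claim is that the content of the Justices' questions, not just external factors, predicts outcomes, and it notes that Justices ask more questions of the side they are skeptical of. A toy illustration of that kind of model is sketched below; the features, the skeptical-word list, and the pure-Python logistic regression are simplified stand-ins for the Article's actual pipeline, which is not specified here.

```python
# Toy stand-in for predicting the winning side from oral-argument questioning.
# Feature names and the training setup are illustrative assumptions.
import math

def features(case):
    """case: dict mapping each side to the list of questions asked of it."""
    qp, qr = case["to_petitioner"], case["to_respondent"]
    skeptical = ("but", "really", "why")  # hypothetical skepticism markers
    skew = len(qp) - len(qr)              # question-count imbalance between sides
    tone = sum(any(w in q.lower() for w in skeptical) for q in qp) - \
           sum(any(w in q.lower() for w in skeptical) for q in qr)
    return [1.0, float(skew), float(tone)]  # bias term + two content features

def train(cases, labels, lr=0.1, epochs=200):
    """Logistic regression by gradient descent; label 1 = petitioner wins."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(map(features, cases), labels):
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

def predict(w, case):
    z = sum(wi * xi for wi, xi in zip(w, features(case)))
    return 1 if z > 0 else 0
```

Trained on cases where the more heavily questioned side lost, such a model learns a negative weight on the question-count skew, which is the quantitative version of the skepticism finding the abstract summarizes.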


AI For Women In Law: Answering The Call For AI-Savvy Legal Leaders. - Conventus Law

#artificialintelligence

Unstructured data like emails, instant messages, and image files now make up 80 to 90 percent of corporate data – and it's growing three times faster than structured data, according to Gartner. To keep up with this onslaught of hard-to-manage data, companies are expected to invest $190 billion in AI by 2025. To help our community keep pace with advances in AI, Relativity hosted our first AI Bootcamp for Women in Law last week in Washington, DC. This invite-only event combined AI-focused sessions with networking events to provide the essential AI knowledge needed to be a legal innovation leader. Attendees received a working knowledge of AI, an AI Bootcamp certificate of completion, CLE credit, and RCE credits, and the 30 women in attendance left feeling empowered and inspired.


Data Science Meets Law

Communications of the ACM

Shlomi Hod (shlomi@bu.edu) is a computer science Ph.D. student at Boston University, USA. Karni Chagal-Feferkorn (karni111@gmail.com) is a Postdoctoral Fellow in AI and Regulation at the Faculty of Law, Common Law Section, University of Ottawa, Canada. Niva Elkin-Koren (elkiniva@tauex.tau.ac.il) is a Professor of Law at Tel Aviv University, Faculty of Law, Israel. Avigdor Gal (avigal@ie.technion.ac.il) is the Benjamin and Florence Free Chaired Professor of Data Science at Technion--Israel Institute of Technology, Israel.


13 Must Read AI Research papers in 2021

#artificialintelligence

As we approach the end of 2021, we wanted to share 13 of the most important AI papers of the year, as selected by the experts in the RE•WORK community who will be speaking at the Deep Learning Hybrid Summit in San Francisco in February 2022. These papers are free to access and cover a range of topics from computer vision to the way deep learning is helping to uncover the mysteries of space. You can join us and connect with our experts discussing trends and industry updates in the Deep Learning Hybrid Summit. Get your ticket here to join us in-person or virtually. Before joining Salesforce as Senior AI Product Manager, Vera Serdiukova built edge computing machine learning capabilities as a part of LG's Silicon Valley Lab Advanced AI Team.


Top Ten Issues on Liability and Regulation of Artificial Intelligence (AI) Systems

#artificialintelligence

Key Takeaways: New Artificial Intelligence (AI) technology is being integrated into all industries. I have written several articles on the liability of autonomous systems under United Arab Emirates (UAE) law, covering liability under the UAE Civil Code, available remedies, comparisons with other legal regimes, and recommendations for law, policy, and ethics. I focused mainly on the liability and regulation of autonomous or Artificial Intelligence (AI) systems under UAE law, but I also compared the UAE's legal system to other regimes, including the United Kingdom (UK) and the European Union (EU). I concluded that, generally speaking, when it comes to AI, the issues are similar across the globe. In the near future, every one of us will be dealing in some shape or form with an autonomous system or an AI-powered system.