Thousands of Epstein documents taken down after victims identified

BBC News

The US Department of Justice (DOJ) has removed thousands of documents related to Jeffrey Epstein from its website after victims said their identities had been compromised. Lawyers for Epstein's victims said flawed redactions in the files released on Friday had turned upside down the lives of nearly 100 survivors. Email addresses and nude photos in which the names and faces of potential victims could be identified were included in the release. Survivors issued a statement calling the disclosure outrageous and said they should not be named, scrutinized and retraumatized. The DOJ said it had taken down all the flagged files and that mistakes were due to technical or human error.


Why Millennials Love Prenups

The New Yorker

Long the province of the ultra-wealthy, prenuptial agreements are being embraced by young people--including many who don't have all that much to divvy up. More than forty per cent of millennials and Gen Z-ers claim to have signed a prenup. Andrea Zevallos declared 2016 her "year of dating." She was twenty-seven, working at Universal Studios Hollywood, the theme park, and determined to find love. She calculated it would take three dates a week. By December, she was losing hope. "It was exhausting," she said. Then, while scrolling OkCupid, she noticed a "cute guy" with a "Hamilton" reference in his handle. His name was Alex Switzky, and like her he was a musical-theatre enthusiast and aspiring screenwriter. He was different from the other men she'd met. On their second date, he started planning a third. Zevallos "was used to L.A. guys cagey about any sort of calendar." One day, Switzky called her. Accustomed to texts, she assumed that he was about to break up with her. "The most millennial response," she recalled, laughing.


AI might not be coming for lawyers' jobs anytime soon

MIT Technology Review

Generative AI might have aced the bar exam, but an LLM still can't think like a lawyer. When the generative AI boom took off in 2022, Rudi Miller and her law school classmates were suddenly gripped with anxiety. "Before graduating, there was discussion about what the job market would look like for us if AI became adopted," she recalls. So when it came time to choose a specialty, Miller, now a junior associate at the law firm Orrick, decided to become a litigator, the kind of lawyer who represents clients in court. She hoped the courtroom would be the last human stage. "Judges haven't allowed ChatGPT-enabled robots to argue in court yet," she says.


Times Investigation: Ex-Trump DOJ lawyers say 'fraudulent' UC antisemitism probes led them to quit

Los Angeles Times

Nine former DOJ attorneys investigating UC antisemitism told The Times they felt pressured to conclude that campuses had violated the civil rights of Jewish students and staff. The attorneys resigned during the course of their UC assignments, some concerned that they were being asked to violate ethical standards. UC says it is open to talks with the Trump administration to protect $17.5 billion in federal funding.


Assessing the Reliability of Large Language Models in the Bengali Legal Context: A Comparative Evaluation Using LLM-as-Judge and Legal Experts

Aftahee, Sabik, Farhad, A. F. M., Mallik, Arpita, Dhar, Ratnajit, Karim, Jawadul, Noor, Nahiyan Bin, Solaiman, Ishmam Ahmed

arXiv.org Artificial Intelligence

Accessing legal help in Bangladesh is hard. People face high fees, complex legal language, a shortage of lawyers, and millions of unresolved court cases. Generative AI models like OpenAI GPT-4.1 Mini, Gemini 2.0 Flash, Meta Llama 3 70B, and DeepSeek R1 could potentially democratize legal assistance by providing quick and affordable legal advice. In this study, we collected 250 authentic legal questions from the Facebook group "Know Your Rights," where verified legal experts regularly provide authoritative answers. These questions were subsequently submitted to four advanced AI models, and responses were generated using a consistent, standardized prompt. A comprehensive dual evaluation framework was employed, in which a state-of-the-art LLM served as judge, assessing each AI-generated response across four critical dimensions: factual accuracy, legal appropriateness, completeness, and clarity. Following this, the same set of questions was evaluated by three licensed Bangladeshi legal professionals according to the same criteria. In addition, automated evaluation metrics, including BLEU scores, were applied to assess response similarity. Our findings reveal a complex landscape where AI models frequently generate high-quality, well-structured legal responses but also produce dangerous misinformation, including fabricated case citations, incorrect legal procedures, and potentially harmful advice. These results underscore the critical need for rigorous expert validation and comprehensive safeguards before AI systems can be safely deployed for legal consultation in Bangladesh.
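The BLEU metric the abstract mentions measures n-gram overlap between an AI-generated answer and a reference (here, the expert's answer). As a rough illustration only (not the paper's actual implementation, which presumably uses standard BLEU tooling), a minimal sentence-level BLEU with add-one smoothing can be sketched as:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """Minimal BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n, add-one smoothed) times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        # Clipped overlap: each candidate n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages answers shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

An identical candidate scores 1.0; a short, loosely related answer scores near 0, which is why BLEU alone cannot detect fluent but fabricated legal advice and must be paired with the expert and LLM-judge evaluations described above.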


The Download: carbon removal's future, and measuring pain using an app

MIT Technology Review

Plus: Meta's lawyers advised staff to remove parts of their research. After years of growth that spawned hundreds of startups, the nascent carbon removal sector appears to be facing a reckoning. Running Tide, a promising aquaculture company, shut down its operations last summer, and a handful of other companies have shuttered, downsized, or pivoted in recent months as well. And the collective industry hasn't made a whole lot more progress toward Running Tide's ambitious plans to sequester a billion tons of carbon dioxide by this year. The hype phase is over and the sector is sliding into the turbulent business trough that follows, experts warn. And the open question is: If the carbon removal sector is heading into a painful if inevitable clearing-out cycle, where will it go from there? This story is part of MIT Technology Review's What's Next series, which looks across industries, trends, and technologies to give you a first look at the future.


LeCoDe: A Benchmark Dataset for Interactive Legal Consultation Dialogue Evaluation

Yuan, Weikang, Song, Kaisong, Jiang, Zhuoren, Cao, Junjie, Zhang, Yujie, Lin, Jun, Kuang, Kun, Zhang, Ji, Liu, Xiaozhong

arXiv.org Artificial Intelligence

Legal consultation is essential for safeguarding individual rights and ensuring access to justice, yet remains costly and inaccessible to many individuals due to the shortage of professionals. While recent advances in Large Language Models (LLMs) offer a promising path toward scalable, low-cost legal assistance, current systems fall short in handling the interactive and knowledge-intensive nature of real-world consultations. To address these challenges, we introduce LeCoDe, a real-world multi-turn benchmark dataset comprising 3,696 legal consultation dialogues with 110,008 dialogue turns, designed to evaluate and improve LLMs' legal consultation capability. With LeCoDe, we innovatively collect live-streamed consultations from short-video platforms, providing authentic multi-turn legal consultation dialogues. The rigorous annotation by legal experts further enhances the dataset with professional insights and expertise. Furthermore, we propose a comprehensive evaluation framework that assesses LLMs' consultation capabilities in terms of (1) clarification capability and (2) professional advice quality. This unified framework incorporates 12 metrics across two dimensions. Through extensive experiments on various general and domain-specific LLMs, our results reveal significant challenges in this task, with even state-of-the-art models like GPT-4 achieving only 39.8% recall for clarification and 59% overall score for advice quality, highlighting the complexity of professional consultation scenarios. Based on these findings, we further explore several strategies to enhance LLMs' legal consultation abilities. Our benchmark contributes to advancing research in legal domain dialogue systems, particularly in simulating more real-world user-expert interactions.
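The "39.8% recall for clarification" figure above means the model asked about fewer than half of the clarification points that experts annotated as necessary. As an illustrative sketch only (the benchmark's actual matching of model questions to annotated points is not specified here and is likely more sophisticated than exact set membership), recall over a set of expert-annotated clarification points could be computed as:

```python
def clarification_recall(gold_points, predicted_points):
    """Fraction of expert-annotated clarification points (gold_points)
    that the model's questions covered (predicted_points).
    Both arguments are iterables of hashable point labels."""
    gold = set(gold_points)
    if not gold:
        return 1.0  # nothing required, trivially covered
    covered = gold & set(predicted_points)
    return len(covered) / len(gold)
```

Under this definition, a model that asks about two of three required points (say, the filing deadline and the available evidence, but not jurisdiction) scores roughly 0.67.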


The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice

Yuvaraj, Joshua

arXiv.org Artificial Intelligence

It is often claimed that machine learning-based generative AI products will drastically streamline and reduce the cost of legal practice. This enthusiasm assumes lawyers can effectively manage AI's risks. Cases in Australia and elsewhere in which lawyers have been reprimanded for submitting inaccurate AI-generated content to courts suggest this paradigm must be revisited. This paper argues that a new paradigm is needed to evaluate AI use in practice, given (a) AI's disconnection from reality and its lack of transparency, and (b) lawyers' paramount duties like honesty, integrity, and not to mislead the court. It presents an alternative model of AI use in practice that more holistically reflects these features (the verification-value paradox). That paradox suggests increases in efficiency from AI use in legal practice will be met by a correspondingly greater imperative to manually verify any outputs of that use, rendering the net value of AI use often negligible to lawyers. The paper then sets out the paradox's implications for legal practice and legal education, including for AI use but also the values that the paradox suggests should undergird legal practice: fidelity to the truth and civic responsibility.


OpenAI Removed Safeguards Before Teen's Suicide, Amended Lawsuit Claims

TIME - Tech

OpenAI relaxed safeguards that would have prevented ChatGPT from engaging in conversations about self-harm in the months leading up to the suicide of Adam Raine, an amended complaint filed by the family in the San Francisco County Superior Court on Wednesday alleges. The amendment changes the theory of the case from reckless indifference to intentional misconduct, according to the family's lawyers, which could raise the damages awarded to the family. The Raine family's lawyers will have to prove that OpenAI was aware of the risks posed by ChatGPT and disregarded them. The family has asked for a jury trial. In an interview with TIME, Jay Edelson, one of the Raine family's lawyers, says OpenAI relaxed safeguards in an "intentional decision" to "prioritize engagement."


Inside Donald Trump's Attack on Immigration Court

The New Yorker

Judges describe a campaign of firings and interference which threatens the system's independence. On a Thursday morning last month, Patrick O'Brien, a federal immigration judge, walked into his courtroom in downtown San Francisco. He was scheduled for a master-calendar hearing, a roll call, essentially, to get cases ready for trial. O'Brien was wearing a matte-black robe that seemed to absorb the artificial light overhead. He took his seat, scanned the room, and angled himself toward a computer monitor. The court was leanly staffed. There was a judicial clerk but no bailiff or stenographer. Opposite the judge were tables for the prosecution--the Department of Homeland Security--and for the respondent, a succession of immigrants who were applying for asylum. A Spanish interpreter appeared as a faceless box on a big screen. About ten people, all Latino, sat in wooden pews, gripping folders full of esoteric documents.