Law


Trump administration defends Anthropic blacklisting in US court

Al Jazeera

The administration of United States President Donald Trump has said in a court filing on Tuesday that the Pentagon's blacklisting of Anthropic was justified and lawful, opposing the artificial intelligence company's high-stakes lawsuit challenging the decision. The filing says Anthropic is unlikely to succeed in its claims that the US government's action violated speech protections under the US Constitution's First Amendment, asserting that the dispute stems from contract negotiations and national security concerns, not retaliation.


L.A. teachers union widely expected to announce strike date at massive Wednesday rally

Los Angeles Times

Members of the largest unions representing teachers and nonteachers participate in a joint rally at Grand Park in March 2023. The scene will be repeated on Wednesday, with union members once again on the verge of a strike.


Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems

WIRED

In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.


Tennessee Teens Sue Elon Musk's xAI Over Child Sexual Abuse Images

Mother Jones

Elon Musk leaves a meeting with House Republicans in the basement of the US Capitol building on March 5, 2025, in Washington, DC. Tennessee teenagers are suing Elon Musk's company xAI over allegations that its artificial intelligence tool Grok undressed photos of them as minors, the latest challenge against the wealthiest living person's chatbot.


Counterfactual Fairness

Neural Information Processing Systems

Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
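The intuition in the abstract can be illustrated with a toy structural causal model. This is a minimal sketch, not the paper's model: the latent trait `U`, the group-bias shift in `gen_x`, and the two scorers are all invented for illustration. A predictor built on the biased feature `X` changes when the protected attribute is counterfactually flipped (holding the latent background variable fixed), while a predictor built only on non-descendants of the attribute does not.

```python
import random

# Toy structural causal model (illustrative, not from the paper):
# U is an unobserved latent trait; A is a protected attribute (0 or 1);
# the observed feature X inherits a group-level bias from A.
def gen_x(u, a):
    return u + (2.0 if a == 1 else 0.0)  # A shifts X by a fixed bias

def unfair_score(x):
    return x  # uses X directly, so it depends on A through X

def fair_score(u):
    return u  # uses only U, a non-descendant of A

random.seed(0)
max_gap_unfair = 0.0
max_gap_fair = 0.0
for _ in range(1000):
    u = random.gauss(0, 1)
    a = random.choice([0, 1])
    # Actual world vs. counterfactual world where A is flipped,
    # holding the background variable U fixed.
    x_actual = gen_x(u, a)
    x_counterfactual = gen_x(u, 1 - a)
    max_gap_unfair = max(max_gap_unfair,
                         abs(unfair_score(x_actual) - unfair_score(x_counterfactual)))
    max_gap_fair = max(max_gap_fair,
                       abs(fair_score(u) - fair_score(u)))

print(max_gap_unfair)  # 2.0: the prediction changes between the two worlds
print(max_gap_fair)    # 0.0: counterfactually fair by construction
```

The fair predictor is trivially invariant here because it ignores every descendant of `A`; the paper's contribution is characterizing exactly which variables a predictor may use so that this invariance holds in general causal graphs.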


The Human Skill That Eludes AI

The Atlantic - Technology

Why can't language models write well? In a certain, strange way, generative AI peaked with OpenAI's GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. "You could be like, 'Continue this story:,' and GPT-2 would be like, ','" Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. "The models won't do that anymore." AI leaders boast about their models' superhuman technical abilities.


Senators tell ByteDance to shut down Seedance 2.0 AI video app 'immediately'

Engadget

They said the company "has shown it is willing to... steal the intellectual property of American creators." After ByteDance suspended the global rollout of its new Seedance 2.0 AI video generator over the weekend, US senators have now told the company to immediately shut down the app. "Seedance 2.0 poses a direct threat to the American intellectual property system and, more broadly, to the constitutional rights and economic livelihoods of our creative community," Senators Marsha Blackburn and Peter Welch wrote in a letter to the company. "Responsible global companies follow the law and respect core economic rights, including intellectual property and personal likeness protections," the senators wrote. They cited Seedance AI examples including an AI-generated Thanos and Superman battle, a rewritten ending and that famous (fake) Tom Cruise and Brad Pitt battle.


Tennessee minors sue Musk's xAI, alleging Grok generated sexual images of them

The Japan Times

Three Tennessee plaintiffs, including two minors, sued Elon Musk's xAI on Monday, alleging that it knowingly designed its Grok image generator to let people create sexually explicit content by using real photos of others. The lawsuit, filed in federal court in San Jose, California, is seeking class-action status for people in the United States who were reasonably identifiable in sexualized images or videos generated by Grok based on real images of themselves. The artificial intelligence company did not immediately respond to a request for comment. After an outcry over sexually explicit content generated by the chatbot, xAI said in January that it had blocked all users from editing images of real people in revealing clothing and from generating images of people in revealing clothing in jurisdictions where it's illegal. Governments and regulators around the world have since launched probes into xAI, imposed bans and demanded safeguards in a growing push to curb illegal and offensive material.


Equality of Opportunity in Classification: A Causal Approach

Neural Information Processing Systems

The Equalized Odds (for short, EO) is one of the most popular measures of discrimination used in the supervised learning setting. It ascertains fairness through the balance of the misclassification rates (false positive and false negative) across the protected groups -- e.g., in the context of law enforcement, an African-American defendant who would not commit a future crime will have an equal opportunity of being released, compared to a non-recidivating Caucasian defendant. Despite this noble goal, it has been acknowledged in the literature that statistical tests based on the EO are oblivious to the underlying causal mechanisms that generated the disparity in the first place (Hardt et al. 2016). This leads to a critical disconnect between statistical measures readable from the data and the meaning of discrimination in the legal system, where compelling evidence that the observed disparity is tied to a specific causal process deemed unfair by society is required to characterize discrimination. The goal of this paper is to develop a principled approach to connect the statistical disparities characterized by the EO and the underlying, elusive, and frequently unobserved causal mechanisms that generated such inequality. We start by introducing a new family of counterfactual measures that allows one to explain the misclassification disparities in terms of the underlying mechanisms in an arbitrary, non-parametric structural causal model. This will, in turn, allow legal and data analysts to interpret currently deployed classifiers through a causal lens, linking the statistical disparities found in the data to the corresponding causal processes. Leveraging the new family of counterfactual measures, we develop a learning procedure to construct a classifier that is statistically efficient, interpretable, and compatible with the basic human intuition of fairness. We demonstrate our results through experiments in both real (COMPAS) and synthetic datasets.
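The purely statistical quantity the abstract starts from, the balance of false positive and false negative rates across groups, can be computed directly from labels. The sketch below is illustrative (the helper names and toy labels are invented, and it shows only the observational EO check, not the paper's counterfactual decomposition): equalized odds holds when both gaps are zero.

```python
def rates(y_true, y_pred):
    # False positive rate and false negative rate for one group.
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return (fp / neg if neg else 0.0, fn / pos if pos else 0.0)

def eo_gaps(y_true, y_pred, group):
    # Equalized odds holds when FPR and FNR match across all groups;
    # report the worst-case gap for each rate.
    by_g = {}
    for t, p, g in zip(y_true, y_pred, group):
        by_g.setdefault(g, ([], []))
        by_g[g][0].append(t)
        by_g[g][1].append(p)
    fprs, fnrs = zip(*(rates(ts, ps) for ts, ps in by_g.values()))
    return max(fprs) - min(fprs), max(fnrs) - min(fnrs)

# Toy labels (illustrative, not real data): group 'b' gets extra
# false positives, group 'a' gets extra false negatives.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
fpr_gap, fnr_gap = eo_gaps(y_true, y_pred, group)
print(fpr_gap, fnr_gap)  # prints: 0.5 0.5
```

The paper's point is that a nonzero gap like this is causally ambiguous: the same numbers can arise from direct discrimination or from innocuous mediating processes, which is why the authors decompose it with counterfactual measures over a structural causal model.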


U.S. court rules against South Korean gaming firm over AI-hatched takeover plan

The Japan Times

WILMINGTON, DELAWARE - A Delaware judge on Monday ordered that South Korean game developer Krafton reinstate the head of one of its video game studios, ruling he had been improperly removed as part of a takeover plan hatched by ChatGPT. Krafton CEO Changhan Kim had largely followed the advice of the artificial intelligence tool ChatGPT during a $250 million dispute with the leaders of the Subnautica game maker Unknown Worlds Entertainment, which Krafton had acquired, according to the ruling by Vice Chancellor Lori Will of the Court of Chancery in Delaware. Businesses and governments are scrambling for new ways to use AI, and the technology has been blamed for mass layoffs, fears of autonomous weapons and concerns about civil rights. Companies caught in takeover-related legal battles often spend millions of dollars on teams of attorneys and advisers from top-flight Wall Street firms.