Law


Boroux Versus Rorra Countertop Water Filters, Tested Head to Head

WIRED

In a world of plastic water filter pitchers, I tested two of the new generation of stainless-steel filter systems. I will admit that the popularity of those giant, stainless steel, gravity-fed water filters remained a mystery to me for some years--even as multi-gallon water filter systems from brands like British Berkefeld and Berkey seemed to proliferate equally among lovers of doomsday prepping and holistic wellness retreats. I have been testing many different breeds of water filters for more than a year now, including reverse osmosis filters and water pitchers. But often, the big water filter tanks have seemed as much like status symbols as functional items. If you see a big gravity-fed filter, you know the person in question is serious about wellness, survival, or both. What changed my mind about these big stainless steel filters was microplastics. Most water filter pitchers are made of BPA-free plastic. But as new research shows that bottled-water drinkers ingest tens of thousands of excess microplastic particles, wellness lovers have begun to look askance at water filters that are themselves made of plastic.


AI pilot program in L.A. County courts will help judges craft rulings in some cases

Los Angeles Times

A select panel of L.A. County judges now has access to an artificial intelligence tool that can help them summarize motions and draft rulings in civil court. The tool, Learned Hand, is already in use by judges in 10 states, according to the company's CEO.


A principled approach for data bias mitigation

AIHub

How do you know if your data is fair? And if it isn't, what can you do about it? Machine learning models are increasingly used to make high-stakes decisions, from predicting who gets a loan to estimating the likelihood that someone will reoffend. But these models are only as good as the data they learn from [Shahbazi 2023]. If the training data is biased, the model's decisions will likely be biased too [Hort 2024, Pagano 2023].
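The point about biased training data can be made concrete with a toy sketch. This is a minimal illustration with synthetic records and hypothetical numbers (not from the article): a simple frequency-based "model" fit to historically biased approval labels reproduces the disparity, approving one group less often despite identical qualifications.

```python
# Minimal sketch (synthetic data, hypothetical numbers): a model trained on
# historically biased labels inherits the bias. Group B's past approvals
# were suppressed, so even a simple frequency-based predictor learns to
# approve group B less often despite identical qualifications.
from collections import defaultdict

# (group, qualified, historically_approved) records; every applicant is
# qualified, but group B was approved only half the time in the past.
training_data = (
    [("A", True, True)] * 50 +
    [("B", True, True)] * 25 +
    [("B", True, False)] * 25
)

# "Training": estimate P(approved | group) from the biased labels.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, _qualified, approved in training_data:
    counts[group][0] += int(approved)
    counts[group][1] += 1

approval_rate = {g: a / n for g, (a, n) in counts.items()}
print(approval_rate)  # {'A': 1.0, 'B': 0.5} -- the historical bias survives
```

The model never sees anything that distinguishes the groups except the biased labels themselves, which is exactly the failure mode the blurb describes.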


Trump administration defends Anthropic blacklisting in US court

Al Jazeera

The administration of United States President Donald Trump said in a court filing on Tuesday that the Pentagon's blacklisting of Anthropic was justified and lawful, opposing the artificial intelligence company's high-stakes lawsuit challenging the decision. The filing says Anthropic is unlikely to succeed in its claims that the US government's action violated speech protections under the US Constitution's First Amendment, asserting that the dispute stems from contract negotiations and national security concerns, not retaliation.


L.A. teachers union widely expected to announce strike date at massive Wednesday rally

Los Angeles Times

Members of the largest unions representing teachers and nonteachers participate in a joint rally at Grand Park in March 2023. The scene will be repeated on Wednesday, with union members once again on the verge of a strike.


Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems

WIRED

In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.


Tennessee Teens Sue Elon Musk's xAI Over Child Sexual Abuse Images

Mother Jones

Elon Musk leaves a meeting with House Republicans in the basement of the US Capitol building on March 5, 2025 in Washington, DC. Tennessee teenagers are suing Elon Musk's company xAI over allegations that its artificial intelligence tool Grok undressed photos of them as minors--the latest challenge against the wealthiest living person's chatbot.


Counterfactual Fairness

Neural Information Processing Systems

Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
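The definition in the abstract can be sketched in a few lines of code. This is a toy illustration with hypothetical variable names (not the paper's law-school model): in a simple structural causal model, an observed score depends on a latent "talent" variable U and a protected attribute A, and a predictor is counterfactually fair if its output is unchanged when we intervene on A while holding U fixed.

```python
# Toy structural causal model: observed score depends on latent talent U
# and protected attribute A (the bias enters through A).
def score(u_talent, a_group):
    # Structural equation: the observed score is inflated for group 1.
    return u_talent + (5.0 if a_group == 1 else 0.0)

def naive_predictor(u_talent, a_group):
    # Uses the biased observed score directly -> inherits the bias.
    return score(u_talent, a_group)

def fair_predictor(u_talent, a_group):
    # Uses only a non-descendant of A (the latent talent U).
    return u_talent

def is_counterfactually_fair(predictor, u_talent, a_group):
    # Compare the prediction in the actual world with the prediction in
    # the counterfactual world where we intervene on A, holding U fixed.
    actual = predictor(u_talent, a_group)
    counterfactual = predictor(u_talent, 1 - a_group)
    return actual == counterfactual

print(is_counterfactually_fair(naive_predictor, u_talent=70.0, a_group=0))  # False
print(is_counterfactually_fair(fair_predictor, u_talent=70.0, a_group=0))   # True
```

The check mirrors clauses (a) and (b) of the definition: only the predictor that ignores every descendant of the protected attribute passes.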


The Human Skill That Eludes AI

The Atlantic - Technology

Why can't language models write well? In a certain, strange way, generative AI peaked with OpenAI's GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. "You could be like, 'Continue this story:,' and GPT-2 would be like, ','" Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. "The models won't do that anymore." AI leaders boast about their models' superhuman technical abilities.


Senators tell ByteDance to shut down Seedance 2.0 AI video app 'immediately'

Engadget

They said the company 'has shown it is willing to... steal the intellectual property of American creators.' After ByteDance suspended the global rollout of its new Seedance 2.0 AI video generator over the weekend, US senators have now told the company to immediately shut down the app. 'Seedance 2.0 poses a direct threat to the American intellectual property system and, more broadly, to the constitutional rights and economic livelihoods of our creative community,' Senators Marsha Blackburn and Peter Welch wrote in a letter to the company. 'Responsible global companies follow the law and respect core economic rights, including intellectual property and personal likeness protections,' the senators wrote. They cited Seedance AI examples including an AI-generated Thanos and Superman battle, a rewritten ending and that famous (fake) Tom Cruise and Brad Pitt battle.