Instacart settles Federal Trade Commission's claim it deceived US shoppers

Al Jazeera

Instacart settles Federal Trade Commission's claim it deceived US shoppers Instacart has agreed to pay $60m in refunds to settle allegations brought by the United States Federal Trade Commission (FTC) that the online grocery delivery platform deceived consumers about its membership programme and free delivery offers. According to court documents filed in San Francisco on Thursday, Instacart's offer of "free delivery" for first orders was illusory because shoppers were charged other fees, the FTC alleged. "The FTC is focused on monitoring online delivery services to ensure that competitors are transparently competing on price and delivery terms," said Christopher Mufarrige, who leads the FTC's consumer protection work. An Instacart spokesperson said the company flatly denies any allegations of wrongdoing, but that the settlement allows the company to focus on shoppers and retailers. "We provide straightforward marketing, transparent pricing and fees, clear terms, easy cancellation, and generous refund policies -- all in full compliance with the law and exceeding industry norms," the spokesperson said.


Visual Model Selection using Feature Importance Clusters in Fairness-Performance Similarity Optimized Space

Kitharidis, Sofoklis, Veenman, Cor J., Bäck, Thomas, van Stein, Niki

arXiv.org Artificial Intelligence

In the context of algorithmic decision-making, fair machine learning methods often yield multiple models that balance predictive fairness and performance to varying degrees. This diversity introduces a challenge for stakeholders who must select a model that aligns with their specific requirements and values. To address this, we propose an interactive framework that assists in navigating and interpreting the trade-offs across a portfolio of models. Our approach leverages weakly supervised metric learning to learn a Mahalanobis distance that reflects similarity in fairness and performance outcomes, effectively structuring the feature importance space of the models according to stakeholder-relevant criteria. We then apply a clustering technique (k-means) to group models based on their transformed representations of feature importances, allowing users to explore clusters of models with similar predictive behaviors and fairness characteristics. This facilitates informed decision-making by helping users understand how models differ not only in their fairness-performance balance but also in the features that drive their predictions.
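The pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature-importance vectors, the similar-pair labels, and the simple scatter-based metric-learning scheme below are all assumptions standing in for the weakly supervised method the paper actually uses.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical data: feature-importance vectors for 20 candidate models,
# each over 5 input features (rows normalized to sum to 1).
X = rng.random((20, 5))
X /= X.sum(axis=1, keepdims=True)

# Weak supervision: pairs of models judged "similar" in their
# fairness/performance outcomes (hypothetical labels).
similar_pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]

# One simple Mahalanobis metric: whiten by the scatter of difference
# vectors of similar pairs, so similar models end up close together.
D = np.array([X[i] - X[j] for i, j in similar_pairs])
S = D.T @ D / len(D) + 1e-6 * np.eye(X.shape[1])  # regularized scatter
L = np.linalg.cholesky(np.linalg.inv(S))  # M = L L^T defines d(x,y)^2 = ||L^T (x-y)||^2

# Transform feature importances into the learned space, then cluster.
Z = X @ L
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(labels)  # cluster assignment for each of the 20 models
```

In the transformed space, Euclidean distance between rows of `Z` equals the learned Mahalanobis distance between the original importance vectors, which is why plain k-means on `Z` groups models by the stakeholder-relevant similarity.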


People Who Say They're Experiencing AI Psychosis Beg the FTC for Help

WIRED

People Who Say They're Experiencing AI Psychosis Beg the FTC for Help The Federal Trade Commission received 200 complaints mentioning ChatGPT between November 2022 and August 2025. Several attributed delusions, paranoia, and spiritual crises to the chatbot. On March 13, a woman from Salt Lake City, Utah, called the Federal Trade Commission to file a complaint against OpenAI's ChatGPT. She claimed to be acting "on behalf of her son, who was experiencing a delusional breakdown." "The consumer's son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous," reads the FTC's summary of the call.


The FTC Is Disappearing Blog Posts About AI Published During Lina Khan's Tenure

WIRED

The FTC Is Disappearing Blog Posts About AI Published During Lina Khan's Tenure The Federal Trade Commission removed several blog posts in recent months about open source and potential risks to consumers from the rapid spread of commercial AI tools. Lina Khan, former chair of the Federal Trade Commission, arrives to testify before Congress in 2024. In late July 2024, Lina Khan, then the chair of the US Federal Trade Commission, gave a speech at an event hosted by the San Francisco startup accelerator Y Combinator in which she positioned herself as an advocate for open source artificial intelligence. The event took place as California lawmakers were considering a landmark bill called SB 1047 that would have imposed new testing and safety requirements on AI companies. Critics of the legislation, which was later vetoed by California governor Gavin Newsom, argued it would hamper the development and release of open source AI models.


Amazon Might Owe You $51. Here's How to Find Out if You're Eligible

WIRED

Here's How to Find Out if You're Eligible In a settlement with the FTC, Amazon will have to pay out over a billion dollars to US customers for "deceptive" sign-up and cancellation processes. Amazon customers with a Prime subscription will soon be able to make claims online for their share of the $1.5 billion the company is being ordered to pay to users in the United States. Amazon now has to "provide $1.5 billion in refunds back to consumers harmed by their deceptive Prime enrollment practices," according to a press release from the FTC. The total settlement with the FTC is $2.5 billion, which includes a $1 billion penalty owed to the government. "There was no admission of guilt in this settlement by the company or any executives," says Alisa Carroll, an Amazon spokesperson, in an email sent to WIRED on Thursday after the decision was released.


What you may have missed about Trump's AI Action Plan

MIT Technology Review

But if you dig deeper, certain parts of the plan that didn't pop up in any headlines reveal more about where the administration's AI plans are headed. Here are three of the most important issues to watch. When Americans get scammed, they're supposed to be helped by the Federal Trade Commission. As I wrote last week, the FTC under President Biden increasingly targeted AI companies that overhyped the accuracy of their systems, as well as deployments of AI it found to have harmed consumers. The Trump plan vows to take a fresh look at all the FTC actions under the previous administration as part of an effort to get rid of "onerous" regulation that it claims is hampering AI's development.


America's AI watchdog is losing its bite

MIT Technology Review

It found that the security giant Evolv lied about the accuracy of its AI-powered security checkpoints, which are used in stadiums and schools but failed to catch a seven-inch knife that was ultimately used to stab a student. It went after the facial recognition company Intellivision, saying the company made unfounded claims that its tools operated without gender or racial bias. It fined startups promising bogus "AI lawyer" services and one that sold fake product reviews generated with AI. These actions did not result in fines that crippled the companies, but they did stop them from making false statements and offered customers ways to recover their money or get out of contracts. In each case, the FTC found, everyday people had been harmed by AI companies that let their technologies run amok.


TuCo: Measuring the Contribution of Fine-Tuning to Individual Responses of LLMs

Nuti, Felipe, Franzmeyer, Tim, Henriques, João

arXiv.org Artificial Intelligence

Past work has studied the effects of fine-tuning on large language models' (LLMs) overall performance on certain tasks. However, a quantitative and systematic method for analyzing its effect on individual outputs is still lacking. Here, we propose a new method for measuring the contribution that fine-tuning makes to individual LLM responses, assuming access to the original pre-trained model. Our method tracks the model's intermediate hidden states, providing a more fine-grained insight into the effects of fine-tuning than a simple comparison of final outputs from pre-trained and fine-tuned models. We introduce and theoretically analyze an exact decomposition of any fine-tuned LLM into a pre-training component and a fine-tuning component. Empirically, we find that model behavior and performance can be steered by up- or down-scaling the fine-tuning component during the forward pass. Motivated by this finding and our theoretical analysis, we define the Tuning Contribution (TuCo) as the ratio of the magnitudes of the fine-tuning component to the pre-training component. We observe that three prominent adversarial attacks on LLMs circumvent safety measures in a way that reduces TuCo, and that TuCo is consistently lower on prompts where these attacks succeed compared to those where they do not. This suggests that attenuating the effect of fine-tuning on model outputs plays a role in the success of such attacks. In summary, TuCo enables the quantitative study of how fine-tuning influences model behavior and safety, and vice versa.
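The core quantity can be illustrated with a toy computation. This is a simplified proxy, not the paper's exact decomposition (which tracks layer-wise contributions through the forward pass): here the fine-tuning component is approximated as the difference between fine-tuned and pre-trained hidden states, and the array shapes are assumptions for illustration.

```python
import numpy as np

def tuco(hidden_pre: np.ndarray, hidden_ft: np.ndarray) -> float:
    """Toy Tuning Contribution for one prompt.

    hidden_pre, hidden_ft: hidden states of shape (layers, d_model)
    from the pre-trained and fine-tuned models on the same input.
    Returns the ratio of the magnitude of the fine-tuning component
    to that of the pre-training component.
    """
    ft_component = hidden_ft - hidden_pre   # what fine-tuning changed
    pre_component = hidden_pre              # what pre-training contributed
    return float(np.linalg.norm(ft_component) / np.linalg.norm(pre_component))

# Hypothetical hidden states: 4 layers, model width 8.
h_pre = np.ones((4, 8))
print(tuco(h_pre, h_pre))      # identical states: fine-tuning contributed nothing
print(tuco(h_pre, 2 * h_pre))  # states doubled: components have equal magnitude
```

Under this framing, the paper's observation reads naturally: jailbreak prompts that succeed push the fine-tuned model's hidden states back toward the pre-trained model's, shrinking the numerator and hence the TuCo score.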


The FTC has removed all business blog posts from the Biden administration

Engadget

The Federal Trade Commission has removed all posts from President Joe Biden's term in office from its business blog. This online publication has historically provided advice about how companies could best comply with consumer-protection regulations, covering topics such as artificial intelligence and how big tech companies have collected and used customer data. Currently, it has no content published between December 21, 2020 and March 7, 2025. Wired highlighted some of the notable content from the more than 300 blog posts that have been deleted. Several current and former FTC officials spoke to the publication about the change anonymously out of fear of retaliation.


The Trump administration could reverse progress on AI regulation

Al Jazeera

While efforts to regulate the creation and use of artificial intelligence (AI) tools in the United States have been slow to make gains, the administration of President Joe Biden has attempted to outline how AI should be used by the federal government and how AI companies should ensure the safety and security of their tools. The incoming Trump administration, however, has a very different view on how to approach AI, and it could end up reversing some of the progress that has been made over the past several years. President Biden signed an executive order in October 2023 that was meant to promote the "safe, secure, and trustworthy development and use of artificial intelligence" within the federal government. President-elect Donald Trump has promised to repeal that executive order, saying it would hinder innovation. Biden was also able to get seven leading AI companies to agree to guidelines for how AI should be safely developed going forward.