Communications
Congress passes 'Take It Down' Act to fight AI-fueled deepfake pornography
Congress has passed a bill that forces tech companies to take action against certain deepfakes and revenge porn posted on their platforms. In a 409-2 vote on Monday, the U.S. House of Representatives passed the "Take It Down" Act, which has received bipartisan support. The bill also received vocal support from celebrities and First Lady Melania Trump. The bill already passed the Senate in a vote last month. The Take It Down Act will now be sent to President Donald Trump, who is expected to sign it into law.
Meta has finally launched its ChatGPT competitor
In the thick of Meta's first-ever AI developer conference, LlamaCon, CEO Mark Zuckerberg hit the launch button on Meta AI -- the company's full-throttle answer to OpenAI's ChatGPT. According to Meta's April 29 press release, the new standalone AI app is built on the company's latest Llama 4 model. It's pitched as a hyper-personalized assistant for users already living inside the Meta ecosystem: WhatsApp, Instagram, Facebook -- you know the drill. In an Instagram video, Zuckerberg, wearing a pair of Meta Ray-Ban glasses, presented the app as a product built for voice-first conversations. One standout feature is the Discover feed.
Meta's new AI app delivers a chatbot with a social media twist
Meta has unveiled a new AI app that combines chatbot features with a social media experience. Launched on Tuesday for the iPhone, iPad, and Android devices, the Meta AI app offers most of the features you'd expect from an AI assistant: you can get information, generate content, and analyze photos and other images. "Meta AI is built to get to know you, so its answers are more helpful," Meta said in a news release. "It's easy to talk to, so it's more seamless and natural to interact with. It's more social, so it can show you things from the people and places you care about. And you can use Meta AI's voice features while multitasking and doing other things on your device, with a visible icon to let you know when the microphone is in use."
Meta has a plan to bring AI to WhatsApp chats without breaking privacy
As Meta's first-ever generative AI conference gets underway, the company is also previewing a significant update on its plans to bring AI features to WhatsApp chats. Buried in its LlamaCon updates, the company shared that it's working on something called "Private Processing," which will allow users to take advantage of generative AI capabilities within WhatsApp without eroding its privacy features. According to Meta, Private Processing is an "optional capability" that will enable people to "leverage AI capabilities for things like summarizing unread messages or refining them, while keeping messages private." WhatsApp, of course, is known for its strong privacy protections and end-to-end encryption. That would seem incompatible with cloud-based AI features like Meta AI.
WhatsApp Is Walking a Tightrope Between AI Features and Privacy
The end-to-end encrypted communication app WhatsApp, used by roughly 3 billion people around the world, will roll out cloud-based AI capabilities in the coming weeks that are designed to preserve WhatsApp's defining security and privacy guarantees while offering users access to message summarization and composition tools. Meta has been incorporating generative AI features across its services that are built on its open source large language model, Llama. And WhatsApp already incorporates a light blue circle that gives users access to the Meta AI assistant. But many users have balked at this addition, given that interactions with the AI assistant aren't shielded from Meta the way end-to-end encrypted WhatsApp chats are. The new feature, dubbed Private Processing, is meant to address these concerns with what the company says is a carefully architected and purpose-built platform devoted to processing data for AI tasks without the information being accessible to Meta, WhatsApp, or any other party.
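Neither piece goes into the low-level engineering, so as a purely illustrative aside, the toy Python sketch below models the general shape of that kind of design: the client encrypts its unread messages, an isolated "enclave" (simulated here by an in-process class) is the only other party holding the key, and the relaying server never sees plaintext. Every name in it is hypothetical; this is a minimal sketch of the concept, not Meta's Private Processing implementation.

```python
# Toy model of "process data for AI tasks without the relay seeing plaintext".
# All names here are hypothetical; this is not Meta's Private Processing design.
from cryptography.fernet import Fernet  # pip install cryptography


class SimulatedEnclave:
    """Stands in for an isolated, attestation-gated compute environment.

    In this toy model the client and the enclave share a symmetric key
    (as if it had been established after the client verified an attestation);
    the relaying server never holds it.
    """

    def __init__(self) -> None:
        self._key = Fernet.generate_key()

    def client_channel(self) -> Fernet:
        # Hypothetical stand-in for "key established with the client after attestation".
        return Fernet(self._key)

    def summarize(self, ciphertext: bytes) -> bytes:
        # Decrypt inside the trust boundary, do the summarization, re-encrypt the result.
        messages = Fernet(self._key).decrypt(ciphertext).decode().splitlines()
        summary = f"{len(messages)} unread messages; first: {messages[0][:40]}"
        return Fernet(self._key).encrypt(summary.encode())


class UntrustedServer:
    """Relays opaque ciphertext between client and enclave; holds no key."""

    def relay(self, enclave: SimulatedEnclave, blob: bytes) -> bytes:
        return enclave.summarize(blob)


if __name__ == "__main__":
    enclave = SimulatedEnclave()
    channel = enclave.client_channel()        # key shared only client <-> enclave

    unread = "Dinner at 7?\nFlight lands 10:40\nCall mom back"
    blob = channel.encrypt(unread.encode())   # client-side encryption

    reply = UntrustedServer().relay(enclave, blob)
    print(channel.decrypt(reply).decode())    # only the client can read the summary
```

In a real confidential-computing setup, the key exchange would hinge on the client verifying a hardware attestation of the enclave before sending any data; the sketch simply assumes that step has already succeeded.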
Researchers secretly experimented on Reddit users with AI-generated comments
A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit's most popular communities using AI-generated comments to test the persuasiveness of large language models. The experiment, which was revealed over the weekend by moderators of r/changemyview, is described by Reddit mods as "psychological manipulation" of unsuspecting users. "The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users," the subreddit's moderators wrote in a lengthy post notifying Redditors about the research. "This experiment deployed AI-generated comments to study how AI could be used to change views." The researchers used LLMs to create comments in response to posts on r/changemyview, a subreddit where Reddit users post (often controversial or provocative) opinions and request debate from other users.
Reddit users were subjected to AI-powered experiment without consent
Reddit users who were unwittingly subjected to an AI-powered experiment have hit back at scientists for conducting research on them without permission – and have sparked a wider debate about such experiments. The social media site Reddit is split into "subreddits" dedicated to a particular community, each with its own volunteer moderators. Members of one subreddit, r/ChangeMyView, so named because it invites people to discuss potentially contentious issues, were recently informed by the moderators that researchers at the University of Zurich, Switzerland, had been using the site as an online laboratory. The team's experiment seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing that they were AI-generated, to gauge people's reactions. These comments included ones impersonating rape survivors and others posing as a trauma counsellor specialising in abuse, among other personas.
US Congress passes 'Take It Down' revenge porn bill that also covers AI deepfakes
The US House of Representatives has passed the Take It Down Act, a bipartisan bill that criminalizes the "publication of non-consensual, sexually exploitative images," including AI-generated deepfakes that depict "identifiable, real people." It would also compel platforms, such as social networks, to remove those images within 48 hours of being notified. The bill enjoyed overwhelming support in Congress and was cleared for President Trump's signature by a vote of 409 to 2. It passed the Senate unanimously in February, and Trump, who previously talked about it while addressing Congress, is expected to sign the bill into law. Nearly every state in the country has its own laws addressing revenge porn, and 20 states already have laws that cover deepfakes. The bill's authors, who include Senator Ted Cruz, explained that those laws "vary in classification of crime and penalty and have uneven criminal prosecution." Victims also still have a tough time getting their images removed under those laws.
The Download: China's manufacturers' viral moment, and how AI is changing creativity
Since the video was posted earlier this month, millions of TikTok users have watched as a young Chinese man in a blue T-shirt sits beside a traditional tea set and speaks directly to the camera in accented English: "Let's expose luxury's biggest secret." He stands and lifts what looks like an Hermès Birkin bag, one of the world's most exclusive and expensive handbags, before gesturing toward the shelves filled with more bags behind him. "You recognize them: Hermès, Louis Vuitton, Prada, Gucci--all crafted in our workshops." He ends by urging viewers to buy directly from his factory. Video "exposés" like this--where a sales agent breaks down the material cost of luxury goods, from handbags to perfumes to appliances--are everywhere on TikTok right now.