c2pa
Adobe wants to make it easier for artists to blacklist their work from AI scraping
Content credentials are based on C2PA, an internet protocol that uses cryptography to securely label images, video, and audio with information clarifying where they came from: the 21st-century equivalent of an artist's signature. Although Adobe had already integrated the credentials into several of its products, including Photoshop and its own generative AI model Firefly, Adobe Content Authenticity allows creators to apply them to content regardless of whether it was created using Adobe tools. The company is launching a public beta in early 2025. The new app is a step in the right direction toward making C2PA more ubiquitous and could make it easier for creators to start adding content credentials to their work, says Claire Leibowicz, head of AI and media integrity at the nonprofit Partnership on AI. "I think Adobe is at least chipping away at starting a cultural conversation, allowing creators to have some ability to communicate more and feel more empowered," she says. "But whether or not people actually respond to the 'Do not train' warning is a different question."
TikTok to auto-flag AI videos – even if created on other platforms
TikTok will flag artificial intelligence-generated content (AIGC) that users upload to the video-sharing site from other platforms, the company says, becoming the first big video site to automatically label such content for viewers. Content created using TikTok's own AI tools is already automatically marked as such, and the company has required creators to manually add the same labels to their own content, but until now they have been able to evade the rules and pass off generated material as authentic by uploading it from other platforms. Now, the company will begin using digital watermarks created by the cross-industry group Coalition for Content Provenance and Authenticity (C2PA) to identify and label as much AIGC as it can. "AI enables incredible creative opportunities but can confuse or mislead viewers if they don't know content was AI-generated," said Adam Presser, the head of operations and trust and safety at TikTok. "Labelling helps make that context clear – which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year."
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
Why Big Tech's watermarking plans are some welcome good news
On February 6, Meta said it was going to label AI-generated images on Facebook, Instagram, and Threads. When someone uses Meta's AI tools to create images, the company will add visible markers to the image, as well as invisible watermarks and metadata in the image file. The company says its standards are in line with best practices laid out by the Partnership on AI, an AI research nonprofit. Big Tech is also throwing its weight behind a promising technical standard that could add a "nutrition label" to images, video, and audio. Called C2PA, it's an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as "provenance" information.
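The real C2PA standard binds provenance claims to content using X.509 certificates and COSE signatures, but the core idea — hashing the content, attaching provenance metadata, and signing the combination so any later edit is detectable — can be sketched in a few lines. This is a simplified illustration only, not the C2PA manifest format; the shared HMAC key here stands in for a proper signing certificate, and all names are hypothetical:

```python
import hashlib
import hmac
import json

def make_manifest(content: bytes, claims: dict, key: bytes) -> dict:
    """Bind provenance claims to a content hash and sign the result.

    Simplified sketch: real C2PA manifests use X.509 certificates and
    COSE signatures, not a shared-secret HMAC.
    """
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claims": claims,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {
        "payload": payload,
        "signature": hmac.new(key, blob, hashlib.sha256).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """Check the signature AND that the content hash still matches."""
    blob = json.dumps(manifest["payload"], sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(key, blob, hashlib.sha256).hexdigest(),
    )
    hash_ok = (
        manifest["payload"]["content_sha256"]
        == hashlib.sha256(content).hexdigest()
    )
    return sig_ok and hash_ok

image = b"\x89PNG...fake image bytes"  # stand-in for real image data
key = b"demo-signing-key"              # stands in for a signing certificate
manifest = make_manifest(
    image, {"generator": "ExampleAI", "ai_generated": True}, key
)
assert verify_manifest(image, manifest, key)             # untouched content verifies
assert not verify_manifest(image + b"x", manifest, key)  # any edit breaks the seal
```

The point of the design is that the label travels with the file and is tamper-evident: a platform that strips or alters either the pixels or the metadata invalidates the signature, which is exactly the "nutrition label" property the protocol is after.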
Meta plans to ramp up labeling of AI-generated images across its platforms
Meta plans to ramp up its labeling of AI-generated images across Facebook, Instagram and Threads to help make it clear that the visuals are artificial. It's part of a broader push to tamp down misinformation and disinformation, which is particularly significant as we wrangle with the ramifications of generative AI (GAI) in a major election year in the US and other countries. According to Meta's president of global affairs, Nick Clegg, the company has been working with partners from across the industry to develop standards that include signifiers that an image, video or audio clip has been generated using AI. "Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads," Clegg wrote in a Meta Newsroom post. "We're building this capability now, and in the coming months we'll start applying labels in all languages supported by each app."
- Media > News (0.91)
- Government > Regional Government > North America Government > United States Government (0.36)
The Download: military personnel data for sale, and AI watermarking
For as little as $0.12 per record, data brokers in the US are selling sensitive private data about both active-duty military members and veterans, including their names, addresses, geolocation, net worth, and religion, and information about their children and health conditions. In an unsettling study published today, researchers from Duke University approached 12 data brokers and purchased thousands of records about American service members with minimal vetting. The study highlights the extreme privacy and national security risks created by data brokers. These companies are part of a shadowy multibillion-dollar industry that collects, aggregates, buys, and sells data, practices that are currently legal in the US, exacerbating the erosion of personal and consumer privacy. Last week, President Biden released his executive order on AI, a sweeping set of rules and guidelines designed to improve AI safety and security.
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.81)
The race to find a better way to label AI
With the boom of AI-generated text, images, and videos, both lawmakers and average internet users have been calling for more transparency. Though simply adding a label might seem like a reasonable ask (and it is), it is not actually an easy one, and the existing solutions, like AI-powered detection and watermarking, have some serious pitfalls. As my colleague Melissa Heikkilä has written, most of the current technical solutions "don't stand a chance against the latest generation of AI language models." Nevertheless, the race to label and detect AI-generated content is on. That's where the C2PA protocol comes in.
Cryptography may offer a solution to the massive AI-labeling problem
Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all its AI-generated content, including images from its DALL-E-powered AI image generator. Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by "supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist's creation versus AI-generated or modified art." Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt in to labeling their visual and audio content with information about where it came from. Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone.
Microsoft will ID its AI art with a hidden watermark
Artists concerned about others passing off AI-generated art as their own will now be able to breathe a bit easier: Microsoft has agreed to sign all AI art that its apps generate with a cryptographic watermark indicating it was made with an algorithm. The Coalition for Content Provenance and Authenticity (C2PA) began work in 2021 to develop an open standard for indicating the origin of digital images, and whether they were authentic or AI-generated. The issue was thrust into the spotlight in March, when AI-generated images of the Pope in a stylish puffy jacket went viral, and AI-art generator Midjourney clamped down to prevent even more. Microsoft, a founding member of the C2PA, will announce at its Microsoft Build developer conference this week that it will cryptographically sign AI-generated images from Bing Image Creator and Microsoft Designer. Images made with Bing Image Creator already include a small "b" for the Bing logo in the bottom right-hand corner.
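The "invisible watermark" half of these announcements typically works by nudging pixel values in ways a viewer cannot perceive but software can read back. A toy least-significant-bit scheme over raw pixel bytes shows the principle; this is purely illustrative, and production watermarks of the kind Meta and Microsoft describe are far more robust to cropping, compression, and re-encoding:

```python
def embed_watermark(pixels: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the least significant bits."""
    message = bytearray()
    for b in range(length):
        value = 0
        for i in range(8):
            value = (value << 1) | (pixels[b * 8 + i] & 1)
        message.append(value)
    return bytes(message)

pixels = bytearray(range(256)) * 2           # stand-in for raw image data
marked = embed_watermark(pixels, b"AIGC")
assert extract_watermark(marked, 4) == b"AIGC"
# each byte changes by at most 1, so the mark is invisible to the eye:
assert max(abs(a - b) for a, b in zip(pixels, marked)) <= 1
```

The weakness of LSB-style schemes is exactly why the industry is pairing watermarks with signed C2PA metadata: a single re-save through a lossy codec can scrub the low bits, while a cryptographic manifest makes tampering detectable rather than merely hiding the mark.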
Deepfakes: Microsoft and others in big tech are working to bring authenticity to videos, photos
Great (or terrifying) moments in deepfake history: The argument about whether a video of President Joe Biden talking to reporters on the South Lawn of the White House was real (it was). The Dutch, British and Latvian MPs convinced their Zoom conference with the chief of staff of the Russian opposition leader Alexei Navalny was a deepfake. A special effects expert who made their friend look exactly like Tom Cruise for a TikTok video ironically designed to alert people to the dangers of fake footage. Product placement being digitally added to old videos and movies, and Anthony Bourdain's recreated voice speaking in a documentary. A mother creating fake videos of the other members of her daughter's cheerleading squad behaving badly in an attempt to get them kicked off the team.
- North America > United States (0.54)
- Europe > United Kingdom > England (0.06)
- North America > Canada (0.04)
- Asia > Azerbaijan (0.04)
- Media (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (0.54)