

Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking' and does it work?

AIHub

Last month, Google announced SynthID Detector, a new tool to detect AI-generated content. Google claims it can identify AI-generated content in text, image, video or audio. But there are some caveats. One of them is that the tool is currently only available to "early testers" through a waitlist.


Google's Magic Editor will watermark its AI-tweaked photos

Engadget

Spotting AI's work is becoming increasingly difficult as its capabilities and subtlety continue to improve. That shift makes labeling AI-generated work all the more critical -- something that is being done in bits and pieces. The latest development comes from Google, which will now use SynthID technology to mark images edited using Reimagine in Magic Editor. Google DeepMind launched SynthID in 2023, a technology that embeds imperceptible digital watermarks in content created with generative AI. The company has previously used it in AI-powered programs such as Lyria, Imagen and Gemini.


Google just open-sourced its AI text detection tool for everyone

PCWorld

Google has open-sourced its SynthID text watermarking tool, making the technology for detecting AI-generated text available to all developers.


Google tool makes AI-generated writing easily detectable

New Scientist

Google has been using artificial intelligence watermarking to automatically identify text generated by the company's Gemini chatbot, making it easier to distinguish AI-generated content from human-written posts. That watermark system could help prevent misuse of AI chatbots for misinformation and disinformation – not to mention cheating in school and business settings. Now, the tech company is making an open-source version of its technique available so that other generative AI developers can similarly watermark the output from their own large language models, says Pushmeet Kohli at Google DeepMind, the company's AI research team, which combines the former Google Brain and DeepMind labs. "While SynthID isn't a silver bullet for identifying AI-generated content, it is an important building block for developing more reliable AI identification tools," he says. Independent researchers voiced similar optimism.


Google DeepMind is making its AI text watermark open source

MIT Technology Review

SynthID introduces additional information at the point of generation by changing the probability that tokens will be generated, explains Kohli. To detect the watermark and determine whether text has been generated by an AI tool, SynthID compares the expected probability scores for words in watermarked and unwatermarked text. Google DeepMind found that using the SynthID watermark did not compromise the quality, accuracy, creativity, or speed of generated text. That conclusion was drawn from a massive live experiment of SynthID's performance after the watermark was deployed in its Gemini products and used by millions of people. Gemini allows users to rank the quality of the AI model's responses with a thumbs-up or a thumbs-down.
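The mechanism Kohli describes can be illustrated with a much simpler scheme. SynthID's actual method (tournament sampling inside a production model) is not fully public, so the sketch below uses a generic "green list" watermark on a toy vocabulary — `VOCAB`, `green_set`, and the 90% bias are all hypothetical stand-ins — to show the core idea: biasing token probabilities at generation time leaves a statistical signal that a detector can score without access to the original model.

```python
import hashlib
import random

# Toy vocabulary; names and parameters here are illustrative, not SynthID's.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_set(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudo-randomly pick the half of the vocabulary whose probability
    the generator will boost, seeded by the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Toy generator: at each step, sample a 'green' token 90% of the time."""
    rng = random.Random(seed)
    out = ["tok0"]
    for _ in range(length):
        pool = list(green_set(out[-1])) if rng.random() < 0.9 else VOCAB
        out.append(rng.choice(pool))
    return out

def green_fraction(tokens: list) -> float:
    """Detector: score how often each token falls in its step's green set.
    Watermarked text scores well above the ~0.5 baseline of unmarked text."""
    hits = sum(tok in green_set(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

watermarked = generate_watermarked(200)
rng = random.Random(1)
unmarked = ["tok0"] + [rng.choice(VOCAB) for _ in range(200)]
```

Because the bias is gentle and spread across many tokens, the per-token distortion is small — which is consistent with DeepMind's finding that quality and speed were not compromised — yet over a few hundred tokens the detector's score separates cleanly from unmarked text.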


Google expands digital watermarks to AI-made video and text

Engadget

As Google starts to make its latest video-generation tools available, the company says it has a plan to ensure transparency around the origins of its increasingly realistic AI-generated clips. All video made by the company's new Veo model in the VideoFX app will have digital watermarks thanks to Google's SynthID system. Furthermore, SynthID will be able to watermark AI-generated text that comes from Gemini. SynthID is Google's digital watermarking system that started rolling out to AI-generated images last year. The tech embeds imperceptible watermarks into AI-made content so that AI detection tools can recognize that the content was generated by AI.


Why Big Tech's watermarking plans are some welcome good news

MIT Technology Review

On February 6, Meta said it was going to label AI-generated images on Facebook, Instagram, and Threads. When someone uses Meta's AI tools to create images, the company will add visible markers to the image, as well as invisible watermarks and metadata in the image file. The company says its standards are in line with best practices laid out by the Partnership on AI, an AI research nonprofit. Big Tech is also throwing its weight behind a promising technical standard that could add a "nutrition label" to images, video, and audio. Called C2PA, it's an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as "provenance" information.
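The provenance idea behind C2PA — cryptographically binding claims about origin to the content itself — can be sketched in a few lines. This is a heavily simplified stand-in, not the real C2PA format: actual manifests use X.509 certificates and COSE/CBOR signatures, while this demo uses a shared-secret HMAC, and all field names (`content_sha256`, `claims`) are made up for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for the demo; real C2PA signing uses
# public-key certificates, not a shared secret.
SIGNING_KEY = b"demo-key"

def make_manifest(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to the content via its hash, then sign both."""
    record = {"content_sha256": hashlib.sha256(content).hexdigest(),
              "claims": claims}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(content: bytes, record: dict) -> bool:
    """Accept only if the signature is valid AND the content is unmodified."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        record["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = record["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

image = b"\x89PNG...fake image bytes"
manifest = make_manifest(image, {"generator": "ExampleAI", "ai_generated": True})
```

The key design point is that the hash ties the "nutrition label" to one specific piece of content: edit a single byte of the image and verification fails, even though the manifest itself is untouched.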


Google wants an invisible digital watermark to bring transparency to AI art

Engadget

Google took a step towards transparency in AI-generated images today. Google DeepMind announced SynthID, a watermarking / identification tool for generative art. The company says the technology embeds a digital watermark, invisible to the human eye, directly onto an image's pixels. SynthID is rolling out first to "a limited number" of customers using Imagen, Google's art generator available on its suite of cloud-based AI tools. One of the many issues with generative art -- apart from the ethical implications of training on artists' work -- is the potential for creating deepfakes. For example, the pope's hot new hip-hop attire (an AI image created with MidJourney) going viral on social media was an early example of what could become more commonplace as generative tools evolve.
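SynthID's pixel-level watermark is a learned, proprietary technique, but the general idea of a mark hidden in pixel values that the eye cannot see can be illustrated with classic least-significant-bit steganography — a deliberately simple stand-in, not Google's method.

```python
def embed_bits(pixels: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least-significant bit of each pixel byte.
    Each channel value changes by at most 1 out of 255, which is
    imperceptible to the human eye."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small for message")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_bits(pixels: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden data from the pixel LSBs."""
    msg = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        msg.append(byte)
    return bytes(msg)

pixels = bytearray(range(256)) * 2   # toy 512-byte "image"
marked = embed_bits(pixels, b"AI")
```

Naive LSB marks are fragile — cropping or re-compression destroys them — which is precisely why production systems like SynthID instead train a model to spread the watermark robustly across the image.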


The Download: watermarking AI images, and WorldCoin's backlash

MIT Technology Review

The news: Google DeepMind has launched a new watermarking tool which labels whether pictures have been generated with AI. The tool, called SynthID, will allow users to generate images using Google's AI image generator Imagen, then choose whether to add a watermark. Watermarking--a technique where you hide a signal in a piece of text or an image to identify it as AI-generated--has become one of the most popular policy suggestions to curb harms. These new tools could help protect our pictures from AI: PhotoGuard and Glaze are just two new systems designed to make it harder to tinker with photos using AI tools.


Google DeepMind has launched a watermarking tool for AI-generated images

MIT Technology Review

Watermarking--a technique where you hide a signal in a piece of text or an image to identify it as AI-generated--has become one of the most popular ideas proposed to curb such harms. In July, the White House announced it had secured voluntary commitments from leading AI companies such as OpenAI, Google, and Meta to develop watermarking tools in an effort to combat misinformation and misuse of AI-generated content. At Google's annual conference I/O in May, CEO Sundar Pichai said the company is building its models to include watermarking and other techniques from the start. Google DeepMind is now the first Big Tech company to publicly launch such a tool. Traditionally images have been watermarked by adding a visible overlay onto them, or adding information into their metadata.