Meta Will Crack Down on AI-Generated Fakes--but Leave Plenty Undetected

WIRED 

Meta, like other leading tech companies, has spent the past year promising to speed up deployment of generative artificial intelligence. Today it acknowledged it must also respond to the technology's hazards, announcing an expanded policy of tagging AI-generated images posted to Facebook, Instagram, and Threads with warning labels to inform people of their artificial origins.

Yet much of the synthetic media likely to appear on Meta's platforms will not be covered by the new policy, leaving many gaps through which malicious actors could slip. "It's a step in the right direction, but with challenges," says Sam Gregory, program director of the nonprofit Witness, which helps people use technology to support human rights.

Meta already labels AI-generated images made using its own generative AI tools with the tag "Imagined with AI," in part by looking for the digital "watermark" its algorithms embed into their output.
