Spreading AI-generated content could lead to expensive fines
AI-generated "deepfake" material is flooding the internet, sometimes with dangerous results. In just the last year, AI has been used to make deceptive voice clones of a former US president and to spread fake, politically charged images depicting children in natural disasters. Nonconsensual, AI-generated sexual images and videos, meanwhile, are leaving a trail of trauma affecting everyone from high schoolers to Taylor Swift. Large tech companies like Microsoft and Meta have made some efforts to identify instances of AI manipulation, but with only muted success. Now, governments are stepping in to try to stem the tide with something they know quite a bit about: fines.
Meta plans to more broadly label AI-generated content
Meta says that its current approach to labeling AI-generated content is too narrow and that it will soon apply a "Made with AI" badge to a broader range of videos, audio and images. Starting in May, it will append the label to media when it detects industry-standard AI image indicators or when users disclose that they're uploading AI-generated content. The company may also apply the label to posts that fact-checkers flag, and it's likely to downrank content that's been identified as false or altered. Meta announced the measure in the wake of an Oversight Board decision regarding a video that was maliciously edited to depict President Joe Biden touching his granddaughter inappropriately. The Oversight Board agreed with Meta's decision not to take the video down from Facebook, as it didn't violate the company's rules on manipulated media.