AI models spit out photos of real people and copyrighted images

MIT Technology Review 

These image-generating AI models are trained on vast data sets of images paired with text descriptions that have been scraped from the internet. The latest generation of the technology works by taking images from the data set and adding noise step by step until the original image is nothing but a collection of random pixels. The AI model then learns to reverse that process, turning the noisy mess back into a new image.

The paper is the first time researchers have managed to prove that these AI models memorize images in their training sets, says Ryan Webster, a PhD student at the University of Caen Normandy in France, who has studied privacy in other image generation models but was not involved in the research. This could have implications for startups that want to use generative AI models in health care, because it shows that these systems risk leaking sensitive private information.
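For readers who want to see the mechanics, below is a minimal sketch, in Python with NumPy, of the forward "noising" step described above. It illustrates the general diffusion idea rather than any code from the paper; the function name forward_diffusion and the noise schedule values are placeholders chosen for clarity. The trained model's job is to learn the reverse of this corruption, turning random pixels back into an image.

import numpy as np

def forward_diffusion(image, num_steps=1000, beta_start=1e-4, beta_end=0.02, seed=0):
    # Gradually corrupt an image with Gaussian noise, as in the forward pass
    # of a diffusion model; after enough steps the result is pure noise.
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, num_steps)   # noise schedule (assumed values)
    alpha_bars = np.cumprod(1.0 - betas)                    # cumulative share of the signal kept at each step
    x0 = image.astype(np.float64)
    snapshots = []
    for t in range(num_steps):
        noise = rng.standard_normal(x0.shape)
        # Closed-form corruption of the original image at step t:
        #   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
        x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
        snapshots.append(x_t)
    return snapshots  # the final snapshot is essentially random pixels

if __name__ == "__main__":
    demo = np.zeros((8, 8))               # a tiny all-black "image"
    steps = forward_diffusion(demo)
    print("std of final step:", round(steps[-1].std(), 2))  # close to 1.0, i.e. pure noise

Generation then runs this in reverse: starting from pure noise, the model removes a little of the noise at each step until a new image emerges.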
