Privacy in the Age of AI
In January 2020, privacy journalist Kashmir Hill published an article in The New York Times describing Clearview AI, a company that purports to help U.S. law enforcement match photos of unknown people to their online presence through a facial recognition model trained by scraping millions of publicly available face images online.a In 2021, police departments in many U.S. cities were reported to have used Clearview AI to, for example, identify Black Lives Matter protesters.b In 2022, a California-based artist found that photos she believed to be in her private medical record had been included, without her knowledge or consent, in the LAION training dataset that has been used to train Stable Diffusion and Google Imagen.c The artist has a rare medical condition she prefers to keep private and expressed concern about the abuse potential of generative AI technologies having access to her photos. In January 2023, Twitch streamer QTCinderella made an emphatic plea to her followers on Twitter to stop spreading links to an illicit website hosting AI-generated "deepfake" pornography of her and other women influencers.