Clearview AI ordered to delete personal data of UK residents

AIHub

The Information Commissioner's Office (ICO) in the UK has fined facial recognition database company Clearview AI Inc more than £7.5m for using images of people that were scraped from websites and social media. Clearview AI collected the data to create a global online database, with one of the resulting applications being facial recognition. Clearview AI has also been ordered to delete the personal data it holds on UK residents and to stop obtaining and using personal data that is publicly available on the internet. The ICO is the UK's independent authority set up to uphold information rights in the public interest. This action follows an investigation carried out jointly with the Office of the Australian Information Commissioner (OAIC).


UK and Australian data regulators to probe Clearview AI - Techerati

#artificialintelligence

Clearview's facial recognition software, popular with law enforcement, uses images scraped from the internet and social media. Data regulators in the UK and Australia have announced a joint investigation into the practices of facial recognition app Clearview AI. The UK Information Commissioner's Office (ICO) and the Office of the Australian Information Commissioner (OAIC) said they are looking into the firm's use of data "scraped" from the internet. Clearview AI uses its facial recognition software to help law enforcement match photos of unknown people to other images online, drawing on the company's database of photos taken from publicly accessible social media platforms, including Facebook, and other websites. The controversial system has raised questions about privacy and consent to data gathering, but has been used by a number of law enforcement agencies in the US. A report by BuzzFeed earlier this year also claimed that a number of UK law enforcement agencies had registered with Clearview, including the Metropolitan Police and the National Crime Agency, as well as other regional police forces.
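The matching step described above can be pictured as a nearest-neighbour search over face embeddings. The sketch below is illustrative only: it is not Clearview's actual implementation, the embeddings are random stand-ins for the output of a face-embedding model, and the names `nearest_matches`, `database` and `identities` are hypothetical.

```python
import numpy as np

# Illustrative stand-ins: in a real system these vectors would come from a
# face-embedding model applied to scraped photos; here they are random.
rng = np.random.default_rng(seed=0)
database = rng.normal(size=(1_000, 128))            # one embedding per scraped photo
identities = [f"person_{i}" for i in range(1_000)]  # hypothetical label for each photo

def nearest_matches(query, db, k=5):
    """Return indices of the k database embeddings most similar to the query."""
    q = query / np.linalg.norm(query)
    d = db / np.linalg.norm(db, axis=1, keepdims=True)
    scores = d @ q                                   # cosine similarity against every photo
    return np.argsort(scores)[::-1][:k]

query = rng.normal(size=128)                         # embedding of the unknown face
for idx in nearest_matches(query, database):
    print(identities[idx])
```

At the scale reported for Clearview (billions of photos), a brute-force comparison like this would be impractical; a production system would more plausibly use an approximate nearest-neighbour index.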


Facial Recognition Firm Clearview AI Suffers Data Breach

#artificialintelligence

A controversial facial recognition company has just informed its customers of a data breach in which its entire client list was stolen. Clearview AI leapt to fame in January when a New York Times report claimed that the start-up had scraped up to three billion images from social media sites to add to its database. That makes it a useful resource for its law enforcement clients, which can query images they capture against the trove. The FBI's own database is said to contain little more than 600 million images. Now those clients have been exposed after an unauthorized intruder managed to access Clearview AI's entire customer list, the number of user accounts those companies have set up, and the number of searches they've carried out.


LAPD Drops Clearview A.I. -- But Not All Facial Recognition

#artificialintelligence

This week, the Los Angeles Police Department told BuzzFeed News that it would stop using Clearview AI, the company that scraped billions of images from the internet, including social media sites, to form a massive searchable database of faces and identities. Reading that story, it's important to keep in mind that, despite the headline, L.A. law enforcement is far from giving up facial recognition technology. The police department will still use its existing facial recognition database, which contains more than eight million booking photos and is run by facial recognition contractor DataWorks Plus. DataWorks Plus sells photo management software that connects to third-party facial recognition algorithms, like those from NEC and Rank One. Last year, OneZero reported that DataWorks Plus was working on bridging these facial recognition databases across California in a service called the California Facial Recognition Interconnect.


Controversial facial recognition company Clearview AI just had its entire client list stolen

#artificialintelligence

In recent months, Clearview AI has been attacked from all sides by lawmakers, tech giants, and privacy advocates for its business practices, which include scraping public images of people from sites like LinkedIn, Venmo, Facebook, and YouTube. Clearview AI's systems then allow clients to search for people in its database using these scraped images. While several law enforcement agencies are known to use Clearview AI's services, the breach of its entire client list may embarrass other organizations that are clients of the company and wish to remain unknown. For now, however, it appears that Clearview AI's client list has not been made public. Clearview AI disclosed the breach in an email to clients, saying an intruder "gained unauthorized access" to the client list.