
UK privacy watchdog fines Clearview AI £7.5m and orders UK data to be deleted

ZDNet

The Information Commissioner's Office (ICO) has fined controversial facial recognition company Clearview AI £7.5 million ($9.4 million) for breaching UK data protection laws and has issued an enforcement notice ordering the company to stop obtaining and using the data of UK residents, and to delete that data from its systems. In its finding, the ICO detailed how Clearview AI failed to inform people in the UK that it was collecting their images from the web and social media to create a global online database that could be used for facial recognition; failed to have a lawful reason for collecting people's information; failed to have a process in place to stop the data being retained indefinitely; and failed to meet the data protection standards required for biometric data under the General Data Protection Regulation. The ICO also found that the company asked for additional personal information, including photos, when members of the public asked whether they were on its database.


UK fines Clearview just under $10M for privacy breaches – TechCrunch

#artificialintelligence

The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company Clearview AI, announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images scraped from the public internet, including social media services, which it uses to power an AI-based identity-matching service sold to entities such as law enforcement. The problem is that Clearview has never asked individuals whether it can use their selfies for that, and in many countries it has been found in breach of privacy laws.


Clearview AI in hot water down under – TechCrunch

#artificialintelligence

After Canada, Australia has now found that controversial facial recognition company Clearview AI broke national privacy laws when it covertly collected citizens' facial biometrics and incorporated them into its AI-powered identity-matching service, which it sells to law enforcement agencies and others. In a statement today, Australia's information commissioner and privacy commissioner, Angelene Falk, said Clearview AI's facial recognition tool breached the country's Privacy Act 1988. In what looks like a major win for privacy down under, the regulator has ordered Clearview to stop collecting facial biometrics and biometric templates from Australians, and to destroy all existing images and templates that it holds. The Office of the Australian Information Commissioner (OAIC) undertook a joint investigation into Clearview with the UK data protection agency, the Information Commissioner's Office (ICO). However, the UK regulator has yet to announce any conclusions. In a separate statement today, which reads slightly flustered, the ICO said it is "considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws".


UK and Australian data regulators to probe Clearview AI - Techerati

#artificialintelligence

Clearview's facial recognition software, popular with law enforcement, uses images scraped from the internet and social media. Data regulators in the UK and Australia have announced a joint investigation into the practices of facial recognition app Clearview AI. The UK Information Commissioner's Office (ICO) and the Office of the Australian Information Commissioner (OAIC) said they are looking into the firm's use of data "scraped" from the internet. Clearview AI uses its facial recognition software to help law enforcement match photos of unknown people to other images online, drawing on the company's database of photos taken from publicly accessible social media platforms, including Facebook, and other websites. The controversial system has raised questions about privacy and consent to data gathering but has been used by a number of law enforcement agencies in the US. A report by BuzzFeed earlier this year also claimed that a number of UK law enforcement agencies had registered with Clearview, including the Metropolitan Police and the National Crime Agency, as well as regional police forces.


UK watchdog fines facial recognition firm £7.5m over image collection

The Guardian

The UK's data watchdog has fined a facial recognition company £7.5m for collecting images of people from social media platforms and the web to add to a global database. The Information Commissioner's Office (ICO) also ordered US-based Clearview AI to delete the data of UK residents from its systems. Clearview AI has collected more than 20bn images of people's faces from Facebook, other social media companies and from scouring the web. John Edwards, the UK information commissioner, said Clearview's business model was unacceptable. "Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20bn images," he said. "The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service."