Clearview AI slammed for breaching Australians' privacy on numerous fronts

ZDNet

Australia's Information Commissioner has found that Clearview AI breached Australia's privacy laws on numerous fronts, after a joint investigation uncovered that the company's facial recognition tool collected Australians' sensitive information without consent and by unfair means. The investigation, conducted by the Office of the Australian Information Commissioner (OAIC) and the UK Information Commissioner's Office (ICO), found that Clearview AI's facial recognition tool indiscriminately scraped biometric information from the web and has collected data on at least 3 billion people. The OAIC also found that some Australian residents at police agencies that trialled the tool searched for and identified images of themselves, as well as images of unknown Australian persons of interest, in Clearview AI's database. Weighing these factors together, Australia's Information Commissioner Angelene Falk concluded that Clearview AI breached Australia's privacy laws by collecting Australians' sensitive information without consent and by unfair means. In her determination [PDF], Falk explained that consent had not been provided: even though facial images of affected Australians were already available online, Clearview AI's purpose in collecting this biometric data was ambiguous, so individuals could not have meaningfully consented to it.


Clearview AI fined £17 million for breaching UK data protection laws

Engadget

The UK's Information Commissioner's Office (ICO) has provisionally fined the facial recognition company Clearview AI £17 million ($22.6 million) for breaching UK data protection laws. It said that Clearview allegedly failed to inform citizens that it was collecting billions of their photos, among other transgressions. It has also (again, provisionally) ordered the company to stop further processing of UK residents' personal data. The regulator said that Clearview apparently failed to process people's data "in a way that they likely expect or that is fair." It also alleged that the company lacked a lawful basis for collecting the data, didn't meet GDPR standards for biometric data, had no process to prevent data being retained indefinitely, and failed to inform UK residents what was happening to their data.


UK fines Clearview AI £7.5M for scraping citizens' data

#artificialintelligence

Clearview AI has been fined £7.5 million by the UK's privacy watchdog for scraping the online data of citizens without their explicit consent. The controversial facial recognition provider has scraped billions of images of people from across the web for its system. Understandably, it caught the attention of regulators and rights groups around the world. In November 2021, the UK's Information Commissioner's Office (ICO) announced its provisional intent to fine Clearview AI just over £17 million. Today's announcement suggests Clearview AI got off relatively lightly.


UK fines Clearview just under $10M for privacy breaches – TechCrunch

#artificialintelligence

The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company Clearview AI -- announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice, ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and telling it to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images by scraping data from the public internet, such as social media services, to power an AI-based identity-matching service that it sells to entities such as law enforcement agencies. The problem is that Clearview has never asked individuals whether it can use their selfies for that. And in many countries it has been found in breach of privacy laws.


Clearview AI Raises Disquiet at Privacy Regulators

WSJ.com: WSJD - Technology

The data protection authority in Hamburg, Germany, for instance, last week issued a preliminary order saying New York-based Clearview must delete biometric data related to Matthias Marx, a 32-year-old doctoral student. The regulator ordered the company to delete biometric hashes, or bits of code, used to identify photos of Mr. Marx's face, and gave it until Feb. 12 to comply. Not all photos, however, are considered sensitive biometric data under the European Union's 2018 General Data Protection Regulation. The action in Germany is only one of many investigations, lawsuits and regulatory reprimands that Clearview is facing in jurisdictions around the world. On Wednesday, Canadian privacy authorities called the company's practices a form of "mass identification and surveillance" that violated the country's privacy laws.