The Information Commissioner's Office (ICO) has fined controversial facial recognition company Clearview AI £7.5 million ($9.4 million) for breaching UK data protection laws, and has issued an enforcement notice ordering the company to stop obtaining and using the data of UK residents and to delete that data from its systems. In its finding, the ICO detailed how Clearview AI failed to inform people in the UK that it was collecting their images from the web and social media to create a global online database that could be used for facial recognition; failed to have a lawful reason for collecting people's information; failed to have a process in place to stop the data being retained indefinitely; and failed to meet the data protection standards required for biometric data under the General Data Protection Regulation. The ICO also found that the company asked for additional personal information, including photos, when members of the public asked whether they were on its database.
Clearview AI has been fined £7.5 million by the UK's privacy watchdog for scraping the online data of citizens without their explicit consent. The controversial facial recognition provider has scraped billions of images of people across the web for its system. Understandably, it caught the attention of regulators and rights groups from around the world. In November 2021, the UK's Information Commissioner's Office (ICO) imposed a potential fine of just over £17 million on Clearview AI. Today's announcement suggests Clearview AI got off relatively lightly.
After Canada, Australia has now found that controversial facial recognition company Clearview AI broke national privacy laws when it covertly collected citizens' facial biometrics and incorporated them into its AI-powered identity-matching service -- which it sells to law enforcement agencies and others. In a statement today, Australia's information commissioner and privacy commissioner, Angelene Falk, said Clearview AI's facial recognition tool breached the country's Privacy Act 1988. In what looks like a major win for privacy down under, the regulator has ordered Clearview to stop collecting facial biometrics and biometric templates from Australians, and to destroy all existing images and templates that it holds. The Office of the Australian Information Commissioner (OAIC) undertook a joint investigation into Clearview with the UK data protection agency, the Information Commissioner's Office (ICO). However, the UK regulator has yet to announce any conclusions. In a separate statement today -- which possibly reads slightly flustered -- the ICO said it is "considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws".
More and more privacy watchdogs around the world are standing up to Clearview AI, a U.S. company that has collected billions of photos from the internet without people's permission. The company, which uses those photos for its facial recognition software, was fined £7.5 million ($9.4 million) by a U.K. regulator on May 26. The U.K. Information Commissioner's Office (ICO) said the firm had broken data protection law. The company denies breaking the law. But the case reveals how nations have struggled to regulate artificial intelligence across borders. Facial recognition tools require huge quantities of data.
The UK's Information Commissioner's Office (ICO) has provisionally fined the facial recognition company Clearview AI £17 million ($22.6 million) for breaching UK data protection laws. It said that Clearview allegedly failed to inform citizens that it was collecting billions of their photos, among other transgressions. It has also (again, provisionally) ordered the company to stop further processing of residents' personal data. The regulator said that Clearview apparently failed to process people's data "in a way that they likely expect or that is fair." It also alleged that the company failed to have a lawful reason to collect the data, didn't meet GDPR standards for biometric data, failed to have a process preventing data from being retained indefinitely, and failed to inform UK residents what was happening to their data.