The Information Commissioner's Office (ICO) has fined controversial facial recognition company Clearview AI £7.5 million ($9.4 million) for breaching UK data protection laws and has issued an enforcement notice ordering the company to stop obtaining and using data of UK residents, and to delete the data from its systems. In its finding, the ICO detailed how Clearview AI failed to inform people in the UK that it was collecting their images from the web and social media to create a global online database that could be used for facial recognition; failed to have a lawful reason for collecting people's information; failed to have a process in place to stop the data being retained indefinitely; and failed to meet data protection standards required for biometric data under the General Data Protection Regulation. The ICO also found the company asked for additional personal information, including photos, when members of the public asked whether they were on its database.
The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company, Clearview AI -- announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice, ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and telling it to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images by scraping data off the public internet, such as from social media services, to create an online database that powers an AI-based identity-matching service it sells to entities such as law enforcement. The problem is that Clearview has never asked individuals for permission to use their photos this way -- and in many countries it has been found in breach of privacy laws.
France's foremost privacy regulator has ordered Clearview AI to delete all its data relating to French citizens, as first reported by TechCrunch. In its announcement, the French agency CNIL argued that Clearview had violated the GDPR in collecting the data and violated various other data access rights in its processing and storage. As a result, CNIL is calling on Clearview to purge the data from its systems or face escalating fines as laid out by European privacy law. Clearview rose to prominence in 2020 after a New York Times investigation highlighted the company's massive data collection efforts. In particular, the company offered the unique ability to identify subjects by name, drawing on data scraped from public-facing social networks.
Clearview AI, the controversial startup known for scraping billions of selfies from people's public social network profiles to train a facial-recognition system, may be fined just over £17m ($22.6m) by the UK's Information Commissioner's Office (ICO). The watchdog on Monday publicly mulled punishing Clearview following an investigation launched last year with the Australian Information Commissioner. The ICO believes the US biz broke Britain's data-protection rules by, among other things, failing to have a "lawful reason" for collecting people's personal photos and info, and not being transparent about how the data was used and stored for its facial-recognition applications. Clearview harvests people's photos – 10 billion or more, it's thought – from their public social media profiles, and then builds a face-matching system so that if, say, the police upload a picture of someone from a CCTV still, the software can locate that person in its database and provide officers the corresponding name and online profiles. The images in Clearview AI Inc's database are likely to include the data of a substantial number of people from the UK and may have been gathered without people's knowledge from publicly available information online, including social media platforms.
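The face-matching pipeline described above -- embed every scraped photo, then find the database entry closest to a probe image -- can be sketched roughly as nearest-neighbor search over embedding vectors. The sketch below is purely illustrative: the embedding dimension, database, `match` function, and similarity threshold are all invented for this example, and nothing here reflects Clearview's actual implementation.

```python
import numpy as np

# Hypothetical sketch of embedding-based face matching. In a real system,
# a learned face-embedding model maps each photo to a fixed-length vector;
# here we stand in random vectors for those embeddings.

EMB_DIM = 128  # illustrative embedding size, not Clearview's

rng = np.random.default_rng(0)

# Toy "database": one unit-normalized embedding per enrolled photo.
database = rng.normal(size=(1000, EMB_DIM))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def match(query: np.ndarray, db: np.ndarray, threshold: float = 0.6):
    """Return the index of the closest database face by cosine
    similarity, or None if nothing is similar enough."""
    q = query / np.linalg.norm(query)
    sims = db @ q                  # cosine similarity against every row
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

# A CCTV still would be run through the same embedding model; here we
# perturb a stored embedding to simulate a fresh photo of person 42.
probe = database[42] + 0.05 * rng.normal(size=EMB_DIM)
print(match(probe, database))  # → 42
```

In practice a 20-billion-row database would need an approximate nearest-neighbor index rather than the brute-force scan shown here, but the matching principle is the same.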
The data protection authority in Hamburg, Germany, for instance, last week issued a preliminary order saying New York-based Clearview must delete biometric data related to Matthias Marx, a 32-year-old doctoral student. The regulator ordered the company to delete biometric hashes, or bits of code, used to identify photos of Mr. Marx's face, and gave it till Feb. 12 to comply. Not all photos, however, are considered sensitive biometric data under the European Union's 2018 General Data Protection Regulation. The action in Germany is only one of many investigations, lawsuits and regulatory reprimands that Clearview is facing in jurisdictions around the world. On Wednesday, Canadian privacy authorities called the company's practices a form of "mass identification and surveillance" that violated the country's privacy laws.
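The "biometric hashes" the Hamburg regulator ordered deleted are compact codes derived from a face image. One common way to build such a code -- shown below only as an illustration, since the article does not describe Clearview's actual method -- is locality-sensitive hashing: project the face embedding onto random hyperplanes and keep one bit per projection, so similar faces yield hashes that differ in few bits.

```python
import numpy as np

# Illustrative sketch of a "biometric hash" via random hyperplane LSH.
# All names and sizes are assumptions for this example, not a
# description of any real vendor's scheme.

rng = np.random.default_rng(1)
planes = rng.normal(size=(64, 128))  # 64 hyperplanes over a 128-d embedding

def biometric_hash(embedding: np.ndarray) -> str:
    """Collapse an embedding to 64 bits: each bit records which side of
    one random hyperplane the vector falls on. Similar embeddings tend
    to produce hashes with small Hamming distance."""
    bits = (planes @ embedding > 0).astype(int)
    return "".join(map(str, bits))

emb = rng.normal(size=128)                  # one face's embedding
near = emb + 0.01 * rng.normal(size=128)    # slightly different photo
far = rng.normal(size=128)                  # unrelated face

h, h_near, h_far = map(biometric_hash, (emb, near, far))
hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
print(hamming(h, h_near), hamming(h, h_far))
```

Deleting such a hash removes the searchable fingerprint of a face without necessarily touching the photos it was derived from, which is why regulators distinguish the two.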
Facial recognition software has long been the subject of debate. Despite the controversy, law enforcement agencies around the world have been using such AI-powered software to catch criminals, particularly in large nations with less strict privacy laws. This use persists even though the software is often less accurate on ethnic minorities, young people, and women. One company dominating the headlines these days is Clearview AI, founded by Australian entrepreneur Hoan Ton-That. Although Clearview AI's facial recognition app is not technically groundbreaking, what it sells is plainly useful to law enforcement agencies.
The European Commission has said it intends to draw up new rules to protect citizens against misuses of artificial intelligence (AI) tech. It likened the current situation to "the Wild West" and said it would focus on "high-risk" cases. But some experts are disappointed that a white paper it published did not provide more details. A leaked draft had suggested a ban on facial recognition's use in public areas would be proposed. Industry Commissioner Thierry Breton suggested the new legislation would be comparable to the General Data Protection Regulation.
The EU's digital and competition chief has said that automated facial recognition breaches GDPR, as the technology fails to meet the regulation's requirement for consent. Margrethe Vestager, the European Commission's executive vice president for digital affairs, told reporters that "as it stands right now, GDPR would say 'don't use it', because you cannot get consent," EURACTIV revealed today. GDPR classes information on a person's facial features as biometric data, which is labeled "sensitive personal data." The use of such data is highly restricted and typically requires consent from the subject -- unless the processing falls within a narrow set of exceptions, such as being necessary for public security.
The European Union won't issue a ban on facial recognition tech, as it once proposed, the Financial Times reports. In a previous draft of a paper on artificial intelligence, the European Commission suggested a five-year moratorium on facial recognition, so that the technology's impact could be studied, noting that it can be inaccurate, used to breach privacy laws and facilitate identity fraud. In a new draft, seen by the Financial Times, that moratorium has been removed. Instead, it seems the European Commission will encourage individual member states to set their own facial recognition rules. The latest draft suggests that independent groups assess each proposed public use of the technology.
France is poised to become the first European country to use facial recognition technology to give citizens a secure digital identity -- whether they want it or not. Saying it wants to make the state more efficient, President Emmanuel Macron's government is pushing through plans to roll out an ID program, dubbed Alicem, in November, earlier than an initial Christmas target. The country's data regulator says the program breaches the European rule of consent and a privacy group is challenging it in France's highest administrative court. It took a hacker just over an hour to break into a "secure" government messaging app this year, raising concerns about the state's security standards. None of that is deterring the French interior ministry.