The AI 'gold rush' in Washington

#artificialintelligence

AI's little guys are getting into the Washington influence game. Tech giants and defense contractors have long dominated AI lobbying, seeking both money and favorable rules. And while the largest companies still dominate the debate, pending legislation in Congress aimed at getting ahead of China on innovation, along with proposed bills on data privacy, has caused a spike in lobbying by smaller AI players. A number of companies focused on robotics, drones and self-driving cars are setting up their own Washington influence machines, positioning themselves to shape the future of AI policy to their liking. A lot of it is spurred by one major piece of legislation: the Bipartisan Innovation Act, commonly referred to as USICA, an acronym derived from its previous title that reflects its goal of out-innovating China.


It's about time facial recognition tech firms took a look in the mirror | John Naughton

The Guardian

Last week, the UK Information Commissioner's Office (ICO) slapped a £7.5m fine on a smallish tech company called Clearview AI for "using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition". The ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems. Since Clearview AI is not exactly a household name, some background might be helpful. It's a US outfit that has "scraped" (i.e. digitally collected) more than 20bn images of people's faces from publicly available information on the internet and social media platforms all over the world to create an online database. The company uses this database to provide a service that allows customers to upload an image of a person to its app, which is then checked for a match against all the images in the database.


An AI Company Scraped Billions of Photos For Facial Recognition. Regulators Can't Stop It

TIME - Tech

More and more privacy watchdogs around the world are standing up to Clearview AI, a U.S. company that has collected billions of photos from the internet without people's permission. The company, which uses those photos for its facial recognition software, was fined £7.5 million ($9.4 million) by a U.K. regulator on May 26. The U.K. Information Commissioner's Office (ICO) said the firm, Clearview AI, had broken data protection law. The company denies breaking the law. But the case reveals how nations have struggled to regulate artificial intelligence across borders. Facial recognition tools require huge quantities of data.


Engadget Podcast: Clearview AI's facial recognition is on the ropes

Engadget

Will a few fines put a stop to the company's facial recognition search platform? The hosts also discuss how Clearview's troubles relate to countries becoming more restrictive about data in general. Finally, they pour one out for Seth Green's lost Bored Ape – RIP NFT! Listen above, or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcasts, the Morning After and Engadget News!


The Download: Clearview AI's hefty fine, and countries' monkeypox preparation

MIT Technology Review

Controversial facial recognition company Clearview AI has been fined more than $10 million by the UK's data protection watchdog for collecting the faces of UK citizens from the web and social media. The firm was also ordered to delete all of the data it holds on UK citizens. The move by the UK's Information Commissioner's Office (ICO) is the latest in a string of high-profile fines against the company as data protection authorities around the world eye tougher restrictions on its practices. Clearview AI boasts one of the world's largest databases of people's faces, with 20 billion images that it has scraped off the internet from publicly available sources, such as social media, without their subjects' consent. Clients such as police departments pay for access to the database to look for matches.


Clearview AI ordered to delete personal data of UK residents

AIHub

The Information Commissioner's Office (ICO) in the UK has fined facial recognition database company Clearview AI Inc more than £7.5m for using images of people that were scraped from websites and social media. Clearview AI collected the data to create a global online database, with one of the resulting applications being facial recognition. Clearview AI has also been ordered to delete the personal data it holds on UK residents, and to stop obtaining and using the personal data that is publicly available on the internet. The ICO is the UK's independent authority set up to uphold information rights in the public interest. This action follows an investigation that it carried out in conjunction with the Office of the Australian Information Commissioner (OAIC).


The walls are closing in on Clearview AI

MIT Technology Review

The ICO found that Clearview AI had been in breach of data protection laws, collected personal data without people's consent, and asked for additional information, such as photos, when people asked if they were in the database. It found that this may have "acted as a disincentive" for people who objected to their data being scraped. "The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable," said John Edwards, the UK's information commissioner, in a statement. Clearview AI boasts one of the world's largest databases of people's faces, with 20 billion images that it has scraped off the internet from publicly available sources, such as social media, without their consent.


Clearview AI fined in UK for illegally storing images

#artificialintelligence

The Information Commissioner's Office (ICO) of the UK fined United States-based facial recognition firm Clearview AI £7.5 million for illegally storing images. The controversial company has been facing such issues for some time, and this new development is yet another blow for Clearview AI. The fine was imposed for the company's practice of collecting and storing images of citizens from social media platforms without their consent, a practice that several countries regard as a severe threat to privacy. Moreover, the ICO has also ordered the US firm to remove UK citizens' data from its systems. According to the ICO, Clearview AI has stored more than 20 billion pictures of people in its database.


UK fines Clearview AI £7.5M for scraping citizens' data

#artificialintelligence

Clearview AI has been fined £7.5 million by the UK's privacy watchdog for scraping the online data of citizens without their explicit consent. The controversial facial recognition provider has scraped billions of images of people across the web for its system. Understandably, it caught the attention of regulators and rights groups from around the world. In November 2021, the UK's Information Commissioner's Office (ICO) imposed a potential fine of just over £17 million on Clearview AI. Today's announcement suggests Clearview AI got off relatively lightly.


UK privacy watchdog fines Clearview AI £7.5m and orders UK data to be deleted

ZDNet

The Information Commissioner's Office (ICO) has fined controversial facial recognition company Clearview AI £7.5 million ($9.4 million) for breaching UK data protection laws and has issued an enforcement notice ordering the company to stop obtaining and using data of UK residents, and to delete the data from its systems. In its finding, the ICO detailed how Clearview AI failed to inform people in the UK that it was collecting their images from the web and social media to create a global online database that could be used for facial recognition; failed to have a lawful reason for collecting people's information; failed to have a process in place to stop the data being retained indefinitely; and failed to meet data protection standards required for biometric data under the General Data Protection Regulation. The ICO also found the company asked for additional personal information, including photos, when asked by members of the public if they were on its database.