Clearview
A Controversial Facial-Recognition Company Quietly Expands Into Latin America
For the past three months, a small encrypted group chat of Latin American officials who investigate online child-exploitation cases has been lighting up with reports of raids, arrests, and rescued minors in half a dozen countries. The successes are the result of a recent trial of a facial-recognition tool given to a group of Latin American law-enforcement officials, investigators, and prosecutors by the American company Clearview AI. During a five-day operation in Ecuador in early March, participants from 10 countries including Argentina, Brazil, Colombia, the Dominican Republic, El Salvador, and Peru were given access to Clearview's technology, which allows them to upload images and run them through a database of billions of public photos scraped from the Internet. "Normally it takes at least several days for a child to be identified, and sometimes there are victims that have not been identified for years," says Guillermo Galarza Abizaid, the vice president in charge of partnerships and law enforcement at the Virginia-based nonprofit International Centre for Missing and Exploited Children (ICMEC), which organized the event. The group used the facial-recognition tool to analyze a total of 2,198 images and 995 videos, hundreds of them from cold cases.
- North America > Central America (0.47)
- South America > Colombia (0.26)
- South America > Brazil (0.26)
- (15 more...)
Ukraine's 'Secret Weapon' Against Russia Is a Controversial U.S. Tech Company
Leonid Tymchenko spent the first month of Russia's invasion sitting in his dark government office after curfew. Unable to go home, Ukraine's Deputy Minister of Internal Affairs scrolled through Telegram, looking at thousands of videos and images of advancing Russian soldiers. When Tymchenko was offered a chance to test a new facial-recognition tool, he uploaded some of the photos to try it out. He could not believe the results. Every time Tymchenko added a photo of a Russian soldier, the software, made by the American facial-recognition company Clearview AI, seemed to come back with an exact hit, linking to pages that revealed the soldier's name, hometown, and social-media profile.
- Asia > Russia (0.87)
- Europe > Russia (0.63)
- Europe > Ukraine > Kyiv Oblast > Kyiv (0.06)
- (7 more...)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (0.89)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.49)
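The tags above pair a hierarchical category path with a confidence score in parentheses. A minimal sketch of parsing lines in that format and keeping only the confident labels (the 0.5 threshold is an arbitrary choice for illustration):

```python
import re

def parse_tag(line: str):
    """Split a line like '- Asia > Russia (0.87)' into (['Asia', 'Russia'], 0.87)."""
    m = re.match(r"-\s*(.+?)\s*\((\d+\.\d+)\)\s*$", line)
    if not m:
        raise ValueError(f"unrecognized tag line: {line!r}")
    path = [part.strip() for part in m.group(1).split(">")]
    return path, float(m.group(2))

def confident_tags(lines, threshold=0.5):
    """Return (path, score) pairs whose score meets the threshold."""
    tags = [parse_tag(line) for line in lines]
    return [(path, score) for path, score in tags if score >= threshold]

lines = [
    "- Asia > Russia (0.87)",
    "- Europe > Russia (0.63)",
    "- Europe > Ukraine > Kyiv Oblast > Kyiv (0.06)",
]
print(confident_tags(lines))
```

Running this on the Ukraine article's geo tags keeps the two Russia paths and drops the low-confidence Kyiv entry.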
Why a Social License is Needed for AI
If business wants to use AI at scale, adhering to the technical guidelines for responsible AI development isn't enough. It must obtain society's explicit approval to deploy the technology. Six years ago, in March 2016, Microsoft Corporation launched an experimental AI-based chatbot, TayTweets, whose Twitter handle was @TayandYou. Tay, an acronym for "thinking about you," mimicked a 19-year-old American girl online, so the digital giant could showcase the speed at which AI can learn when it interacts with human beings. Living up to its description as "AI with zero chill," Tay started off replying cheekily to Twitter users and turning photographs into memes. Some topics were off limits, though; Microsoft had trained Tay not to comment on societal issues such as Black Lives Matter. Soon enough, a group of Twitter users targeted Tay with a barrage of tweets about controversial issues such as the Holocaust and Gamergate. They goaded the chatbot into replying with racist and sexually charged responses, exploiting its repeat-after-me capability. Realizing that Tay was reacting like IBM's Watson, which started using profanity after perusing the online Urban Dictionary, Microsoft was quick to delete the first inflammatory tweets. Less than 16 hours and more than 100,000 tweets later, the digital giant shut down Tay.
- Europe > United Kingdom (0.28)
- North America > United States (0.28)
- Atlantic Ocean (0.14)
- Africa (0.14)
- Materials (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- (5 more...)
Clearview Stole My Face and the EU Can't Do Anything About It
Matthias Marx says his face has been stolen. The German activist's visage is pale and wide, topped with messy, blond hair. So far, these features have been mapped and monetized by three companies without his permission. As has happened to billions of others, his face has been turned into a search term without his consent. In 2020 Marx read about Clearview AI, a company that says it has scraped billions of photos from the internet to create a huge database of faces. By uploading a single photo, Clearview's clients, which include law enforcement agencies, can use the company's facial recognition technology to unearth other online photos featuring the same face.
France fines Clearview AI maximum possible for GDPR breaches
Clearview AI, the controversial facial recognition firm that scrapes selfies and other personal data off the Internet without consent to feed an AI-powered identity-matching service it sells to law enforcement and others, has been hit with another fine in Europe. This one comes after it failed to respond to an order last year from the CNIL, France's privacy watchdog, to stop its unlawful processing of French citizens' information and delete their data. Clearview responded to that order by, well, ghosting the regulator -- thereby adding a third GDPR breach (non-cooperation with the regulator) to its earlier tally.
- Europe > France (0.64)
- North America > United States > Illinois (0.05)
- Europe > Sweden (0.05)
- (2 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
It's about time facial recognition tech firms took a look in the mirror | John Naughton
Last week, the UK Information Commissioner's Office (ICO) slapped a £7.5m fine on a smallish tech company called Clearview AI for "using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition". The ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems. Since Clearview AI is not exactly a household name some background might be helpful. It's a US outfit that has "scraped" (ie digitally collected) more than 20bn images of people's faces from publicly available information on the internet and social media platforms all over the world to create an online database. The company uses this database to provide a service that allows customers to upload an image of a person to its app, which is then checked for a match against all the images in the database.
- North America > United States > New York (0.05)
- North America > United States > Indiana (0.05)
- Europe > United Kingdom > England > Northamptonshire (0.05)
- (2 more...)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.62)
- Information Technology > Communications > Social Media (0.59)
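The matching step Naughton describes, checking an uploaded image against every image in a database, is typically implemented as nearest-neighbour search over face-embedding vectors. An illustrative sketch under that assumption; the embeddings here are stand-in lists of numbers, not output from any real face-recognition model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query, database, threshold=0.9):
    """Return (index, score) of the most similar stored vector, or None if no
    entry clears the similarity threshold."""
    scores = [cosine_similarity(query, vec) for vec in database]
    best = max(range(len(scores)), key=scores.__getitem__)
    return (best, scores[best]) if scores[best] >= threshold else None

# Toy "database" of three embedding vectors.
db = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
print(best_match([0.9, 0.1, 0.0], db))
```

At Clearview's reported scale of 20 billion images, a linear scan like this would be replaced by an approximate nearest-neighbour index, but the core comparison is the same.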
An AI Company Scraped Billions of Photos For Facial Recognition. Regulators Can't Stop It
More and more privacy watchdogs around the world are standing up to Clearview AI, a U.S. company that has collected billions of photos from the internet without people's permission. The company, which uses those photos for its facial recognition software, was fined £7.5 million ($9.4 million) by a U.K. regulator on May 26. The U.K. Information Commissioner's Office (ICO) said the firm, Clearview AI, had broken data protection law. The company denies breaking the law. But the case reveals how nations have struggled to regulate artificial intelligence across borders. Facial recognition tools require huge quantities of data.
- Europe > Italy (0.15)
- Europe > United Kingdom (0.15)
- North America > United States > Illinois (0.06)
- Europe > France (0.05)
Clearview AI Says It's Bringing Facial Recognition to Schools
Clearview AI, the surveillance firm notoriously known for harvesting some 20 billion face scans off of public social media sites, said it may bring its technology to schools and other private businesses. In an interview with Reuters on Tuesday, the company revealed it's working with a U.S. company selling visitor management systems to schools. That reveal came around the same time as a horrific shooting at Robb Elementary School in Uvalde, Texas that tragically left 19 children and two teachers dead. Though Clearview wouldn't provide more details about the education-linked companies to Gizmodo, other facial recognition competitors have spent years trying to bring the tech to schools with varying levels of success and pushback. New York state even moved to ban facial recognition in schools two years ago.
- North America > United States > Texas > Uvalde County > Uvalde (0.25)
- North America > United States > New York (0.25)
- Information Technology > Security & Privacy (1.00)
- Education (1.00)
- Law > Civil Rights & Constitutional Law (0.74)
UK fines Clearview just under $10M for privacy breaches – TechCrunch
The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company, Clearview AI -- announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice, ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet; and telling it to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images by scraping data off the public internet, such as from social media services, to create an online database that it uses to power an AI-based identity-matching service which it sells to entities such as law enforcement. The problem is Clearview has never asked individuals whether it can use their selfies for that. And in many countries it has been found in breach of privacy laws.
- Europe > United Kingdom (1.00)
- North America > United States > Illinois (0.07)
- Oceania > Australia (0.05)
- (3 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.58)
- Information Technology > Communications > Social Media (0.38)