Collaborating Authors


Why AI Needs a Social License


If business wants to use AI at scale, adhering to the technical guidelines for responsible AI development isn't enough. It must obtain society's explicit approval to deploy the technology.

Six years ago, in March 2016, Microsoft Corporation launched an experimental AI-based chatbot, TayTweets, whose Twitter handle was @TayandYou. Tay, short for "thinking about you," mimicked a 19-year-old American girl online so that the digital giant could showcase the speed at which AI can learn when it interacts with human beings. Living up to its description as "AI with zero chill," Tay started off replying cheekily to Twitter users and turning photographs into memes. Some topics were off limits, though; Microsoft had trained Tay not to comment on societal issues such as Black Lives Matter. Soon enough, a group of Twitter users targeted Tay with a barrage of tweets about controversial issues such as the Holocaust and Gamergate. Exploiting its repeat-after-me capability, they goaded the chatbot into racist and sexually charged responses. Realizing that Tay was reacting like IBM's Watson, which started using profanity after perusing the online Urban Dictionary, Microsoft quickly deleted the first inflammatory tweets. Less than 16 hours and more than 100,000 tweets later, the digital giant shut down Tay.

It's about time facial recognition tech firms took a look in the mirror

John Naughton

The Guardian

Last week, the UK Information Commissioner's Office (ICO) slapped a £7.5m fine on a smallish tech company called Clearview AI for "using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition". The ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems. Since Clearview AI is not exactly a household name, some background might be helpful. It's a US outfit that has "scraped" (ie digitally collected) more than 20bn images of people's faces from publicly available information on the internet and social media platforms all over the world to create an online database. The company uses this database to provide a service that allows customers to upload an image of a person to its app, which is then checked for a match against all the images in the database.
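The matching service the ICO describes is, at its core, a nearest-neighbour search: faces are reduced to numeric embeddings, and an uploaded image is compared against every embedding in the database. As a hedged illustration only (this is not Clearview's actual implementation; the embeddings, labels, and threshold below are invented for the sketch), the comparison step might look like this:

```python
import numpy as np

def best_match(query, db_embeddings, db_labels, threshold=0.8):
    """Return the label of the most similar database face, or None.

    query: 1-D unit-normalised face embedding.
    db_embeddings: 2-D array, one unit-normalised embedding per row.
    threshold: minimum cosine similarity to count as a match (invented value).
    """
    sims = db_embeddings @ query          # cosine similarity, since vectors are unit-length
    idx = int(np.argmax(sims))            # index of the closest database face
    return db_labels[idx] if sims[idx] >= threshold else None

# Toy "database" of three faces as 2-D unit vectors (real embeddings
# would have hundreds of dimensions, produced by a face-encoding model).
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]])
labels = ["profile_a", "profile_b", "profile_c"]

print(best_match(np.array([0.6, 0.8]), db, labels))  # prints profile_c
```

The privacy issue the regulators raise is independent of this mechanics: however the search is done, every row in `db_embeddings` was derived from a photo collected without its subject's consent.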

An AI Company Scraped Billions of Photos For Facial Recognition. Regulators Can't Stop It

TIME - Tech

More and more privacy watchdogs around the world are standing up to Clearview AI, a U.S. company that has collected billions of photos from the internet without people's permission. The company, which uses those photos for its facial recognition software, was fined £7.5 million ($9.4 million) by a U.K. regulator on May 26. The U.K. Information Commissioner's Office (ICO) said the firm had broken data protection law. The company denies breaking the law. But the case reveals how nations have struggled to regulate artificial intelligence across borders. Facial recognition tools require huge quantities of data.

Clearview AI Says It's Bringing Facial Recognition to Schools


Clearview AI, the surveillance firm notorious for harvesting some 20 billion face scans from public social media, said it may bring its technology to schools and other private businesses. In an interview with Reuters on Tuesday, the company revealed it is working with a U.S. company that sells visitor management systems to schools. That reveal came around the same time as the horrific shooting at Robb Elementary School in Uvalde, Texas, that left 19 children and two teachers dead. Though Clearview wouldn't provide more details about the education-linked companies to Gizmodo, other facial recognition competitors have spent years trying to bring the tech to schools, with varying levels of success and pushback. New York state even moved to ban facial recognition in schools two years ago.

UK fines Clearview just under $10M for privacy breaches

TechCrunch

The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company, Clearview AI -- announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice, ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet; and telling it to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images by scraping data off the public internet, such as from social media services, to create an online database that it uses to power an AI-based identity-matching service which it sells to entities such as law enforcement. The problem is Clearview has never asked individuals whether it can use their selfies for that. And in many countries it has been found in breach of privacy laws.

Clearview AI settles with ACLU on face-recog database sales


Clearview AI has promised to stop selling its controversial face-recognizing tech to most private US companies in a settlement proposed this week with the ACLU. The New York-based startup made headlines in 2020 for scraping billions of images from people's public social media pages. These photographs were used to build a facial-recognition database system, allowing the biz to link future snaps of people to their past and current online profiles. Clearview's software can, for example, be shown a face from a CCTV still, and if it recognizes the person from its database, it can return not only the URLs of that person's social networking pages, where the images were first seen, but also copies of those images, allowing the person to be identified, traced, and contacted. That same year, the ACLU sued the biz, claiming it violated Illinois' Biometric Information Privacy Act (BIPA), which requires organizations operating in the state to obtain explicit consent from residents before collecting their biometric data, including photographs.

Ukraine is scanning faces of dead Russians, then contacting the mothers

Washington Post - Technology News

Facial recognition search results are imperfect, however, and some experts worry that a misidentification could lead to the wrong person being told their child had died -- or, in the frenzy of war, could mean the difference between life and death. Privacy International, a digital-rights group, has called on Clearview to end its work in Ukraine, saying "the potential consequences would be too atrocious to be tolerated -- such as mistaking civilians for soldiers."

How is artificial intelligence aiding war in Ukraine?


In early March, Clearview AI founder Hoan Ton-That began reaching out to people who could help him present his technology to the Ukrainian government. Clearview holds a huge database of photos scraped from social media platforms such as Facebook, Instagram and Twitter, and its facial recognition technology is already used extensively in the U.S. According to Ton-That, the Russian invasion presented another application for the technology. "We saw images of people who were prisoners of war and fleeing situations," Mr Ton-That says. "It got us thinking that this could potentially be a technology that could be useful for identification, and also verification." Last month, Ukrainian defence authorities began using facial recognition technology.

Ukraine begins using facial recognition to identify Russians and dead


Ukraine's defense ministry on Saturday began using Clearview AI's facial recognition technology, the company's chief executive said after the US startup offered to uncover Russian assailants, combat misinformation, and identify the dead. Ukraine is receiving free access to Clearview AI's powerful search engine for faces, letting authorities potentially vet people of interest at checkpoints, among other uses, added Lee Wolosky, an adviser to Clearview and former diplomat under US presidents Barack Obama and Joe Biden. The plans started forming after Russia invaded Ukraine and Clearview Chief Executive Hoan Ton-That sent a letter to Kyiv offering assistance, according to a copy seen by Reuters. Clearview said it had not offered the technology to Russia, which calls its actions in Ukraine a "special operation". Ukraine's Ministry of Defense did not reply to requests for comment.

Why did Clearview provide free facial recognition technology to the Ukrainian government?


This news, which emerged in the third week of Russia's invasion of Ukraine, was reported exclusively to Reuters by Clearview and is expected to serve as a positive use case for AI-based facial recognition in extreme situations such as war. It is true that facial recognition technology has been viewed negatively because of concerns about abuse and invasion of privacy. CIOs who have held back from adopting the technology out of concern for public opinion should therefore pay attention to Clearview and the Ukraine crisis. It remains to be seen how the Ukrainian government will use the technology to achieve its stated goals. In general, training an AI on facial images requires obtaining permission from the people depicted.