Some of the biggest companies in the world are pulling their facial recognition technologies from law enforcement agencies across the country. Amazon (AMZN), IBM (IBM), and Microsoft (MSFT) have said they will either put a moratorium on police use of their technology or exit the field entirely, citing human rights concerns. The technology, which can be used to identify suspects in things like surveillance footage, has faced widespread criticism after studies found it can be biased against women and people of color. And according to at least one expert, some form of regulation needs to be put in place if these technologies are going to be used by law enforcement agencies. "If these technologies were to be deployed, I think you cannot do it in the absence of legislation," Siddharth Garg, assistant professor of computer science and engineering at NYU Tandon School of Engineering, told Yahoo Finance.
When it comes to your Messenger inbox, Facebook thinks that only you and Facebook should have access to the theoretically private conversations contained within. To that end, reports Engadget, Facebook is testing new ways to secure its app. Specifically, on an unspecified number of iOS devices, the social-media giant has added a second layer of protection to Messenger's inbox. If enabled, users will need to either re-enter their passcode or engage Touch ID or Face ID before they can read all their juicy messages. The idea behind the change is simple: if someone gets access to your unlocked device, this security feature provides an additional barrier that prevents the bad actor from reading your Messenger messages.
Facebook is testing a new feature for Messenger that allows users to better protect their messages from prying eyes. When enabled, users will need to authenticate their identity using Face ID, Touch ID, or their passcode before they can view their inbox, even if their phone is already unlocked. Users can also set how long after leaving the app before re-authentication is required. The company is currently testing the new security feature among a small percentage of Messenger's iOS users, though it could eventually be available more widely, including on Android. "We want to give people more choices and controls to protect their private messages, and recently, we began testing a feature that lets you unlock the Messenger app using your device's settings," a Facebook spokesperson said in a statement.
Researchers have been harvesting selfies of people wearing protective masks from social platforms like Instagram in an effort to improve facial recognition software. An investigation by CNET uncovered thousands of selfies depicting people wearing masks in public data sets found online. The images had been harvested directly from Instagram. The sets are being used to help train facial recognition software to identify people who are wearing protective face masks as a safeguard against spreading COVID-19. Masks covering a significant portion of a person's face prevent even some of the most advanced facial recognition software, such as Apple's Face ID, from accurately detecting a person's features.
In recent months, Clearview AI has been attacked from all sides by lawmakers, tech giants, and privacy advocates for its business practices, which include scraping public images of people from sites like LinkedIn, Venmo, Facebook, and YouTube. Clearview AI's systems then allow clients to search for people in its database using these scraped images. While several law enforcement agencies are known to use Clearview AI's services, the breach of its entire client list may embarrass other clients of the company that wish to remain unknown. As of now, however, it looks like Clearview AI's client list hasn't been made public -- at least not yet. Clearview AI disclosed the breach in an email to clients, saying an intruder "gained unauthorized access" to the client list.
"Scraping people's information violates our policies, which is why we've demanded that Clearview stop accessing or using information from Facebook or Instagram," a Facebook spokesperson said in an email to Fast Company. The previously little-known company drew national attention last month after an article by New York Times reporter Kashmir Hill revealed that the company claimed to have scraped billions of photos from services including Facebook, YouTube, and Venmo to match against people of interest to law enforcement. Twitter, YouTube parent Google, and Venmo have also reportedly told the startup to stop accessing data from their sites, saying it violates their policies. Whether they can legally enforce those rules may be uncertain: The Ninth Circuit Court of Appeals ruled in September that a company scraping LinkedIn in violation of the social site's policies likely didn't violate the Computer Fraud and Abuse Act, a key federal anti-hacking law. Clearview didn't immediately respond to an inquiry from Fast Company.
The European Commission is in consultation with EU data protection authorities following the news that US technology firm Clearview AI has scraped more than three billion facial images from social media sites including YouTube, Facebook, and Twitter, without obtaining the permission of users. It has also transpired, following an investigation by BuzzFeed News, that the company wants to expand its service to the European market, with nine European countries including Italy, Greece, and the Netherlands as potential partners. Meanwhile, EURACTIV has been informed by a US official that Clearview AI is not a member of the 2016 EU-US Privacy Shield agreement, which obliges American companies to protect EU citizens' personal data in line with EU standards and consumer rights. Clearview AI has not as yet disclosed whether any of the images have been harvested from EU citizens. If this were the case, the software may violate the EU's General Data Protection Regulation, Article 4(14) of which covers the processing of biometric data.
Hoan Ton-That, CEO of creepy facial recognition company Clearview AI, made the bold claim on Tuesday that his company has the right to publicly posted photos on Twitter, wielding the First Amendment as his reason. Clearview AI faced heat after it was discovered the company had mined billions of publicly accessible images from Facebook, and Ton-That's comments prove the company isn't backing down. The founder of a facial recognition company described as both "groundbreaking" and "a nightmare" is speaking out. In an interview with CBS This Morning, Ton-That was asked about Twitter's cease-and-desist order requesting that his company stop scraping its data and delete everything Clearview AI has collected from the platform. In response, the facial recognition CEO claimed his company has a First Amendment right to the data.
Clearview AI CEO Hoan Ton-That tells CBS correspondent Errol Barnett that the First Amendment allows his company to scrape the internet for people's photos. Google and YouTube have sent a cease-and-desist letter to Clearview AI, the facial recognition company that has been scraping billions of photos off the internet and using them to help more than 600 police departments identify people within seconds. That follows a similar action by Twitter, which sent Clearview AI a cease-and-desist letter over its data scraping in January. The letter from Google-owned YouTube was first seen by CBS News. (Note: CBS News and CNET share the same parent company, ViacomCBS.) The CEO of Clearview AI, a controversial and secretive facial recognition startup, is defending his company's massive database of searchable faces, saying in an interview on CBS This Morning Wednesday that it's his First Amendment right to collect public photos.