UK privacy watchdog fines Clearview AI £7.5m and orders UK data to be deleted

ZDNet

The Information Commissioner's Office (ICO) has fined controversial facial recognition company Clearview AI £7.5 million ($9.4 million) for breaching UK data protection laws and has issued an enforcement notice ordering the company to stop obtaining and using the data of UK residents, and to delete that data from its systems. In its finding, the ICO detailed how Clearview AI failed to inform people in the UK that it was collecting their images from the web and social media to create a global online database that could be used for facial recognition; failed to have a lawful reason for collecting people's information; failed to have a process in place to stop the data being retained indefinitely; and failed to meet the data protection standards required for biometric data under the General Data Protection Regulation. The ICO also found that the company asked for additional personal information, including photos, when members of the public asked whether they were in its database.


UK fines Clearview just under $10M for privacy breaches – TechCrunch

#artificialintelligence

The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company, Clearview AI -- announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice, ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and telling it to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images by scraping data from the public internet, including social media services, to create an online database that powers an AI-based identity-matching service it sells to entities such as law enforcement. The problem is that Clearview has never asked individuals whether it can use their selfies for that. And in many countries it has been found in breach of privacy laws.
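To make the mechanics concrete, here is a minimal sketch of how an identity-matching service of this kind works at its core: a query face is converted to an embedding vector and compared against a gallery of embeddings built from scraped photos. The embedding step, the gallery contents, and the scale below are stand-ins for illustration, not Clearview's actual system or API.

    # Hedged sketch: nearest-neighbor face matching over precomputed embeddings.
    # Real systems derive these vectors from photos with a face recognition
    # model; here the gallery is random stand-in data at a tiny fraction of
    # real-world scale.
    import numpy as np

    def top_matches(query, gallery, k=5):
        """Return indices of the k gallery embeddings most similar to the query."""
        q = query / np.linalg.norm(query)
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        sims = g @ q  # cosine similarity after normalization
        return np.argsort(sims)[::-1][:k]

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(20_000, 512))               # stand-in for billions of faces
    query = gallery[42] + rng.normal(scale=0.1, size=512)  # noisy view of one identity
    print(top_matches(query, gallery))                     # index 42 should rank first

At real scale, the exhaustive comparison above would be replaced by an approximate nearest-neighbor index, but the matching principle is the same.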


Army Testing Facial Recognition in Child-Care Centers

#artificialintelligence

Live video feeds of daycare centers are common, but the Army wants to take its kid-monitoring capabilities to the next level. Under a new pilot program being rolled out at a Fort Jackson, S.C., child-care center, the military is looking for service providers to layer commercially available facial recognition and artificial intelligence (AI) over existing closed-circuit television video feeds to improve childcare and cut costs. The request for bids on the project, called Installations of the Future: Technology Pilot for Child Development Center, explained that the CCTV feeds aren't constantly monitored by humans and that the pilot program will explore whether AI could fill in the gaps. "Video analytic software provides the added security of continual computer monitoring used as an addition to the human CCTV monitoring," the request for bids said. "Moreover, it provides instant notifications to staff on a wide range of important AR 190-3 monitoring parameters as events occur."
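As a rough illustration of the continual monitoring the request describes, the sketch below scans a CCTV stream and fires a notification when a simple activity rule trips. The feed URL, the rule, and the alert mechanism are assumptions for illustration; the pilot's actual software and AR 190-3 parameters are not public.

    # Hedged sketch: continual video analytics over a CCTV feed using OpenCV.
    # Background subtraction stands in for whatever detectors a vendor would use.
    import cv2

    cap = cv2.VideoCapture("rtsp://cctv.example/room1")  # hypothetical feed URL
    subtractor = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break  # stream ended or dropped
        mask = subtractor.apply(frame)
        # Crude event rule: a large share of foreground pixels means high activity.
        if cv2.countNonZero(mask) > 0.2 * mask.size:
            print("ALERT: unusual activity; notifying staff")  # stand-in notification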


Texas sues Meta, saying it misused facial recognition data

NPR Technology

Texas sued Facebook parent company Meta on Monday for exploiting the biometric data of millions of people in the state, including both those who used the platform and those who did not, in the latest round of litigation between governments and the company over privacy. The company, according to a suit filed by state Attorney General Ken Paxton, violated state privacy laws and should be liable for billions of dollars in damages. The suit involves Facebook's "tag suggestions" feature, which the company ended last year, and which used facial recognition to encourage users to link photos to a friend's profile.


2021 Year in Review: Biometric and AI Litigation

#artificialintelligence

Read on for CPW's highlights of the year's most significant events concerning biometric litigation, as well as our predictions for what 2022 may bring. One of the most critical consumer privacy statutes for biometric litigation has been Illinois' Biometric Information Privacy Act ("BIPA"), which regulates the collection, processing, disclosure, and security of the biometric information of Illinois residents. BIPA protects the "biometric information" of Illinois residents, which is any information based on "biometric identifiers" that identifies a specific person, regardless of how it is captured, converted, stored, or shared. Biometric identifiers are "a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry." BIPA has become one of the most frequent bases for class actions because, unlike many other data privacy statutes, it includes a private right of action with liquidated statutory damages.


The Problem of Zombie Datasets: A Framework for Deprecating Datasets

arXiv.org Artificial Intelligence

What happens when a machine learning dataset is deprecated for legal, ethical, or technical reasons, but continues to be widely used? In this paper, we examine the public afterlives of several prominent deprecated or redacted datasets, including ImageNet, 80 Million Tiny Images, MS-Celeb-1M, Duke MTMC, Brainwash, and HRT Transgender, in order to inform a framework for more consistent, ethical, and accountable dataset deprecation. Building on prior research, we find that there is a lack of consistency, transparency, and centralized sourcing of information on the deprecation of datasets, and as such, these datasets and their derivatives continue to be cited in papers and circulate online. These datasets that never die -- which we term "zombie datasets" -- continue to inform the design of production-level systems, causing technical, legal, and ethical challenges; in so doing, they risk perpetuating the harms that prompted their supposed withdrawal, including concerns around bias, discrimination, and privacy. Based on this analysis, we propose a Dataset Deprecation Framework that includes considerations of risk, mitigation of impact, appeal mechanisms, timeline, post-deprecation protocol, and publication checks that can be adapted and implemented by the machine learning community. Drawing on work on datasheets and checklists, we further offer two sample dataset deprecation sheets and propose a centralized repository that tracks which datasets have been deprecated and could be incorporated into the publication protocols of venues like NeurIPS.
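The paper's proposed "publication checks" against a centralized repository could look something like the sketch below: before citing or training on a dataset, query a deprecation registry and fail loudly if the dataset is listed. The registry URL and JSON schema here are assumptions for illustration; the paper proposes such a repository but does not host one.

    # Hedged sketch: a pre-publication/pre-training deprecation check against a
    # hypothetical centralized registry of deprecated datasets.
    import json
    from urllib.request import urlopen

    REGISTRY_URL = "https://deprecations.example.org/datasets.json"  # hypothetical

    def check_deprecated(dataset_name):
        """Raise if the registry lists the dataset as deprecated."""
        with urlopen(REGISTRY_URL) as resp:
            # Assumed shape: {name: {"status": "...", "reason": "..."}}
            registry = json.load(resp)
        entry = registry.get(dataset_name)
        if entry and entry.get("status") == "deprecated":
            raise RuntimeError(
                f"{dataset_name} is deprecated: {entry.get('reason', 'no reason given')}"
            )

    check_deprecated("MS-Celeb-1M")  # would raise if the registry lists it

A check like this, run automatically by conference submission systems, is one way the framework's post-deprecation protocol could stop zombie datasets from continuing to circulate in the literature.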


New York City's new biometrics privacy law takes effect – TechCrunch

#artificialintelligence

A new biometrics privacy ordinance has taken effect across New York City, putting new limits on what businesses can do with the biometric data they collect on their customers. From Friday, businesses that collect biometric information -- most commonly in the form of facial recognition and fingerprints -- are required to conspicuously post notices and signs at their doors explaining to customers how their data will be collected. The ordinance applies to a wide range of businesses -- retailers, stores, restaurants, and theaters, to name a few -- which are also barred from selling, sharing, or otherwise profiting from the biometric information that they collect. The move will give New Yorkers, and the city's millions of visitors each year, greater protections over how their biometric data is collected and used, while also serving to dissuade businesses from using technology that critics say is discriminatory and often doesn't work. Businesses can face stiff penalties for violating the law, but can escape fines if they fix the violation quickly.


Clearview AI Raises Disquiet at Privacy Regulators

WSJ.com: WSJD - Technology

The data protection authority in Hamburg, Germany, for instance, last week issued a preliminary order saying New York-based Clearview must delete biometric data related to Matthias Marx, a 32-year-old doctoral student. The regulator ordered the company to delete the biometric hashes, or mathematical representations, used to identify photos of Mr. Marx's face, and gave it until Feb. 12 to comply. Not all photos, however, are considered sensitive biometric data under the European Union's 2018 General Data Protection Regulation. The action in Germany is only one of many investigations, lawsuits and regulatory reprimands that Clearview is facing in jurisdictions around the world. On Wednesday, Canadian privacy authorities called the company's practices a form of "mass identification and surveillance" that violated the country's privacy laws.


Canadian Regulators Say Clearview Violated Privacy Laws

WSJ.com: WSJD - Technology

Canadian regulators on Wednesday said facial-recognition-software company Clearview AI Inc. violated federal and provincial privacy laws in the country by offering its services there, though they acknowledged having limited enforcement powers in penalizing the New York-based company and others like it. Regulators said Clearview collected "highly sensitive biometric information without the knowledge or consent of individuals," affecting millions of Canadians. Clearview has a database of about 3 billion photos it scraped from the internet, allowing it to search for matches using facial recognition algorithms. The practices violated federal and provincial laws, regulators said, including in Quebec, where express consent is required to use biometric data. Officials with four Canadian regulatory agencies said they completed an investigation into Clearview that began last February, finding that the company had provided its services through 48 accounts for law enforcement agencies and other organizations across the country, including a paid subscription held by the Royal Canadian Mounted Police.


Facebook Will Pay $650 Million to Illinois Residents - Legal Reader

#artificialintelligence

Facebook allegedly violated Illinois state law by using consumers' facial features to improve its photo-tagging software. Nearly one and a half million Illinois residents have filed claims for a share of a $650 million privacy settlement offered by Facebook. According to NBC Chicago, the law firm that brought the lawsuit against the social media company said that 1.42 million Illinois residents have already filed claims. Eligible claimants could receive awards ranging between $200 and $400. The lawsuit, says NBC, alleged that Facebook broke Illinois' "strict biometric privacy law."