Facial recognition cameras in Southern Co-Op stores are 'adding customers to watch-lists'

Daily Mail - Science & tech

Co-Op is facing a legal challenge to its 'Orwellian' and 'unlawful' use of facial recognition cameras. Privacy rights group Big Brother Watch claimed supermarket staff could add people to a secret 'blacklist' without them knowing. But Co-Op says it is using the Facewatch system in shops with a history of crime, so it can protect its staff. Big Brother Watch said the independent grocery chain had installed the surveillance technology in 35 stores across Portsmouth, Bournemouth, Bristol, Brighton and Hove, Chichester, Southampton and London. It claimed staff could add individuals to a watch-list where their biometric information is kept for up to two years.


UN Human Rights Committee expected to question Ireland's plans for facial recognition

#artificialintelligence

Irish officials may be questioned over the country's plans to use facial recognition technology for surveillance during a session with the United Nations Human Rights Committee in Geneva this week. The Irish Council for Civil Liberties (ICCL) has submitted a Shadow Report on what it identifies as gaps between the International Covenant on Civil and Political Rights and the reality in Ireland, along with recommendations to rectify them. The group also faults Irish authorities for failing to uphold GDPR, thus allowing surveillance to remain business as usual for digital companies worldwide. The ICCL report, an alternative to the report submitted by the Irish state, is endorsed by 37 organizations and identifies gaps across areas such as the right to a fair trial and freedom from torture, as well as three breaches involving police surveillance and six across data protection. The UN Human Rights Committee meets every four years, and countries are invited in turn to defend their human rights provision.


UK privacy watchdog fines Clearview AI £7.5m and orders UK data to be deleted

ZDNet

The Information Commissioner's Office (ICO) has fined controversial facial recognition company Clearview AI £7.5 million ($9.4 million) for breaching UK data protection laws and has issued an enforcement notice ordering the company to stop obtaining and using data of UK residents, and to delete the data from its systems. In its finding, the ICO detailed how Clearview AI failed to inform people in the UK that it was collecting their images from the web and social media to create a global online database that could be used for facial recognition; failed to have a lawful reason for collecting people's information; failed to have a process in place to stop the data being retained indefinitely; and failed to meet data protection standards required for biometric data under the General Data Protection Regulation. The ICO also found the company asked for additional personal information, including photos, when asked by members of the public if they were on its database.


UK fines Clearview just under $10M for privacy breaches – TechCrunch

#artificialintelligence

The UK's data protection watchdog has confirmed a penalty for the controversial facial recognition company, Clearview AI -- announcing a fine of just over £7.5 million today for a string of breaches of local privacy laws. The watchdog has also issued an enforcement notice, ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet; and telling it to delete the information of UK residents from its systems. The US company has amassed a database of 20 billion facial images by scraping data off the public internet, such as from social media services, to create an online database that it uses to power an AI-based identity-matching service which it sells to entities such as law enforcement. The problem is Clearview has never asked individuals whether it can use their selfies for that. And in many countries it has been found in breach of privacy laws.


Ex-Apple employee takes Face ID privacy complaint to Europe – TechCrunch

#artificialintelligence

Privacy watchdogs in Europe are considering a complaint against Apple made by a former employee, Ashley Gjøvik, who alleges the company fired her after she raised a number of concerns, internally and publicly, including over the safety of the workplace. Gjøvik, a former senior engineering program manager at Apple, was fired from the company last September after she had also raised concerns about her employer's approach towards staff privacy, some of which were covered by the Verge in a report in August 2021. At the time, Gjøvik had been placed on administrative leave by Apple after raising concerns about sexism in the workplace and a hostile and unsafe working environment, which Apple said it was investigating. She subsequently filed complaints against Apple with the US National Labor Relations Board. Those earlier complaints connect to the privacy complaint she has now sent to international oversight bodies. Gjøvik says she wants scrutiny of Apple's privacy practices after the company formally told the US government its reasons for firing her -- and, as she puts it, "felt comfortable admitting they'd fire employees for protesting invasions of privacy". She accuses Apple of using her concerns over its approach to staff privacy as a pretext to terminate her for reporting wider safety issues and organizing with other employees over labor concerns. A spokesperson for the ICO told TechCrunch: "We are aware of this matter and we will assess the information provided."


French regulator tells Clearview AI to delete its facial recognition data

#artificialintelligence

France's foremost privacy regulator has ordered Clearview AI to delete all its data relating to French citizens, as first reported by TechCrunch. In its announcement, the French agency CNIL argued that Clearview had violated the GDPR in collecting the data and violated various other data access rights in its processing and storage. As a result, CNIL is calling on Clearview to purge the data from its systems or face escalating fines as laid out by European privacy law. Clearview rose to prominence in 2020 after a New York Times investigation highlighted the company's massive data collection efforts. In particular, the company offered the unique ability to identify subjects by name, drawing on data scraped from public-facing social networks.


UK's data privacy watchdog may fine Clearview AI £17m

#artificialintelligence

Clearview AI, the controversial startup known for scraping billions of selfies from people's public social network profiles to train a facial-recognition system, may be fined just over £17m ($22.6m) by the UK's Information Commissioner's Office (ICO). The watchdog on Monday publicly mulled punishing Clearview following an investigation launched last year with the Australian Information Commissioner. The ICO believes the US biz broke Britain's data-protection rules by, among other things, failing to have a "lawful reason" for collecting people's personal photos and info, and not being transparent about how the data was used and stored for its facial-recognition applications. Clearview harvests people's photos – 10 billion or more, it's thought – from their public social media profiles, and then builds a face-matching system so that if, say, the police upload a picture of someone from a CCTV still, the software can locate that person in its database and provide officers the corresponding name and online profiles. The images in Clearview AI Inc's database are likely to include the data of a substantial number of people from the UK and may have been gathered without people's knowledge from publicly available information online, including social media platforms.


London's Met Police is expanding its use of facial recognition technology

#artificialintelligence

The UK's biggest police force is set to significantly expand its facial recognition capabilities before the end of this year. New technology will enable London's Metropolitan Police to process historic images from CCTV feeds, social media and other sources in a bid to track down suspects. But critics warn the technology has "eye-watering possibilities for abuse" and may entrench discriminatory policing. In a little-publicised decision made at the end of August, the Mayor of London's office approved a proposal allowing the Met to boost its surveillance technology. The proposal says that in the coming months the Met will start using Retrospective Facial Recognition (RFR), as part of a £3 million, four-year deal with Japanese tech firm NEC Corporation.


Live facial recognition technology creates 'supercharged CCTV' that could be used recklessly, Information Commissioner warns

The Independent - Tech

Plans to allow CCTV cameras to recognise people's faces in real time could be used "inappropriately, excessively or even recklessly", the Information Commissioner has warned. In recent years, authorities have been rolling out new kinds of facial recognition, with the promise that it would be able to spot dangerous people in real time. But privacy activists and others have warned that the technology is a vast invasion of privacy, could be used to create watchlists of people, and might falsely accuse people as a result of racial and other biases and unfair practices. There is still time for authorities to change their minds and avoid the vast dangers that the technology could produce, the head of the watchdog warned. "We're at a crossroads right now, we in the UK and other countries around the world see the deployment of live facial recognition and I think it's still at an early enough stage that it's not too late to put the genie back in the bottle," Commissioner Elizabeth Denham told the PA news agency.


Europe's AI rules open door to mass use of facial recognition, critics warn

#artificialintelligence

The EU is facing a backlash over new AI rules that allow for limited use of facial recognition by authorities -- with opponents warning the carveouts could usher in a new age of biometric surveillance. A coalition of digital rights and consumer protection groups from across the globe, including Latin America, Africa and Asia, is calling for a global ban on biometric recognition technologies that enable mass and discriminatory surveillance by both governments and corporations. In an open letter, 170 signatories in 55 countries argue that the use of technologies like facial recognition in public places goes against human rights and civil liberties. "It shows that organizations, groups, people, activists, technologists around the world who are concerned with human rights, agree to this call," said Daniel Leufer of U.S. digital rights group Access Now, which co-authored the letter. The use of facial recognition technology is becoming widespread.