Regulator looking at use of facial recognition at King's Cross site

The Guardian

The UK's privacy regulator said it was studying the use of controversial facial recognition technology by property companies amid concerns that its use in CCTV systems at the King's Cross development in central London may not be legal. The Information Commissioner's Office warned businesses using the surveillance technology that they needed to demonstrate its use was "strictly necessary and proportionate" and had a clear basis in law. The data protection regulator added it was "currently looking at the use of facial recognition technology" by the private sector and warned it would "consider taking action where we find non-compliance with the law". On Monday, the owner of the King's Cross site confirmed that facial recognition software was used around the 67-acre, 50-building site "in the interest of public safety and to ensure that everyone who visits has the best possible experience". It is one of the first landowners or property companies in Britain to acknowledge deploying the software, described by a human rights pressure group as "authoritarian", partly because it captures images of people without their consent.


Details emerge of King's Cross facial-ID tech

#artificialintelligence

King's Cross Central's developers said they wanted facial-recognition software to spot people on the site who had previously committed an offence there. The detail emerged in a letter one of the site's managers sent to the London mayor on 14 August, after Sadiq Khan had sought reassurance that the use of facial recognition on the site was legal. Two days earlier, the developer Argent had indicated it was using the technology to "ensure public safety". On Monday, it said it had scrapped work on new uses of the technology.


Facial recognition technology scrapped at King's Cross site

The Guardian

Facial recognition technology will not be deployed at the King's Cross development in the future, following a backlash prompted by the site owner's admission last month that the software had been used in its CCTV systems. The developer behind the prestigious central London site said the surveillance software had been used between May 2016 and March 2018 in two cameras on a busy pedestrian street running through its heart. It said it had abandoned plans for a wider deployment across the 67-acre, 50-building site and had "no plans to reintroduce any form of facial recognition technology at the King's Cross Estate". The site became embroiled in the debate about the ethics of facial recognition three weeks ago, after its owner released a short statement saying its cameras "use a number of detection and tracking methods, including facial recognition". That made it one of the first landowners to acknowledge it was deploying the software, described by human rights groups as authoritarian, partly because it captures and analyses images of people without their consent.


These glasses trick facial recognition software into thinking you're someone else

#artificialintelligence

Facial recognition software has become increasingly common in recent years. Facebook uses it to tag your photos; the FBI has a massive facial recognition database spanning hundreds of millions of images; and in New York, there are even plans to add smart facial-recognition surveillance cameras to every bridge and tunnel. But while these systems seem inescapable, the technology that underpins them is far from infallible. In fact, it can be beaten with a pair of psychedelic-looking glasses that cost just $0.22. Researchers from Carnegie Mellon University have shown that specially designed spectacle frames can fool even state-of-the-art facial recognition software.
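
The article does not describe the mechanics, but the idea behind such attacks is a targeted adversarial perturbation confined to an eyeglass-frame-shaped region of the image. The Python sketch below illustrates that idea under stated assumptions: model (any differentiable face classifier), face (a 1x3xHxW image tensor in [0, 1]), glasses_mask and target_class are hypothetical placeholders, and the real CMU attack additionally optimised the pattern for printability and robustness, which is omitted here.

import torch
import torch.nn.functional as F

# Illustrative sketch only, not the CMU implementation: optimise a perturbation
# that is applied only inside a glasses-shaped mask so the classifier predicts
# a chosen target identity. model, face, glasses_mask and target_class are
# hypothetical placeholders supplied by the caller.
def eyeglass_attack(model, face, glasses_mask, target_class, steps=200, lr=0.01):
    delta = torch.zeros_like(face, requires_grad=True)  # perturbation to learn
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        # Perturb only the frame region and keep pixel values valid.
        adv = torch.clamp(face + delta * glasses_mask, 0.0, 1.0)
        loss = F.cross_entropy(model(adv), target)  # push towards target identity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(face + delta.detach() * glasses_mask, 0.0, 1.0)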


Facial recognition software is biased towards white men, researcher finds

#artificialintelligence

New research out of MIT's Media Lab underscores what other experts have reported or at least suspected before: facial recognition technology is subject to biases based on the data sets provided and the conditions in which algorithms are created. Joy Buolamwini, a researcher at the MIT Media Lab, recently built a dataset of 1,270 faces, using the faces of politicians selected on the basis of their countries' rankings for gender parity (in other words, having a significant number of women in public office). Buolamwini then tested the accuracy of three facial recognition systems: those made by Microsoft, IBM, and Megvii of China. The results, which were originally reported in The New York Times, showed inaccuracies in gender identification that depended on a person's skin color. Gender was misidentified for less than one percent of lighter-skinned males, for up to seven percent of lighter-skinned females, for up to 12 percent of darker-skinned males, and for up to 35 percent of darker-skinned females.
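
The figures quoted above boil down to computing a misclassification rate separately for each demographic group. The minimal Python sketch below shows that calculation; the records list and group labels are hypothetical stand-ins, not the study's actual 1,270-face parliamentarian dataset or real classifier outputs.

from collections import defaultdict

# Hypothetical example records: (true_gender, predicted_gender, group).
records = [
    ("female", "male", "darker-skinned female"),
    ("male", "male", "lighter-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("male", "female", "darker-skinned male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for true_gender, predicted_gender, group in records:
    totals[group] += 1
    if predicted_gender != true_gender:
        errors[group] += 1

# Report the per-group gender misclassification rate.
for group, n in totals.items():
    print(f"{group}: {100.0 * errors[group] / n:.1f}% misclassified")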