Civil Rights & Constitutional Law


Shanghai Subway Surveillance AI Has Database of 2 Billion Faces

#artificialintelligence

The AI algorithm, whose name can be translated as either Dragon Eye or Dragonfly Eye, was developed by the Shanghai-based tech firm Yitu. It works off China's national database, which covers all 1.3 billion residents of the country as well as 500 million more people who have entered it at some point. Dragon Eye interfaces with the database to identify the faces of individuals. Yitu chief executive and co-founder Zhu Long told the South China Morning Post (SCMP) that the purpose of the algorithm is to fight crime and make the world a safer place. "Let's say that we live in Shanghai, a city of 24 million people.


China unveils Minority Report-style AI security system

Daily Mail

A smart surveillance system that can identify criminals among a database of 2 billion faces within seconds has been revealed in China. The system connects to millions of CCTV cameras and uses artificial intelligence to pick out targets. Known as 'Dragonfly Eye', it has already been used in Shanghai to track down hundreds of wanted criminals, reports suggest. The system has been helping Shanghai's police force track down criminals in a city with more than 24 million inhabitants.


Dragon Eye Can Recognize Face Among Billions: Crime Fighter Or Big Brother?

International Business Times

A Shanghai company claims to have developed an AI that can recognize a face among at least two billion people in a matter of seconds. Yitu's AI algorithm Dragon Eye not only recognizes faces but, with a network of connected cameras, can plot the movements of the people it identifies. "Our machines can very easily recognize you among at least two billion people in a matter of seconds," says chief executive and Yitu co-founder Zhu Long, "which would have been unbelievable just three years ago." As of now, the Dragon Eye platform has around 1.8 billion photographs to work with: those logged in China's national database and those of people who have entered through its borders. Talking to the South China Morning Post, Zhu said the objective of the algorithm is to make the world a much safer place by curbing crime.
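
Neither article describes Yitu's internals, but large-scale face identification systems of this kind generally reduce each enrolled face to a fixed-length embedding vector and match a query face by nearest-neighbour search over those vectors. The sketch below illustrates that general pattern only; the gallery size, embedding dimension, threshold and data are invented for illustration and are not details of Dragon Eye.

```python
# A minimal sketch of face identification, assuming the common
# "embedding + nearest-neighbour search" design. Every number and
# name here is illustrative; none of it reflects Yitu's actual system.
import numpy as np

EMBEDDING_DIM = 512               # a common size for face embeddings
GALLERY_SIZE = 100_000            # stand-in for a national-scale database

# Pretend gallery of enrolled faces: one unit-normalised vector per person.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((GALLERY_SIZE, EMBEDDING_DIM)).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def identify(query, threshold=0.6):
    """Return the gallery index of the best match, or None below threshold."""
    query = query / np.linalg.norm(query)
    scores = gallery @ query      # cosine similarity with every enrolled face
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# A query embedding close to person 42's enrolled vector should match them.
probe = gallery[42] + 0.05 * rng.standard_normal(EMBEDDING_DIM).astype(np.float32)
print(identify(probe))            # -> 42
```

At the scale the articles describe, the exhaustive dot product would be replaced by an approximate nearest-neighbour index, which is what makes matching against billions of faces in seconds plausible.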


Doctor, border guard, policeman – artificial intelligence

#artificialintelligence

The lifts rising to Yitu Technology's headquarters have no buttons. The pass cards of the staff and visitors stepping into the elevators that service floors 23 and 25 of a newly built skyscraper in Shanghai's Hongqiao business district are read automatically – no swipe required – and each passenger is deposited at their specified floor. The only way to beat the system and alight at a different floor is to wait for someone who does have access and jump out alongside them. Or, if this were a sci-fi thriller, you'd set off the fire alarms and take the stairs while everyone else was evacuating. But even in that scenario you'd be caught: Yitu's cameras record everyone coming into the building and track them inside.


Can A.I. Be Taught to Explain Itself?

@machinelearnbot

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: "Advances in A.I. Are Used to Spot Signs of Sexuality." But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski's work "dangerous" and "junk science." Within the week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: "The Invention of A.I. 'Gaydar' Could Be the Start of Something Much Worse."


New iPhone brings face recognition (and fears) to masses

Daily Mail

Apple will let you unlock the iPhone X with your face - a move likely to bring facial recognition to the masses. But along with the rollout of the technology come concerns over how it could be used. Despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would 'normalise' the technology. This could open the door to broader use by law enforcement, marketers or others of a largely unregulated tool, creating a 'surveillance technology that is abused', experts have warned.


Apple iPhone X's FaceID Technology: What It Could Mean For Civil Liberties

International Business Times

Apple's new facial recognition software to unlock its new iPhone X has raised questions about privacy and the susceptibility of the technology to hacking attacks. The iPhone X is set to go on sale on Nov. 3, and the world waits with bated breath as Apple releases a slew of new features, including a facial scan. The new device can be unlocked with face recognition software: a user simply looks at the phone to unlock it. This convenient new technology is set to replace numeric and pattern locks and comes with a number of privacy safeguards.


Stanford professor says face-reading AI will detect IQ

Daily Mail

Stanford researcher Dr Michal Kosinski went viral last week after publishing research suggesting AI can tell whether someone is straight or gay based on photos. Dr Kosinski now claims he is working on AI software that can identify political beliefs, with preliminary results proving positive.


FaceApp removes 'Ethnicity Filters' after racism storm

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year people accused the popular photo editing app Meitu of being racist, saying it gave users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


Rise of the racist robots – how AI is learning all our worst impulses

#artificialintelligence

Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages. Lum and her co-author took PredPol – the program that suggests the likely location of future crimes based on recent crime and arrest statistics – and fed it historical drug-crime data from the city of Oakland's police department. As if that wasn't bad enough, the researchers also simulated what would happen if police had acted directly on PredPol's hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most.
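
The feedback loop Lum describes can be reproduced in a toy simulation. The sketch below is a deliberately simplified illustration of the mechanism, not PredPol's model: it assumes two districts with an identical true crime rate, a predictor that sends patrols wherever recorded crime is highest, and crime that only enters the data where police are present.

```python
# Toy simulation of the predictive-policing feedback loop described above.
# Assumptions are invented for illustration: both districts have the SAME
# underlying crime rate, but crime is only recorded where police patrol.
import random

random.seed(0)
TRUE_CRIME_RATE = 0.5            # identical in both districts
recorded = [10, 12]              # district 1 starts with slightly more records

for day in range(200):
    # The "model": patrol the district with the most recorded crime.
    patrolled = recorded.index(max(recorded))
    # Crime occurs in both districts, but only the patrolled one records it.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1

print(recorded)  # roughly [10, 110]: the small initial gap only ever widens
```

Because the patrolled district always accumulates the most recorded crime, it keeps being patrolled and the data keep 'confirming' the prediction – the same loop Lum and her co-author observed when they fed Oakland's historical drug-crime data into the program.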