Civil Rights & Constitutional Law


UK's controversial use of face recognition to be challenged in court

New Scientist

The first legal battle in the UK over police use of face recognition technology will begin today. Ed Bridges has crowdfunded a legal action against South Wales Police, claiming that the use of the technology on him was an unlawful violation of privacy. He will also argue that it breaches data protection and equality laws during a three-day hearing at the Cardiff Civil Justice and Family Centre. Face recognition technology maps faces in a crowd, then compares the results with a "watch list" of images that can include suspects, missing people and persons of interest. Police forces that have trialled the technology hope it can help tackle crime, but campaigners argue it breaches privacy and civil liberties.


Police Are Feeding Celebrity Photos into Facial Recognition Software to Solve Crimes

#artificialintelligence

Police departments across the nation are generating leads and making arrests by feeding celebrity photos, CGI renderings, and manipulated images into facial recognition software. Often unbeknownst to the public, law enforcement is identifying suspects based on "all manner of 'probe photos,' photos of unknown individuals submitted for search against a police or driver license database," according to a study published on Thursday by the Georgetown Law Center on Privacy and Technology. The new research comes on the heels of a landmark privacy vote on Tuesday in San Francisco, which is now the first US city to ban the use of facial recognition technology by police and government agencies. A recent groundswell of opposition has led to the passage of legislation that aims to protect marginalized communities from surveillance technology. These systems "threaten to fundamentally change the nature of our public spaces," said Clare Garvie, author of the study and senior associate at the Georgetown Law Center on Privacy and Technology.


Britain Has More Surveillance Cameras Per Person Than Any Country Except China. That's a Massive Risk to Our Free Society

TIME - Tech

How would you feel being watched, tracked and identified by facial recognition cameras everywhere you go? Facial recognition cameras are now creeping onto the streets of Britain and the U.S., yet most people aren't even aware. As we walk around, our faces could be scanned and subjected to a digital police line-up we don't even know about. There are over 6 million surveillance cameras in the U.K. – more per citizen than in any other country in the world except China. In the U.K., biometric photos of people whose faces match those of criminals are taken and stored – even if the match is incorrect. As director of the U.K. civil liberties group Big Brother Watch, I have been investigating the U.K. police's "trials" of live facial recognition surveillance for several years.


San Francisco Approves Ban On Government's Use Of Facial Recognition Technology

NPR Technology

In this Oct. 31 photo, a man has his face painted to represent efforts to defeat facial recognition. It was during a protest at Amazon headquarters over the company's facial recognition system. San Francisco has become the first U.S. city to ban the use of facial recognition technology by police and city agencies.


Will Artificial Intelligence Help Improve Prisons?

#artificialintelligence

Artificial intelligence–connected sensors, tracking wristbands, and data analytics: we've seen this type of tech pop up in smart homes, cars, classrooms, and workplaces. And now these networked systems are showing up in a new frontier: prisons. Specifically, China and Hong Kong have recently announced that their governments are rolling out new artificial intelligence (AI) technology aimed at monitoring inmates in some prisons every minute of every day. In Hong Kong, the government is testing Fitbit-like devices to monitor individuals' locations and activities, including their heart rates, at all times. Some prisons will also start using networked video surveillance systems programmed to identify abnormal behavior, such as self-harm or violence against others.


Teaching AI, Ethics, Law and Policy

arXiv.org Artificial Intelligence

Cyberspace and the development of new technologies, especially intelligent systems that use artificial intelligence, present enormous challenges to computer professionals, data scientists, managers and policy makers. There is a need to address professional responsibility and ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.


Countering the Negative Image of Women in Computing

Communications of the ACM

Despite increased knowledge about gender (in)equality,7,27,38 women in STEM disciplines are still portrayed in stereotypical ways in the popular media. We reviewed academic research, along with mainstream media quotes and images, for depictions of women in STEM and women in computing/IT. We found that their personality and identity formation continues to be influenced by the personas and stereotypes associated with role images seen in the media. This, in turn, can affect women's underrepresentation and career participation, as well as their prospects for advancement in computing fields. The computer science Degree Hub15 published its 2014 list of the 30 most influential living computer scientists, weighing leadership, applicability, awards, and recognition as selection criteria. The list included only one woman, Sophie Wilson, a British computer scientist best known for designing the Acorn Micro-Computer, the first computer sold by Acorn Computers Ltd., in 1978. An elected fellow of the prestigious Royal Society, Wilson is today the Director of IC Design at Broadcom Inc. in Cambridge, U.K., and is listed as number 30 of the 30 on the list.


Microsoft denied police facial recognition tech over human rights concerns

#artificialintelligence

Microsoft has said it turned down a request from law enforcement in California to use its facial recognition technology in police body cameras and cars, reports Reuters. Speaking at an event at Stanford University, Microsoft president Brad Smith said the company was concerned that the technology would disproportionately affect women and minorities. Past research has shown that because facial recognition technology is trained primarily on white and male faces, it has higher error rates for other individuals. "Anytime they pulled anyone over, they wanted to run a face scan," said Smith of the unnamed law enforcement agency. "We said this technology is not your answer."


The problem with AI? Study says it's too white and male, calls for more women, minorities

USATODAY

The ACLU and other groups have urged Amazon to halt sales of facial recognition technology to law enforcement agencies. Lending tools charge higher interest rates to Hispanics and African Americans. Job-hunting tools favor men. Negative emotions are more likely to be attributed to black men's faces than to white men's. Computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.


Untold History of AI: Algorithmic Bias Was Born in the 1980s

IEEE Spectrum Robotics Channel

The history of AI is often told as the story of machines getting smarter over time. What's lost is the human element in the narrative: how intelligent machines are designed, trained, and powered by human minds and bodies. In this six-part series, we explore that human history of AI – how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are. In the 1970s, Dr. Geoffrey Franglen of St. George's Hospital Medical School in London began writing an algorithm to screen student applications for admission.