On Tuesday, in an 8-1 tally, the San Francisco Board of Supervisors voted to ban the use of facial recognition software by city departments, including police. Supporters of the ban cited racial bias found in audits of facial recognition software from companies like Amazon and Microsoft, as well as the dystopian surveillance now unfolding in China. At the core of the debate over regulating facial recognition is whether to impose a temporary moratorium until police and governments adopt policies and standards, or to ban the technology permanently. Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a "uniquely dangerous and oppressive technology."
Ocado and Google DeepMind executives are among a cohort of experts who have been called to advise the Government on how to boost the use of artificial intelligence in Britain. Paul Clarke, chief technology officer of the e-commerce company, and DeepMind co-founder Mustafa Suleyman will join the new lineup of the Government's AI Council, an advisory group set up as part of a push to boost investment in the technology. Mastercard vice chairman Ann Cairns, Amazon machine learning director Neil Lawrence and Microsoft research lab director Chris Bishop are also among those who gained seats on the new council. The executives are expected to promote the use of AI by businesses in the UK and advise the Government on future public investments in the industry. As part of this effort, the Government last year set aside £3m for AI projects aimed at boosting productivity in financial and legal services.
Police departments across the nation are generating leads and making arrests by feeding celebrity photos, CGI renderings, and manipulated images into facial recognition software. Often unbeknownst to the public, law enforcement is identifying suspects based on "all manner of 'probe photos,' photos of unknown individuals submitted for search against a police or driver license database," a study published on Thursday by the Georgetown Law Center on Privacy and Technology reported. The new research comes on the heels of a landmark privacy vote on Tuesday in San Francisco, which is now the first US city to ban the use of facial recognition technology by police and government agencies. A recent groundswell of opposition has led to the passage of legislation that aims to protect marginalized communities from spy technology. These systems "threaten to fundamentally change the nature of our public spaces," said Clare Garvie, author of the study and senior associate at the Georgetown Law Center on Privacy and Technology.
Here's what you need to know in business news. San Francisco's Board of Supervisors voted on Tuesday to prohibit the use of facial recognition technology within city limits. It's a somewhat symbolic move: the police there don't currently use the technology, and the places where it is in use, such as seaports and airports, are under federal jurisdiction and therefore unaffected by the new regulation. The major television networks tried to sell their fall advertising slots in an annual pageant known as the upfronts. In a week of star-studded presentations, skits and boozy mingling, representatives of major advertisers flocked to New York to see what the networks have in store.
How would you feel being watched, tracked and identified by facial recognition cameras everywhere you go? Facial recognition cameras are now creeping onto the streets of Britain and the U.S., yet most people aren't even aware. As we walk around, our faces could be scanned and subjected to a digital police lineup we don't even know about. There are over 6 million surveillance cameras in the U.K., more per citizen than any other country in the world except China. In the U.K., police take and store biometric photos of people whose faces are flagged as matching criminals, even when the match is incorrect. As director of the U.K. civil liberties group Big Brother Watch, I have been investigating the U.K. police's "trials" of live facial recognition surveillance for several years.
Want to feel really depressed about the likely impact of climate change? AI can help with that. A new research paper shows how machine-learning trickery can highlight the ravages of climate change by revealing how a property is likely to be harmed by rising sea levels, fiercer storms, and other disasters that warming is expected to worsen. Changes afoot: The researchers used an increasingly popular technique to automatically conjure up images of submerged and damaged properties. As they write in their paper: "The eventual goal of our project is to enable individuals to make more informed choices about their climate future by creating a more visceral understanding of the effects of climate change."
A new report details what privacy experts are calling a dangerous misapplication of facial recognition that uses photos of celebrities and digitally doctored images to comb for criminals. According to a detailed investigation by Georgetown Law's Center on Privacy and Technology, one New York Police Department detective attempted to identify a suspect by scanning the face of actor Woody Harrelson. After footage from a security camera failed to produce results in a facial recognition scan, the detective used Google images of what he concluded to be the suspect's celebrity doppelganger, Woody Harrelson, to run a test. The system turned up a match, the report says, and the man was eventually arrested on charges of petit larceny.
San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies, reflecting a growing backlash against a technology that's creeping into airports, motor vehicle departments, stores, stadiums and home security cameras. Government agencies around the U.S. have used the technology for more than a decade to scan databases for suspects and prevent identity fraud. But recent advances in artificial intelligence have created more sophisticated computer vision tools, making it easier for police to pinpoint a missing child or protester in a moving crowd or for retailers to analyze a shopper's facial expressions as they peruse store shelves. Efforts to restrict its use are getting pushback from law enforcement groups and the tech industry, though it's far from a united front. Microsoft, while opposed to an outright ban, has urged lawmakers to set limits on the technology, warning that leaving it unchecked could enable an oppressive dystopia reminiscent of George Orwell's novel "1984."
Artificial intelligence (AI) algorithms are generally hungry for data, and that hunger is only accelerating. A new breed of AI approaches, called lifelong learning machines, is being designed to pull in data continually and indefinitely. But this is already happening with other AI approaches, albeit with human intervention. A steady stream of data is the fuel for coveted results. But with the ever-increasing importance of data, the stakes of data bias are growing ever higher.