"I don't use Facebook anymore," she said. I was leading a usability session for the design of a new mobile app when she stunned me with that statement. It was a few years back, when I was a design research lead at IDEO and we were working on a service design project for a telecommunications company. The design concept we were showing her had a feature at once innocuous and ubiquitous -- the ability to log in using Facebook. But the young woman, older than 20 and younger than 40, balked at that feature and went on to tell me why she no longer trusted the social network. This session was, of course, in the aftermath of the 2016 Presidential election, in which a man whom many regarded as a television spectacle at best and a grandiose charlatan at worst had just been elected to our highest office. As of 2020, though, our democracy remains intact.
The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized).
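The depth rule above (number of hidden layers plus one, because the output layer is also parameterized) can be sketched with a minimal feedforward pass. The layer sizes and weight values below are illustrative assumptions, not taken from any particular system.

```python
def feedforward(x, weights):
    """Multiply x by each layer's weight matrix, applying ReLU between layers."""
    for i, W in enumerate(weights):
        x = [sum(w * v for w, v in zip(row, x)) for row in W]
        if i < len(weights) - 1:          # no activation on the output layer
            x = [max(v, 0.0) for v in x]
    return x

# Two hidden layers plus one output layer -> a CAP depth of 3.
weights = [
    [[0.5, -0.2], [0.1, 0.9], [0.3, 0.3]],   # input (2) -> hidden layer 1 (3)
    [[0.2, 0.4, -0.5], [0.7, 0.1, 0.1]],     # hidden layer 1 -> hidden layer 2 (2)
    [[1.0, -1.0]],                           # hidden layer 2 -> output (1)
]
hidden_layers = len(weights) - 1   # 2 hidden layers
cap_depth = hidden_layers + 1      # 3: the output layer is parameterized too
print(cap_depth)
print(feedforward([1.0, 2.0], weights))
```

Each weight matrix is one transformation in the chain from input to output, so counting the parameterized layers directly gives the CAP depth for this feedforward case.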
Darryl Richardson was delighted when he landed a job as a "picker" at the Amazon warehouse in Bessemer, Alabama. "I thought, 'Wow, I'm going to work for Amazon, work for the richest man around,'" he said. "I thought it would be a nice facility that would treat you right." Richardson, a sturdily built 51-year-old with a short, charcoal beard, took a job at the gargantuan warehouse after the auto parts plant where he worked for nine years closed. Now he strongly supports the ambitious effort to unionize its 5,800 workers because, he says, the job is so demanding and working for Amazon has fallen far short of his expectations. Last August, five months after the warehouse opened, Richardson began pushing for a union in what is not only the first effort to organize an entire Amazon warehouse in the United States, but also the biggest private-sector union drive in the South in years. "I thought the opportunities for moving up would be better. I thought safety at the plant would be better," Richardson said. "And when it comes to letting people go for no reason – job security – I thought it would be different."
In January 2020, Robert Williams of Farmington Hills, MI, was arrested at his home by the Detroit Police Department. He was photographed, fingerprinted, had his DNA taken, and was then locked up for 30 hours. He had not committed any crime; a facial recognition system operated by the Michigan State Police had wrongly identified him as the thief in a 2018 store robbery. However, Williams looked nothing like the perpetrator captured in the surveillance video, and the case was dropped. Rewind to May 2019, when Detroit resident Michael Oliver was arrested after being identified by the very same police facial recognition unit as the person who stole a smartphone from a vehicle.
But some cities have stuck by the systems. In Detroit, where the police chief said the system was useful even though it almost never returned a perfect match without human guidance, city leaders last year approved further use of the software, saying it helped protect the public while empowering the police.
A landmark police reform law passed in December is already stumbling in the State House after lawmakers and Gov. Charlie Baker this week blew past the first major deadline in the rollout. Dorchester equality activist James Mackey accused lawmakers of getting "too comfortable in their seats" for botching the start. "People advocated, they rallied, they screamed and yelled, they marched for them to pass this and now it's about accountability," Mackey said. "Now it's up to us to hold these legislators accountable knowing that they haven't held their end of the bargain." A special legislative commission on law enforcement's use of facial recognition technology was mandated to meet by Monday.
If you have a Ring doorbell, your local police department may request your help in...surveilling protests? According to a report by the Electronic Frontier Foundation, recently obtained documents from the Los Angeles Police Department show that law enforcement officials attempted to obtain footage from Ring devices in order to identify participants in Black Lives Matter protests that occurred last summer. "The Los Angeles Police Department is requesting your help" reads the subject line of the emails sent to Ring users from "email@example.com," an official email address associated with the company that makes the home surveillance devices. The EFF, a digital rights advocacy group, received the LAPD's emails to a number of Ring users after submitting a public information request. In one example obtained by the EFF, an LAPD official received footage from a user no more than two hours after sending the email request on June 1, 2020, "the morning after one of the largest protests of last summer in Los Angeles."
In computer science, the main outlets for peer-reviewed research are not journals but conferences, where accepted papers are presented in the form of talks or posters. In June, 2019, at a large artificial-intelligence conference in Long Beach, California, called Computer Vision and Pattern Recognition, I stopped to look at a poster for a project called Speech2Face. Using machine learning, researchers had developed an algorithm that generated images of faces from recordings of speech. A neat idea, I thought, but one with unimpressive results: at best, the faces matched the speakers' sex, age, and ethnicity--attributes that a casual listener might guess. That December, I saw a similar poster at another large A.I. conference, Neural Information Processing Systems (NeurIPS), in Vancouver, Canada.
This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the ways technology is influencing how we think about speech. Last summer's anti–police brutality protests represented the largest mass demonstration effort in American history. Since then, law enforcement departments nationwide have faced intense scrutiny for how they policed these historic protests. The repeated, egregious instances of violence against journalists and protesters are well documented and have driven widespread calls for systematic reform. These calls have focused in part on surveillance, after the police used sophisticated social media data monitoring, commandeered non-city camera networks, and tried other intrusive methods to identify suspects.
In science fiction, facial recognition technology is a hallmark of a dystopian society. The truth of how it was created, and how it's used today, is just as freaky. In a new study, researchers conduct a historical survey of over 100 data sets used to train facial recognition systems, compiled over the last 43 years. Researchers Deborah Raji of Mozilla and Genevieve Fried of AI Now published the study on arXiv.org, Cornell University's free distribution service.