"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– Andrew Blake, Introduction to Active Contours and Visual Dynamics, Visual Dynamics Group, Department of Engineering Science, University of Oxford
Consumer privacy has made big headlines in recent years, with the Facebook-Cambridge Analytica scandal, Europe's GDPR and high-profile breaches at companies like Equifax. The data of millions of consumers is at risk every day, and companies that handle that data must protect both its security and its privacy to the highest degree; this is especially true for companies that build and sell AI-enabled facial recognition solutions. As CEO of an AI-enabled software company specializing in facial recognition, I've made data security and privacy among my top priorities. Our pro-privacy stance goes beyond privacy-by-design engineering: we regularly provide our customers with education and best practices, and we have even reached out to US lawmakers, lobbying for sensible pro-privacy regulation of the technology we sell.
The ban comes after civil liberties groups highlighted what they described as faults in facial recognition algorithms, citing NIST findings that most facial recognition software is more likely to misidentify people of colour than white people. The Boston ban follows one imposed by San Francisco on the use of face recognition technology last year. It prevents any city employee from using facial recognition, or from asking a third party to use the technology on the city's behalf. Boston's police department said it had not used the technology, citing reliability fears, although the best-performing systems are reasonably accurate under typical working conditions. Critics also opposed the technology on the grounds that it might discourage citizens from exercising their right to protest.
In a letter to Congress sent on June 8th, IBM's CEO Arvind Krishna made a bold statement regarding the company's policy toward facial recognition. "IBM no longer offers general purpose IBM facial recognition or analysis software," says Krishna. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." The company has halted all facial recognition development and disapproves of any technology that could lead to racial profiling. The ethics of face recognition technology have been questioned for years, yet there has been little to no movement toward official laws barring the technology.
Detroit's police chief admitted on Monday that facial recognition technology used by the department misidentifies suspects about 96 percent of the time. It's an eye-opening admission given that the Detroit Police Department is facing criticism for arresting a man based on a bogus match from facial recognition software. Last week, the ACLU filed a complaint with the Detroit Police Department on behalf of Robert Williams, a Black man who was wrongfully arrested for stealing five watches worth $3,800 from a luxury retail store. Investigators first identified Williams by doing a facial recognition search with software from a company called DataWorks Plus. Under police questioning, Williams pointed out that the grainy surveillance footage obtained by police didn't actually look like him.
On Tuesday, a group of AI researchers, ethicists, data scientists, and social scientists published a blog post arguing that academic researchers should stop pursuing research that tries to predict the likelihood that an individual will commit a crime based on variables like crime statistics and facial scans. The post was authored by the Coalition for Critical Technology, which argued that the use of such algorithms perpetuates a cycle of prejudice against minorities. Many studies of face recognition and predictive policing algorithms find that they tend to judge minorities more harshly, which the authors attribute to inequities in the criminal justice system: the justice system produces biased data, and algorithms trained on that data propagate those biases. The coalition further argues that the very notion of "criminality" is often racialized, so research on these technologies assumes a neutrality in the algorithms that does not in fact exist.
The iPad Pro is the most expensive and most capable tablet in the lineup. It boasts a completely different design from the standard iPad or iPad Air. Instead of a Lightning port for charging, syncing and accessories, you'll find a USB-C port. The Home button is gone, replaced by Apple's Face ID facial recognition tech. And, unlike on the iPhone, you can use Face ID with the iPad in either portrait or landscape orientation.
In one second, the human eye can scan only a few photographs; computers can perform billions of calculations in the same time. With the explosion of social media, images have become the new social currency of the internet. Today, Facebook and Instagram can automatically tag a user in photos, while Google Photos can group photos by the people who appear in them using Google's own image recognition technology. Protecting digital privacy today therefore means more than stopping humans from seeing photos; it also means preventing machines from harvesting personal data from images. The frontiers of privacy protection now need to extend to machines.
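The automatic tagging and photo grouping described above typically works by mapping each detected face to a numeric embedding vector and declaring a match when two vectors are similar enough. A minimal sketch of that comparison step, using toy 4-dimensional vectors and an invented threshold (real systems use high-dimensional learned embeddings, and the names here are illustrative only):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    """Declare a face match when similarity exceeds an operating threshold."""
    return cosine_similarity(emb1, emb2) >= threshold

# Toy "embeddings" standing in for vectors a face-recognition model would produce.
alice_photo_1 = [0.9, 0.1, 0.3, 0.5]
alice_photo_2 = [0.85, 0.15, 0.35, 0.45]
bob_photo = [0.1, 0.9, 0.6, 0.2]

print(same_person(alice_photo_1, alice_photo_2))  # similar vectors -> match
print(same_person(alice_photo_1, bob_photo))      # dissimilar vectors -> no match
```

The choice of threshold is the operational crux: set it low and the system groups more photos but misidentifies more people; set it high and it misses genuine matches. This trade-off underlies the accuracy disputes discussed elsewhere in this piece.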
Janine Jackson interviewed the Center on Privacy and Technology's Clare Garvie about facial recognition rules for the June 26, 2020, episode of CounterSpin. This is a lightly edited transcript. Janine Jackson: Robert Williams, an African-American man in Detroit, was falsely arrested when an algorithm declared his face a match with security footage of a watch store robbery. Boston City Council voted this week to ban the city's use of facial recognition technology, part of an effort to move resources from law enforcement to community, but also out of concern about dangerous mistakes like that in Williams' case, along with questions about what the technology means for privacy and free speech. As more and more people go out in the streets and protest, what should we know about this powerful tool, and the rules--or lack thereof--governing its use?
Amazon may have banned police from using its facial recognition technology, but a new report shows the tech giant is providing thousands of departments with video and audio footage from Ring. The Electronic Frontier Foundation, a nonprofit that defends civil liberties, found over 1,400 agencies working with the Amazon-owned company, hundreds of which have 'deadly histories.' Data from multiple sources reveals that half of those agencies had at least one fatal encounter in the last five years and that together they are responsible for a third of fatal encounters nationwide. These departments are also involved in the deaths of Breonna Taylor, Alton Sterling, Botham Jean, Antonio Valenzuela, Michael Ramos and Sean Monterrosa.