Microsoft has said it turned down a request from law enforcement in California to use its facial recognition technology in police body cameras and cars, reports Reuters. Speaking at an event at Stanford University, Microsoft president Brad Smith said the company was concerned that the technology would disproportionately affect women and minorities. Past research has shown that because facial recognition technology is trained primarily on white and male faces, it has higher error rates for other individuals. "Anytime they pulled anyone over, they wanted to run a face scan," said Smith of the unnamed law enforcement agency. "We said this technology is not your answer."
The ACLU and other groups have urged Amazon to stop selling facial recognition technology to law enforcement agencies. And the problem extends well beyond face recognition: lending tools have charged higher interest rates to Hispanic and African American borrowers; job-hunting tools favor men; negative emotions are more likely to be assigned to black men's faces than to white men's; and computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.
The history of AI is often told as the story of machines getting smarter over time. What's lost is the human element in that narrative: how intelligent machines are designed, trained, and powered by human minds and bodies. In this six-part series, we explore that human history of AI -- how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are. In the 1970s, Dr. Geoffrey Franglen of St. George's Hospital Medical School in London began writing an algorithm to screen student applications for admission.
The Chinese government is using facial-recognition software to "track and control" a predominantly Muslim minority group, according to a disturbing new report from The New York Times. The Chinese government has reportedly integrated artificial intelligence into its security cameras to identify the Uighurs and appears to be using the information to monitor the persecuted group. The report, based on the accounts of whistleblowers familiar with the systems and a review of databases used by the government and law enforcement, suggests the authoritarian country has opened up a new frontier in the use of A.I. for racist social control--and raises the discomfiting possibility that other governments could adopt similar practices. Two people familiar with the matter told the Times that police in the Chinese city of Sanmenxia screened whether residents were Uighurs 500,000 times in a single month. Documents provided to the paper reportedly show demand for the technology is ballooning: more than 20 departments in 16 provinces sought access to the camera system, in one case writing that it "should support facial recognition to identify Uighur/non-Uighur attributes."
Most of the time, machine learning does not touch on particularly sensitive social, moral, or ethical issues. Someone gives us a data set and asks us to predict house prices from given attributes, classify pictures into different categories, or teach a computer the best way to play PAC-MAN. But what do we do when we are asked to base predictions on attributes that are protected under anti-discrimination laws? How do we ensure that we do not embed racist, sexist, or other biases into our algorithms, whether explicitly or implicitly? It may not surprise you that there have been several important lawsuits in the United States on this topic, perhaps the most notable involving Northpointe's controversial COMPAS -- Correctional Offender Management Profiling for Alternative Sanctions -- software, which predicts the risk that a defendant will commit another crime. The proprietary algorithm considers some of the answers from a 137-item questionnaire to predict this risk.
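One common way to audit a risk-scoring tool like this is to compare error rates across demographic groups -- in particular, how often each group is flagged high-risk despite not reoffending. The sketch below is illustrative only: the data are entirely invented and the `false_positive_rate` helper is hypothetical, not part of any real scoring system.

```python
# Hypothetical fairness audit: compare false positive rates of a
# binary risk score across two demographic groups. All data invented.

def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend (outcome 0) but were
    nevertheless flagged high-risk (prediction 1)."""
    flagged_negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flagged_negatives:
        return 0.0
    return sum(flagged_negatives) / len(flagged_negatives)

# Toy records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", 1, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1),
]

for group in ("A", "B"):
    preds = [p for g, p, o in records if g == group]
    outs = [o for g, p, o in records if g == group]
    print(group, round(false_positive_rate(preds, outs), 2))
```

With these made-up numbers, group A's false positive rate is far higher than group B's even though the score never sees group membership directly -- the kind of disparity at the center of the COMPAS debate.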
BEIJING - People.cn, the online unit of China's influential People's Daily, is boosting its ranks of human internet censors, backed by artificial intelligence, to help firms vet content on apps and adverts, capitalizing on its unmatched Communist Party lineage. Demand for the online censoring services provided by the Shanghai-listed People.cn has soared since last year, after China tightened its already strict online censorship rules. As a unit of the People's Daily -- the ruling Communist Party's mouthpiece -- it is seen by clients as the go-to online censor. Investors concur, lifting shares in People.cn. "The biggest advantage of People.cn is its precise grasp of policy trends," said An Fushuang, an independent analyst based in Shenzhen.
The Montreal Institute for Genocide and Human Rights Studies (MIGS) is organizing the Human Rights and Artificial Intelligence Forum on April 5. The event will take place at Concordia's 4TH SPACE, an innovative and immersive venue for state-of-the-art installations, where leading experts from around the world will gather to discuss this emerging technology's implications for human rights. MIGS has convened thought leaders and practitioners with the goal of understanding how new technologies are disrupting global affairs. MIGS has worked with Global Affairs Canada and Tech Against Terrorism to explore how artificial intelligence (AI) can counter online extremism and how non-state actors might use AI for nefarious purposes. MIGS has also presented work on AI at the Hague Digital Diplomacy Camp organized by the Dutch Foreign Ministry.
Machine learning algorithms process vast quantities of data and spot correlations, trends, and anomalies at levels far beyond even the brightest human mind. But just as human intelligence relies on accurate information, so too do machines. Algorithms need training data to learn from. That training data is created, selected, collated, and annotated by humans. And therein lies the problem.
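One concrete consequence: because humans select and annotate the training data, a data set can silently over-represent some groups before any model is trained. A minimal sketch of the kind of audit this suggests, using entirely invented records and labels, is simply to count how each group is represented:

```python
from collections import Counter

# Invented annotated training examples: (group, label).
# In a real pipeline these would come from human annotators.
training_data = [
    ("light", "face"), ("light", "face"), ("light", "face"),
    ("light", "face"), ("light", "not_face"),
    ("dark", "face"), ("dark", "not_face"),
]

# How many examples does each group contribute to the training set?
group_counts = Counter(group for group, _ in training_data)

total = sum(group_counts.values())
for group, count in group_counts.items():
    print(group, f"{count / total:.0%}")
```

In this toy sample one group supplies the large majority of examples, so a model trained on it would see far fewer instances of the other group -- the mechanism behind the higher error rates on darker-skinned and female faces mentioned throughout this digest.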
Walking around without being constantly identified by AI could soon be a thing of the past, legal experts have warned. The use of facial recognition software could signal the end of civil liberties if the law doesn't change as quickly as advancements in technology, they say. Software already being trialled around the world could soon be adopted by companies and governments to constantly track you wherever you go. Shop owners are already using facial recognition to track shoplifters and could soon be sharing information across a broad network of databases, potentially globally. Previous research has found that the technology isn't always accurate, misidentifying women and people with darker skin tones at higher rates.
US Senators Roy Blunt and Brian Schatz want to protect people's facial recognition data and make it much harder to sell, now that such information is treated as currency. The lawmakers have introduced the bipartisan Commercial Facial Recognition Privacy Act of 2019, which prohibits companies from collecting and resharing face data for identifying or tracking purposes without people's consent. The Senators have conjured up the bill because, while facial recognition has been used for security and surveillance for decades, it's "now being developed at increasing rates for commercial applications." They argue that many people aren't aware that the technology is being used in public spaces and that companies can collect identifiable info to share or sell to third parties -- similar to how carriers have been selling location data to bounty hunters for years. In addition to prohibiting companies from redistributing or disseminating data, the bill would also require them to notify customers whenever facial recognition is in use.