Racial bias


Like any other data tech, 'garbage in, garbage out' hurts artificial intelligence

#artificialintelligence

Artificial intelligence and machine learning are leapfrog technologies, transforming the way payment processors, banks, online businesses and others interact with current and prospective customers. At specific tasks they can outperform human judgment and intuition, verifying the identity of the person on the other end of an online transaction and detecting fraud at scale. However, there are limitations, shortcomings and misapplications of data science that can affect results. To produce reliable and accurate decisions, AI and machine learning depend on three basic elements. The first is good data.
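
The "garbage in, garbage out" point is easy to demonstrate. Below is a minimal, hypothetical sketch (it is not from the article, and it assumes scikit-learn is available) that trains the same model twice -- once on clean labels, once with 30% of the training labels flipped -- and compares held-out accuracy.

```python
# Minimal "garbage in, garbage out" demonstration (hypothetical, assumes scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Corrupt 30% of the training labels to simulate bad input data.
rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
noisy_model = LogisticRegression(max_iter=1000).fit(X_tr, noisy)

print("accuracy with clean labels:", clean_model.score(X_te, y_te))
print("accuracy with noisy labels:", noisy_model.score(X_te, y_te))
```

The point of the sketch is only that the second score drops even though the model and the test data are identical; the quality of the inputs is what changed.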


Amazon needs to come clean about racial bias in its algorithms

#artificialintelligence

Yesterday, Amazon's quiet Rekognition program became very public, as new documents obtained by the ACLU of Northern California showed Amazon partnering with the city of Orlando and police camera vendors like Motorola Solutions to deliver an aggressive new real-time facial recognition service. Amazon insists that the service is a simple object-recognition tool and will only be used for legal purposes. But even if we take the company at its word, the project raises serious concerns, particularly around racial bias. Facial recognition systems have long struggled with higher error rates for women and people of color -- error rates that can translate directly into more stops and arrests for marginalized groups. And while some companies have responded with public bias testing, Amazon hasn't shared any data on the issue, if it has collected such data at all.
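
For context, the "bias testing" mentioned above usually means measuring error rates separately for each demographic group. Here is a hedged, toy sketch of that idea; the function, the group names, and the audit log are invented for illustration and have nothing to do with Amazon's internal data.

```python
# Hypothetical per-group bias check: false match rate by demographic group.
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, predicted_match, is_true_match) tuples."""
    non_matches = defaultdict(int)    # trials where the pair is truly different people
    false_matches = defaultdict(int)  # ...that the system nevertheless "matched"
    for group, predicted, actual in records:
        if not actual:
            non_matches[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

# Invented audit log: (demographic group, system said "match", ground truth)
log = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_match_rate_by_group(log))  # e.g. {'group_a': 0.33, 'group_b': 0.67}
```

A gap between the two groups' false match rates is exactly the kind of number a public bias audit would report.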


A pioneer in predictive policing is starting a troubling new project

#artificialintelligence

Jeff Brantingham is as close as it gets to putting a face on the controversial practice of "predictive policing." Over the past decade, the University of California-Los Angeles anthropology professor adapted his Pentagon-funded research in forecasting battlefield casualties in Iraq to predicting crime for American police departments, patenting his research and founding a for-profit company named PredPol, LLC. Around 2012, PredPol quickly became one of the market leaders in the nascent field of crime prediction, but it also came under fire from activists and civil libertarians who argued the firm provided a sort of "tech-washing" for racially biased, ineffective policing methods. Now, Brantingham is using military research funding for another tech-and-policing collaboration with potentially damaging repercussions: using machine learning, the Los Angeles Police Department's criminal data, and an outdated gang territory map to automate the classification of "gang-related" crimes. Being classified as a gang member, or having a crime labeled gang-related, can result in additional criminal charges, heavier prison sentences, or inclusion in a civil gang injunction that restricts a person's movements and ability to associate with other people.


Axon launches AI ethics board to study the dangers of facial recognition

#artificialintelligence

Axon, formerly known as Taser, has launched a new "AI ethics board" to guide its use of artificial intelligence. The board will meet twice a year to discuss the ethical implications of upcoming Axon products, particularly how their use might affect community policing. Privacy groups responded to the news by urging the board to pay close attention to Axon's development of facial recognition technology. The use of real-time facial recognition in policing has become a contentious topic, as police forces in the UK and China begin testing the technology in public. The UK has installed CCTV cameras with facial recognition to scan for hooligans at soccer games, while Chinese police have integrated the technology into sunglasses to scan travelers at train stations.


Ending Racial Biases in Face Recognition AI – Kairos – Medium

#artificialintelligence

This resonates with me very personally as a minority founder in the face recognition space. So deeply, in fact, that I wrote about my thoughts in an October 2016 article titled "Kairos' Commitment to Your Privacy and Facial Recognition Regulations," in which I acknowledged the impact of the problem and expressed Kairos' position on the importance of rectification.



Racial Bias in Facial Recognition Software - Algorithmia Blog

#artificialintelligence

We've all heard about racial bias in artificial intelligence via the media, whether it's found in recidivism software or in object detection that mislabels African American people as gorillas. With the increase in media attention, people have grown more aware that implicit human bias can affect the AI systems we build.


A Law Enforcement A.I. Is No More or Less Biased Than People

#artificialintelligence

Some people champion artificial intelligence as a solution to the kinds of biases that humans fall prey to. Even simple statistical tools can outperform people at tasks in business, medicine, academia, and crime reduction. Others chide AI for systematizing bias, which it can do even when bias is not programmed in. In 2016, ProPublica released a much-cited report arguing that a common algorithm for predicting criminal risk showed racial bias. Now a new research paper reveals that, at least in the case of the algorithm covered by ProPublica, neither side has much to get worked up about. The algorithm was no more or less accurate or fair than people. What's more, the paper shows that in some cases the need for advanced AI may be overhyped.
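
Much of the dispute turns on how "accurate" and "fair" are measured: ProPublica focused on false positive rates broken out by race, while defenders of the tool pointed to overall accuracy. The sketch below is a hedged illustration of that comparison with invented rows, not the real COMPAS data; it shows how two groups can have identical accuracy while one has a much higher false positive rate.

```python
# Hypothetical fairness comparison: accuracy and false positive rate per group.
def group_metrics(rows):
    """rows: iterable of (group, predicted_high_risk, reoffended) tuples."""
    stats = {}
    for group, pred, actual in rows:
        s = stats.setdefault(group, {"n": 0, "correct": 0, "neg": 0, "fp": 0})
        s["n"] += 1
        s["correct"] += int(pred == actual)
        if not actual:                # person did not re-offend
            s["neg"] += 1
            s["fp"] += int(pred)      # ...but was labeled high risk anyway
    return {
        g: {"accuracy": s["correct"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else None}
        for g, s in stats.items()
    }

rows = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", False, False), ("group_b", False, True),
]
print(group_metrics(rows))  # equal accuracy, unequal false positive rates
```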


Mathwashing: How Algorithms Can Hide Gender and Racial Biases - The New Stack

#artificialintelligence

Scholars have long pointed out that the way languages are structured and used can say a lot about the worldview of their speakers: what they believe, what they hold sacred, and what their biases are. We know humans have their biases; machines, by contrast, are often assumed to be inherently objective. But does that assumption hold for a new generation of intelligent, algorithmically driven machines that are learning our languages and training on human-generated datasets? By virtue of being designed by humans, and by learning natural human languages, might these artificially intelligent machines pick up some of those same human biases?
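
One concrete way researchers probe this is to measure word associations inside the learned embeddings themselves. Below is a hedged, toy sketch of that idea; the tiny 3-dimensional vectors and the word lists are invented stand-ins for a real pretrained embedding such as word2vec or GloVe.

```python
# Toy illustration of measuring association bias in word embeddings (invented vectors).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(target, set_a, set_b, vectors):
    """Mean similarity to set_a minus mean similarity to set_b (positive => leans A)."""
    t = vectors[target]
    sim_a = np.mean([cosine(t, vectors[w]) for w in set_a])
    sim_b = np.mean([cosine(t, vectors[w]) for w in set_b])
    return sim_a - sim_b

# Invented 3-d "embeddings" purely for illustration.
vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "he":       np.array([1.0, 0.0, 0.1]),
    "him":      np.array([0.95, 0.05, 0.15]),
    "she":      np.array([0.1, 1.0, 0.1]),
    "her":      np.array([0.05, 0.95, 0.2]),
}
print(association("engineer", ["he", "him"], ["she", "her"], vectors))
```

With real embeddings trained on human text, a persistently positive score for occupation words against gendered word sets is one signature of the inherited bias the article describes.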


Researchers combat gender and racial bias in Artificial Intelligence

#artificialintelligence

When Timnit Gebru was a student at Stanford University's prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the US. While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to bias--racial, gender, socio-economic. She was also horrified by a report that found that a computer programme widely used to predict whether a criminal will re-offend discriminated against people of colour. So earlier this year, Gebru, 34, joined a Microsoft Corp team called FATE--for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.