Racial Bias


Ending Racial Biases in Face Recognition AI – Kairos – Medium

#artificialintelligence

This resonates with me very personally as a minority founder in the face recognition space. So deeply, in fact, that I wrote about my thoughts in an October 2016 article titled "Kairos' Commitment to Your Privacy and Facial Recognition Regulations," in which I acknowledged the impact of the problem and expressed Kairos' position on the importance of rectifying it.



Racial Bias in Facial Recognition Software - Algorithmia Blog

#artificialintelligence

We've all heard about racial bias in artificial intelligence via the media, whether it's found in recidivism software or in object detection that mislabels African American people as gorillas. With the increase in media attention, people have grown more aware that the implicit biases people carry can affect the AI systems we build.


Mathwashing: How Algorithms Can Hide Gender and Racial Biases - The New Stack

#artificialintelligence

Scholars have long pointed out that the way languages are structured and used can say a lot about the worldview of their speakers: what they believe, what they hold sacred, and what their biases are. We know humans have their biases, but in contrast, many of us might have the impression that machines are somehow inherently objective. But does that assumption apply to a new generation of intelligent, algorithmically driven machines that are learning our languages and training on human-generated datasets? By virtue of being designed by humans, and by learning natural human languages, might these artificially intelligent machines pick up on some of those same human biases?
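One concrete place to see this is in learned word embeddings, where absorbed associations are directly measurable. Below is a minimal sketch, not from the article, that probes a publicly available GloVe model (loaded through gensim's downloader) for gendered occupation associations; the model name and the word lists are illustrative choices, not anything the piece specifies.

```python
# Hedged sketch: probing a pre-trained embedding for learned associations.
# The model and word lists are illustrative, not from the article.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # GloVe trained on Wikipedia + news

# Cosine similarity reflects co-occurrence patterns in human-written text,
# so biased usage shows up directly in the vector geometry.
for occupation in ["nurse", "engineer", "homemaker", "programmer"]:
    print(
        f"{occupation:12s}",
        "she:", round(float(model.similarity("she", occupation)), 3),
        "he:", round(float(model.similarity("he", occupation)), 3),
    )
```

Nothing here was explicitly programmed; the asymmetries fall out of the human-generated training text, which is the article's point.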


Researchers Combat Gender and Racial Bias in Artificial Intelligence

#artificialintelligence

When Timnit Gebru was a student at Stanford University's prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the US. While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to bias--racial, gender, socio-economic. She was also horrified by a ProPublica report that found a computer programme widely used to predict whether a criminal will re-offend discriminated against people of colour. So earlier this year, Gebru, 34, joined a Microsoft Corp team called FATE--for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.


Google Expands Troll-Fighting Tool Amid Concerns of Racial Bias

#artificialintelligence

A unit of Google dedicated to policy and ideas is pushing forward with a plan to tame toxic comments on the Internet--even as critics warn that its technology, which relies on AI-powered algorithms, can promote the sort of sexism and racism Google is trying to diminish. On Thursday, the unit known as Jigsaw announced a new community page where developers can contribute "hacks" to build out its comment-moderation tool. That tool, which is used by the likes of the New York Times and the Guardian, helps publishers moderate Internet comments at scale. The system, which depends on artificial intelligence, allowed the Times to expand the scope of its reader comments tenfold while still maintaining a civil discussion.
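The article doesn't detail Jigsaw's model, but the general shape of such moderation tools is a supervised text classifier that emits a toxicity score, with publishers acting on thresholds. The toy sketch below illustrates that shape only; the training comments are invented, and in a real system the labeled corpus is large, which is exactly where the labeling bias critics describe can enter.

```python
# Toy sketch (not Jigsaw's actual system): a bag-of-words toxicity
# classifier of the general kind used to triage comments at scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "you are an idiot", "this is garbage, get lost",
    "thanks for the thoughtful article", "I respectfully disagree",
    "nobody wants you here", "great point, well argued",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = toxic, 0 = acceptable (invented data)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_comments, labels)

# Publishers typically act on a score threshold rather than a hard label:
# auto-hide above one cutoff, route to a human moderator in between.
for comment in ["what a stupid take", "interesting perspective"]:
    print(comment, "->", round(clf.predict_proba([comment])[0][1], 2))
```

If identity terms happen to co-occur with toxic labels in the training data, the classifier will score benign comments containing those terms as toxic, which is the failure mode the critics warn about.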


Seeing the World Through Google's Eyes – The Ringer

#artificialintelligence

Amid a handful of announcements at the I/O developers' conference Wednesday, Google introduced a feature that allows you to Shazam the world. Point it at a concert poster, and your screen will pull up tickets. Point it at a flower, and it will school you on the species. The feature was a crowd favorite. At no other point in the company's two-hour keynote did the audience cheer as loudly as when engineering VP Scott Huffman aimed the camera at a router's network information and automatically logged onto the Wi-Fi.


Durham Police to use AI to predict future crimes of suspects, despite racial bias concerns

The Independent

Police officers in Durham will soon use artificial intelligence to determine whether a suspect should be kept in custody or released on bail. The system, which is called the Harm Assessment Risk Tool (Hart), has been trained using Durham Constabulary data collected from 2008 to 2013, and will also consider a suspect's gender and postcode. It's designed to help officers assess how risky it would be to release suspects.
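Hart has been described in press coverage as a random-forest model. As a hedged illustration only, a classifier of that general shape might look like the sketch below; every feature and record here is invented, but it shows how gender and postcode enter as predictors, which is what drives the bias concerns.

```python
# Hedged sketch of a Hart-style custody risk model. The feature set and
# data are entirely invented for illustration; only the model family
# (random forest) follows press descriptions of the tool.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical custody records (stand-ins for 2008-2013 data).
records = pd.DataFrame({
    "age": [19, 34, 27, 45, 22, 31],
    "prior_offences": [3, 0, 1, 5, 2, 0],
    "gender": [0, 1, 0, 0, 1, 1],          # encoded categorical
    "postcode_area": [2, 7, 2, 4, 7, 1],   # encoded categorical
    "reoffended_within_2y": [1, 0, 0, 1, 1, 0],
})

X = records.drop(columns="reoffended_within_2y")
y = records["reoffended_within_2y"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# An officer-facing tool would bucket the probability into low/medium/high.
suspect = pd.DataFrame([{"age": 25, "prior_offences": 1,
                         "gender": 0, "postcode_area": 2}])
print("estimated re-offending risk:", model.predict_proba(suspect)[0][1])
```

Because postcode correlates with race and income, a model like this can reproduce historical policing patterns even though race is never an explicit input.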


AI Learns Gender and Racial Biases from Language

IEEE Spectrum Robotics Channel

Tech giants and startups that use machine learning--especially cutting-edge deep learning algorithms--will need to grapple with the potential biases in their AI systems sooner rather than later. So far there seems to be growing awareness and discussion of the problem rather than any systematic agreement on how to handle bias in machine learning, Friedler explains. One approach involves scrubbing any biases out of the datasets used to train machine learning systems. But that may come at the cost of losing some useful linguistic and cultural meanings. People will need to make tough ethical calls on what bias looks like and how to proceed from there, lest they allow such biases to run unchecked within increasingly powerful and widespread AI systems. "We need to decide which of these biases are linguistically useful and which ones are societally problematic," Friedler says. "And if we decide they're societally problematic, we need to purposely decide to remove this information."
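The "scrubbing" approach has a concrete geometric reading in the word-embedding setting: identify a bias direction and remove each vector's component along it, in the spirit of Bolukbasi et al.'s hard debiasing. A simplified sketch with invented toy vectors:

```python
# Simplified sketch of embedding debiasing: subtract each vector's
# projection onto an identified bias direction. Toy vectors are invented;
# real work uses trained embeddings and curated definitional word pairs.
import numpy as np

def debias(vec, bias_direction):
    """Remove the component of vec that lies along the bias direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vec - np.dot(vec, b) * b

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-d "embeddings": take the bias direction to be he - she.
he = np.array([1.0, 0.2, 0.0])
she = np.array([-1.0, 0.2, 0.0])
nurse = np.array([-0.6, 0.5, 0.3])  # leans toward "she" before debiasing
direction = he - she

nurse_debiased = debias(nurse, direction)
print("before:", round(cos(nurse, direction), 3))   # clearly gendered
print("after: ", round(cos(nurse_debiased, direction), 3))  # ~0
```

The tradeoff Friedler describes is visible here: whatever information lives along that direction, useful or not, is removed together with the bias, which is why deciding what to scrub is an ethical call rather than a purely technical one.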