MIT Researcher: AI Has a Race Problem, and We Need to Fix It


The next generation of AI is poisoned with bias against dark skin, Joy Buolamwini says. Artificial intelligence is increasingly affecting our lives in ways most of us haven't even thought about. Even if we don't have emotional androids plotting revenge on humankind (yet), we're surrounded more and more by computers trained to look us over and make life-changing decisions about us. Some of the brightest minds in technology, including a hive of them clustered around Boston, are tinkering with machines designed to decide what kinds of ads we see, whether we get flagged by the police, whether we get a job, or even how long we spend behind bars. But they have a very big problem: many of these systems don't work properly, or at all, for people with dark skin.

AI could help government agencies find the optimum places for refugees to relocate


In 2016, an estimated 65.6 million people across the globe were forced from their homes by everything from war to human rights violations.

New iPhone brings face recognition (and fears) to masses

Daily Mail

Apple will let you unlock the iPhone X with your face, a move likely to bring facial recognition to the masses. But along with the rollout of the technology come concerns over how it could be used. Despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would 'normalise' the technology. This could open the door to broader use by law enforcement, marketers or others of a largely unregulated tool, creating a 'surveillance technology that is abused', experts have warned.

New iPhone brings face recognition (and fears) to the masses

The Japan Times

WASHINGTON – Apple will let you unlock the iPhone X with your face, a move likely to bring facial recognition to the masses, along with concerns over how the technology may be used for nefarious purposes. Apple's newest device, set to go on sale on Friday, is designed to be unlocked with a facial scan, with a number of privacy safeguards: the data will be stored only on the phone and not in any databases. Unlocking one's phone with a face scan may offer added convenience and security for iPhone users, according to Apple, which claims its "neural engine" for Face ID cannot be tricked by a photo or hacker. While other devices have offered facial recognition, Apple is the first to pack the technology allowing for a three-dimensional scan into a hand-held phone. But despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would "normalize" the technology and open the door to broader use by law enforcement, marketers or others of a largely unregulated tool.

Tool checks whether websites have built-in prejudice

Daily Mail

From reports that Amazon's same-day delivery is less available in black neighbourhoods to Microsoft's 'racist' chatbots, signs of online prejudice are becoming increasingly common. Scientists now say they can spot racist and sexist software using a tool that tests for implicit bias in algorithms running on websites and apps. By changing specific variables, such as race, gender or other distinctive traits, the tool, called Themis, can determine whether the software discriminates against specific groups of people. Previous research suggests technology is picking up racist and sexist patterns as it learns from humans, hindering its ability to make balanced decisions. Themis is freely available code that mimics the process of entering data, such as a loan application, into a given website or app.
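As described, Themis probes a system by repeatedly submitting inputs that differ only in a sensitive attribute and checking whether the outcome changes. A minimal sketch of that idea, where `decision` is a hypothetical stand-in for a loan-approval system (not Themis itself, whose internals are not detailed here):

```python
import random

def decision(applicant):
    """Hypothetical system under test. This toy rule leaks the sensitive
    attribute, so the probe below should flag it as discriminatory."""
    return applicant["income"] > 40000 and applicant["group"] != "B"

def discrimination_score(decision_fn, attribute, values, trials=1000, seed=0):
    """Estimate how often flipping ONLY `attribute` changes the outcome."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        base = {"income": rng.randint(10000, 100000),
                "group": rng.choice(values)}
        outcomes = set()
        for v in values:
            probe = dict(base)
            probe[attribute] = v          # vary only the sensitive attribute
            outcomes.add(decision_fn(probe))
        if len(outcomes) > 1:             # same person, different answer
            flips += 1
    return flips / trials

score = discrimination_score(decision, "group", ["A", "B"])
```

A score of zero means the attribute never changed a decision in the sampled inputs; the biased toy rule above scores well above zero, while a rule that ignores `group` scores exactly zero.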

AIs that learn from photos become sexist

Daily Mail

Image recognition AIs that have been trained on some of the most-used research photo collections are developing sexist biases, according to a new study. University of Virginia computer science professor Vicente Ordóñez and colleagues tested two of the largest collections of photos and data used to train these types of AIs (including one supported by Facebook and Microsoft) and discovered that sexism was rampant. He began the research after noticing a disturbing pattern of sexism in the guesses made by the image recognition software he was building. 'It would see a picture of a kitchen and more often than not associate it with women, not men,' Ordóñez told Wired, adding it also linked women with images of shopping, washing, and even kitchen objects like forks. The AI was also associating men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment.
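The pattern Ordóñez noticed can be measured directly in the training data by counting how often each activity label co-occurs with each gender label. A toy sketch with made-up annotations (the data below is hypothetical, for illustration only):

```python
# Each entry stands in for the label set attached to one training image.
annotations = [
    {"woman", "kitchen"}, {"woman", "cooking"}, {"woman", "shopping"},
    {"man", "kitchen"},   {"man", "sports"},    {"man", "coaching"},
    {"woman", "cooking"}, {"man", "sports"},
]

def woman_share(label):
    """Of the images carrying `label`, what fraction are labeled 'woman'?"""
    with_label = [a for a in annotations if label in a]
    return sum("woman" in a for a in with_label) / len(with_label)
```

A share far from the dataset's overall gender balance for a given activity is exactly the kind of skew the study reports; a model trained on such data can then amplify it at prediction time.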

Rise of the racist robots – how AI is learning all our worst impulses


In May last year, a stunning report claimed that a computer program used by US courts for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend, wrongly flagging them at almost twice the rate of white defendants (45% versus 24%), according to the investigative journalism organisation ProPublica. Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too. How could this have happened?
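ProPublica's 45%-versus-24% comparison is a per-group false positive rate: among defendants who did not go on to reoffend, the share the tool nevertheless flagged as high risk. A sketch of that calculation over hypothetical records (the data is invented for illustration, not ProPublica's dataset):

```python
def false_positive_rate(records, group):
    """Among members of `group` who did NOT reoffend, the fraction
    wrongly flagged as high risk."""
    did_not_reoffend = [r for r in records
                        if r["group"] == group and not r["reoffended"]]
    wrongly_flagged = sum(r["high_risk"] for r in did_not_reoffend)
    return wrongly_flagged / len(did_not_reoffend)

# Toy records, hypothetical:
records = [
    {"group": "black", "reoffended": False, "high_risk": True},
    {"group": "black", "reoffended": False, "high_risk": False},
    {"group": "white", "reoffended": False, "high_risk": False},
    {"group": "white", "reoffended": False, "high_risk": False},
    {"group": "white", "reoffended": False, "high_risk": True},
    {"group": "black", "reoffended": True,  "high_risk": True},
]
```

Note that a tool can be equally "accurate" overall for both groups while these error rates still diverge sharply, which is why the metric chosen matters.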

If you weren't raised in the Internet age, you may need to worry about workplace age discrimination

Los Angeles Times

Although people of both genders struggle with age discrimination, research has shown women begin to experience age discrimination in hiring before they reach 50, whereas men don't experience it until several years later. Just as technology is creating barriers inside the workplace for older employees, online applications and search engines could be hurting older workers looking for jobs. Many applications have required fields asking for date of birth and high school graduation date, information many older applicants choose to leave off their resumes. Furthermore, McCann said, some search engines allow people to filter their search by high school graduation date, allowing employers and applicants to screen people and positions out of the running.

Princeton researchers discover why AIs become racist and sexist


Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. As an example, Caliskan made a video showing how the Google Translate AI mistranslates words into English based on stereotypes it has learned about gender. Though Caliskan and her colleagues found language was full of biases rooted in prejudice and stereotypes, it was also full of latent truths. "Language reflects facts about the world," Caliskan told Ars.
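At its core, WEAT scores a word by comparing its average cosine similarity to two sets of attribute words. A minimal sketch with toy two-dimensional "embeddings" (real WEAT runs on high-dimensional vectors such as GloVe and adds an effect size and permutation test, which this sketch omits):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(w, A, B):
    """WEAT-style association: mean similarity to attribute set A
    minus mean similarity to attribute set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Toy 2-D vectors, invented for illustration:
career = [(1.0, 0.1), (0.9, 0.2)]   # e.g. "executive", "salary"
family = [(0.1, 1.0), (0.2, 0.9)]   # e.g. "home", "parents"
he, she = (0.95, 0.15), (0.15, 0.95)
```

In this toy geometry `he` associates with the career set and `she` with the family set, mirroring the kind of stereotype WEAT detected in real embeddings trained on web text.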

Police Using Technology To Fight Crime Threatens Black Neighborhoods

International Business Times

But the city's new effort seems to ignore evidence, including recent research from members of our policing study team at the Human Rights Data Analysis Group, that predictive policing tools reinforce, rather than reimagine, existing police practices. Machine-learning algorithms learn to make predictions by analyzing patterns in an initial training data set and then looking for similar patterns in new data as it comes in. Our recent study, by Human Rights Data Analysis Group's Kristian Lum and William Isaac, found that predictive policing vendor PredPol's purportedly race-neutral algorithm targeted black neighborhoods at roughly twice the rate of white neighborhoods when trained on historical drug crime data from Oakland, California. Addressing this should start with community members and police departments discussing policing priorities and measures of police performance.
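The self-reinforcing dynamic the researchers describe (patrols go where crime was previously recorded, and patrol presence in turn inflates recorded crime there) can be illustrated with a toy simulation. The update rule below is an assumption made for illustration, not PredPol's actual model:

```python
def simulate(true_rates, steps=20):
    """Allocate patrols in proportion to recorded crime; new recorded crime
    is proportional to patrol presence times the true rate. Returns each
    neighborhood's final share of recorded crime."""
    recorded = list(true_rates)  # seed with one round of uniform observation
    for _ in range(steps):
        total = sum(recorded)
        patrol_share = [r / total for r in recorded]
        observed = [p * t for p, t in zip(patrol_share, true_rates)]
        recorded = [r + o for r, o in zip(recorded, observed)]
    return [r / sum(recorded) for r in recorded]

shares = simulate([1.0, 1.2])  # neighborhood B's true rate is 20% higher
```

Even a small initial difference compounds: neighborhood B's share of *recorded* crime drifts above its share of *true* crime (1.2/2.2), because each round of patrol allocation is trained on data the previous allocation produced.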