Civil Rights & Constitutional Law


Stanford professor says face-reading AI will detect IQ

Daily Mail

Stanford researcher Dr Michal Kosinski went viral last week after publishing research suggesting AI can tell whether someone is straight or gay based on photos. Dr Kosinski claims he is now working on AI software that can identify political beliefs, with preliminary results proving positive.


AIs that learn from photos become sexist

Daily Mail

Researchers tested two of the largest collections of photos used to train image-recognition AIs and discovered that sexism was rampant. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. In one example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the dataset that associate kitchens with women. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing 'reminds me of this failed beta test.' Princeton University conducted a word-association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
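For readers curious what such a word-association test looks like in practice, here is a minimal sketch in Python. The vectors are random toy stand-ins rather than real GloVe embeddings, and the 'nudge' step manufactures the gender association that the Princeton study measured in actual web-trained vectors, so the output is illustrative only.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two word vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy stand-ins for GloVe vectors; a real test would load pretrained
    # embeddings (e.g. glove.6B.300d) instead of these random values.
    rng = np.random.default_rng(0)
    words = ["he", "she", "programmer", "homemaker"]
    vectors = {w: rng.normal(size=50) for w in words}

    # Manufacture the association the study found in real embeddings.
    vectors["programmer"] += 0.5 * vectors["he"]
    vectors["homemaker"] += 0.5 * vectors["she"]

    def gender_association(word):
        # Positive means closer to 'he'; negative means closer to 'she'.
        return (cosine(vectors[word], vectors["he"])
                - cosine(vectors[word], vectors["she"]))

    for word in ["programmer", "homemaker"]:
        print(f"{word}: {gender_association(word):+.3f}")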


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

In addition to these blatantly racial face filters – which change everything from hair color to skin tone to eye color – other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin color. FaceApp CEO Yaroslav Goncharov defended the Asian, Black, Caucasian and Indian filters in an email to The Verge: "The ethnicity change filters have been designed to be equal in all aspects." As for the "hot" filter backlash, Goncharov said: "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behavior."


Artificial intelligence won't save the internet from porn

Engadget

If we can't agree on what constitutes pornography, we can't effectively teach our computers to "know it when they see it." In the early days of the world wide web, US libraries and schools implemented filters based on rudimentary keyword searches in order to remain in compliance with the Children's Internet Protection Act. A 2006 report on internet filtering from NYU's Brennan Center for Justice referred to early keyword filters and their AI successors as "powerful, often irrational, censorship tools." And as the New York Times reported, Facebook reinstated one censored post only after thousands of users posted the photo to their timelines in protest.
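The "irrational" behavior of those early keyword filters is easy to reproduce. Below is a minimal sketch, with an invented blocklist and invented sample pages, showing how naive substring matching of the kind CIPA-era software relied on blocks innocuous health and travel pages alongside actual pornography.

    # Hypothetical blocklist; real filtering products shipped far larger lists.
    BLOCKED_TERMS = ["sex", "breast", "porn"]

    def is_blocked(page_text: str) -> bool:
        # Naive substring matching, as in early keyword filters.
        text = page_text.lower()
        return any(term in text for term in BLOCKED_TERMS)

    pages = [
        "Breast cancer screening guidelines",   # health page, blocked anyway
        "Visiting Essex: a travel guide",       # 'Essex' contains 'sex'
        "Free hardcore porn",                   # the intended target
    ]
    for page in pages:
        print(f"blocked={is_blocked(page)}  {page}")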


Facial recognition out of control? Half of US adults have their faces on police databases

ZDNet

New research shines a light on US law enforcement's unchecked use of surveillance cameras and facial recognition to scan the public. According to the study, up to 30 states in the US allow law enforcement to run facial recognition search and matching against driver's license ID photo databases. The research distinguishes between using face recognition to check the ID of someone being legally held and scans of people walking by surveillance cameras: "A face-recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver's license database, or continuous, real-time scans of people walking by a surveillance camera."


Rights groups request U.S. probe police use of facial recognition

Daily Mail

'By using face recognition to scan the faces on 26 states' driver's license and ID photos, police and the FBI have basically enrolled half of all adults in a massive virtual line-up.' Researchers even found that the Maricopa County Sheriff's Office in Arizona has enrolled all of Honduras' driver's licenses and mug shots into its database.


Cops Have a Database of 117M Faces. You're Probably in It

WIRED

But a new, comprehensive report on the status of facial recognition as a tool in law enforcement shows the sheer scope and reach of the FBI's database of faces and those of state-level law enforcement agencies: roughly half of American adults are included in those collections. The 150-page report, released on Tuesday by the Center for Privacy & Technology at the Georgetown University law school, found that law enforcement databases now include the facial recognition information of 117 million Americans, about one in two U.S. adults. Meanwhile, since law enforcement facial recognition systems often include mug shots, and arrest rates among African Americans are higher than in the general population, algorithms may be disproportionately likely to find a match for black suspects. In reaction to the report, a coalition of more than 40 civil rights and civil liberties groups, including the American Civil Liberties Union and The Leadership Conference on Civil and Human Rights, launched an initiative on Tuesday asking the Department of Justice's Civil Rights Division to evaluate current use of facial recognition technology around the country.
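The mechanism behind that disparity can be illustrated with a toy simulation: the more heavily a group is represented in the database, the more chances there are for a lookalike, so searches against random probe faces "hit" more often for that group. Everything below is invented for illustration: random vectors stand in for real face templates, and the group sizes and match threshold are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(42)
    DIM = 64          # embedding size (arbitrary)
    THRESHOLD = 0.5   # similarity needed to declare a "match" (arbitrary)

    def normed(v):
        # Scale each row to unit length so dot products are cosines.
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    # Invented composition: group_B is overrepresented in the database,
    # as mug shot collections are for African Americans per the report.
    db_sizes = {"group_A": 2000, "group_B": 8000}
    database = {g: normed(rng.normal(size=(n, DIM))) for g, n in db_sizes.items()}

    def hit_rate(group, trials=500):
        # How often a random probe face "matches" someone in the database.
        probes = normed(rng.normal(size=(trials, DIM)))
        sims = probes @ database[group].T
        return float((sims.max(axis=1) >= THRESHOLD).mean())

    for g in db_sizes:
        print(g, hit_rate(g))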


Artificial Intelligence's White Guy Problem

#artificialintelligence

According to some prominent voices in the tech world, artificial intelligence presents a looming existential threat to humanity: Warnings by luminaries like Elon Musk and Nick Bostrom about "the singularity" -- when machines become smarter than humans -- have attracted millions of dollars and spawned a multitude of conferences. But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces. We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future.
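Crawford's warning has a practical counterpart: report a model's error rate per demographic group, not just in aggregate, since an aggregate number can look fine while one group fares much worse. A minimal sketch of that disaggregated check, with simulated labels and invented error rates:

    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated evaluation set: a model trained mostly on one group
    # recognizes that group more reliably. The 5% and 20% error rates
    # are invented purely to illustrate the disaggregated check.
    n = 10_000
    group = rng.choice(["majority", "minority"], size=n, p=[0.8, 0.2])
    error_rate = np.where(group == "majority", 0.05, 0.20)
    correct = rng.random(n) >= error_rate

    print(f"aggregate accuracy: {correct.mean():.3f}")  # hides the gap
    for g in ["majority", "minority"]:
        mask = group == g
        print(f"{g} accuracy: {correct[mask].mean():.3f} (n={mask.sum()})")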


A beauty contest was judged by AI and the robots didn't like dark skin

#artificialintelligence

Beauty.AI – which was created by a "deep learning" group called Youth Laboratories and supported by Microsoft – relied on large datasets of photos to build an algorithm that assessed beauty. The robot judges overwhelmingly favored white contestants, despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa. Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion. A ProPublica investigation earlier this year found that software used to predict future criminals is biased against black people, which can lead to harsher sentencing.


Racism and other biases in artificial intelligence algorithms

#artificialintelligence

According to some prominent voices in the tech world, artificial intelligence presents a looming existential threat to humanity: Warnings by luminaries like Mr Elon Musk and Professor Nick Bostrom about "the singularity" - when machines become smarter than humans - have attracted millions of dollars and spawned a multitude of conferences. A ProPublica investigation found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes.
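That "twice as likely to mistakenly flag" figure is a comparison of false positive rates by group: among defendants who did not reoffend, what share were scored high risk. A minimal sketch of the computation, on simulated data rather than the actual COMPAS records, with the flagging rates invented to mirror the reported pattern:

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated defendants (NOT the COMPAS data): ground truth plus the
    # tool's high-risk flag, with invented flagging rates per group.
    n = 10_000
    group = rng.choice(["black", "white"], size=n)
    reoffended = rng.random(n) < 0.35
    p_flag = np.where(reoffended, 0.60,
                      np.where(group == "black", 0.45, 0.23))
    high_risk = rng.random(n) < p_flag

    def false_positive_rate(mask):
        # Among people who did not reoffend, how many were flagged high risk?
        negatives = mask & ~reoffended
        return (high_risk & negatives).sum() / negatives.sum()

    for g in ["black", "white"]:
        print(f"{g} false positive rate: {false_positive_rate(group == g):.2f}")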