Civil Rights & Constitutional Law


Artificial intelligence won't save the internet from porn

Engadget

If we can't agree on what constitutes pornography, we can't effectively teach our computers to "know it when they see it." In the early days of the world wide web, US libraries and schools implemented filters based on rudimentary keyword searches in order to remain in compliance with the Children's Internet Protection Act. A 2006 report on internet filtering from NYU's Brennan Center for Justice referred to early keyword filters and their AI successors as "powerful, often irrational, censorship tools." The errors cut both ways: when Facebook removed the Pulitzer Prize-winning "Napalm Girl" photograph as nudity, it reinstated the original post, as the New York Times reported, only after thousands of users posted the photo to their timelines in protest.
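The "irrational" behavior the Brennan Center describes is easy to reproduce. A minimal sketch of a substring blocklist of the kind early filters used (the terms and example pages below are invented for illustration, not taken from any actual CIPA-era product):

```python
# Naive keyword filter: block a page if any blocklisted term appears
# anywhere in its text. Blocklist and examples are hypothetical.
BLOCKLIST = {"sex", "xxx", "porn"}

def is_blocked(text: str) -> bool:
    """Return True if any blocklisted term occurs as a substring."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("Hardcore xxx videos"))       # True  (intended block)
print(is_blocked("Middlesex County Library"))  # True  (false positive)
print(is_blocked("Essex tourism guide"))       # True  (false positive)
print(is_blocked("cute puppies"))              # False
```

Substring matching has no notion of context, so legitimate pages are censored along with the targets -- the classic "Scunthorpe problem" that made these tools so blunt.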


Facial recognition out of control? Half of US adults have their faces on police databases

ZDNet

New research shines a light on US law enforcement's unchecked use of surveillance cameras and facial recognition to scan the public. According to the study, up to 30 US states allow law enforcement to run facial recognition searches against driver's license ID photo databases. The research distinguishes between using face recognition to check the ID of someone being legally held and scanning people who walk past surveillance cameras: "A face-recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver's license database, or continuous, real-time scans of people walking by a surveillance camera."


Rights groups request U.S. probe police use of facial recognition

Daily Mail

'By using face recognition to scan the faces on 26 states' driver's license and ID photos, police and the FBI have basically enrolled half of all adults in a massive virtual line-up.' Researchers even found that the Maricopa County Sheriff's Office in Arizona has enrolled all of Honduras's driver's licenses and mug shots into its database.


Cops Have a Database of 117M Faces. You're Probably in It

WIRED

But a new, comprehensive report on the status of facial recognition as a tool in law enforcement shows the sheer scope and reach of the FBI's database of faces and those of state-level law enforcement agencies: roughly half of American adults are included in those collections. The 150-page report, released on Tuesday by the Center on Privacy & Technology at Georgetown University's law school, found that law enforcement databases now include the facial recognition information of 117 million Americans, about one in two U.S. adults. Meanwhile, since law enforcement facial recognition systems often include mug shots, and arrest rates among African Americans are higher than in the general population, algorithms may be disproportionately able to find a match for black suspects. In reaction to the report, a coalition of more than 40 civil rights and civil liberties groups, including the American Civil Liberties Union and The Leadership Conference on Civil and Human Rights, launched an initiative on Tuesday asking the Department of Justice's Civil Rights Division to evaluate current use of facial recognition technology around the country.
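The "investigatory search" the Georgetown report describes is a one-to-many search: every database entry is ranked by similarity to a probe image's feature vector and the closest candidates are returned. A minimal sketch, with invented three-dimensional vectors standing in for the high-dimensional embeddings a real face-recognition system extracts:

```python
import math

# Hypothetical face "embeddings": tiny vectors standing in for real features.
DATABASE = {
    "license_001": (0.90, 0.10, 0.30),
    "license_002": (0.20, 0.80, 0.50),
    "mugshot_117": (0.85, 0.15, 0.35),
}

def search(probe, db, threshold=0.2):
    """One-to-many search: return every entry within `threshold` Euclidean
    distance of the probe, nearest first. This differs from one-to-one
    verification, which compares the probe against a single claimed identity."""
    hits = []
    for name, vec in db.items():
        dist = math.dist(probe, vec)
        if dist <= threshold:
            hits.append((dist, name))
    return [name for dist, name in sorted(hits)]

print(search((0.88, 0.12, 0.32), DATABASE))  # ['license_001', 'mugshot_117']
```

At the scale of 117 million entries, even a tiny false-match rate at the chosen threshold surfaces many innocent "candidates" -- one reason the report treats dragnet searches differently from field ID checks.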


Artificial Intelligence's White Guy Problem

#artificialintelligence

According to some prominent voices in the tech world, artificial intelligence presents a looming existential threat to humanity: Warnings by luminaries like Elon Musk and Nick Bostrom about "the singularity" -- when machines become smarter than humans -- have attracted millions of dollars and spawned a multitude of conferences. But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces. We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future.
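The training-data point can be demonstrated with a toy model. In the sketch below (all numbers invented, and a nearest-neighbour matcher standing in for a deep face model), one group has dense training coverage and the other sparse; accuracy on identical held-out photos diverges sharply:

```python
# Toy illustration of dataset bias. Each "photo" is reduced to one
# scalar feature in [0, 1); the skewed counts mirror a training set
# that is overwhelmingly made up of one group.
train = {
    "A": [i / 90 for i in range(90)],  # group A: 90 examples, dense coverage
    "B": [i / 3 for i in range(3)],    # group B: only 3 examples
}

def recognized(x, group, tol=0.06):
    """A photo counts as 'recognized' if some training photo of its
    group lies within `tol` of it in feature space."""
    return min(abs(x - t) for t in train[group]) <= tol

test_points = [i / 20 for i in range(20)]  # same held-out photos for both groups
acc = {g: sum(recognized(x, g) for x in test_points) / 20 for g in train}
print(acc)  # -> {'A': 1.0, 'B': 0.3}
```

Nothing in the matcher itself is biased; the error gap comes entirely from which group the training data covers well, which is exactly the failure mode the op-ed warns about.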


A beauty contest was judged by AI and the robots didn't like dark skin

#artificialintelligence

Beauty.AI – which was created by a "deep learning" group called Youth Laboratories and supported by Microsoft – relied on large datasets of photos to build an algorithm that assessed beauty. Nearly all of the winners it picked were white, despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa. Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion. A ProPublica investigation earlier this year found that software used to predict future criminals is biased against black people, which can lead to harsher sentencing.


Microsoft builds new AI bot to ignore Hitler

#artificialintelligence

Microsoft (MSFT) released CaptionBot a few weeks after its disastrous social experiment with Tay, an automated chat program designed to talk like a teen, for whose racist tweets the company said it was "deeply sorry." In addition to ignoring pictures of Hitler, CaptionBot also seemed to refuse to identify people like Osama bin Laden. It did, however, identify other Nazi leaders like Josef Mengele and Joseph Goebbels. Generally speaking, bots are software programs designed to hold conversations with people about data-driven tasks, such as managing schedules or retrieving data and information.


CaptionBot is Microsoft's latest AI experiment - and at least it isn't racist

#artificialintelligence

After the somewhat awkward experience last month of having an AI Twitter bot go full-on racist within a few hours of interacting with humans, Microsoft have released a new AI experiment on to the internet: CaptionBot. Of course, we immediately tested CaptionBot by uploading pictures from Doctor Who, with admittedly very little success. The internet has also wasted no time testing the limits of the technology, uploading pictures of Hitler to see whether they can make CaptionBot racist, or making it the butt of some pretty good jokes:

"Microsoft's image captioning tool sees through the so-called 'moon landings' https://t.co/WWr7O1XeE3"

"I was hoping to get a definite answer from https://t.co/b5DYRwRxWz"

On the ranking of slightly dodgy AIs, not recognising K-9 is a significant step up from "genocidal racist", so congratulations Microsoft.