Civil Rights & Constitutional Law


Racist algorithms: how Big Data makes bias seem objective

#artificialintelligence

What's worse is the way that machine learning magnifies these problems. If an employer only hires young applicants, a machine learning algorithm will learn to screen out all older applicants without anyone having to tell it to do so. I recently attended a meeting about some preliminary research on "predictive policing," which uses these machine learning algorithms to allocate police resources to likely crime hotspots. With more engineers participating in policy debates and more policymakers who understand algorithms and big data, both government and civil society organizations will be stronger.
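To make the mechanism concrete, here is a minimal sketch (my own illustration with synthetic data, not from the article) of how a model trained on biased hiring records picks up the bias on its own; the scikit-learn classifier and all variable names are assumptions for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(22, 65, n)     # applicant age
skill = rng.normal(0, 1, n)      # job-relevant qualification

# Historical labels from a biased employer: mostly young applicants were hired,
# largely regardless of skill. No one tells the model to use age.
hired = ((age < 35) & (skill > -1)).astype(int)

model = LogisticRegression(max_iter=1000).fit(np.column_stack([age, skill]), hired)
print("age coefficient:", model.coef_[0][0])
# The coefficient on age comes out strongly negative: the model has learned
# to screen out older applicants simply by imitating historical decisions.
```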


International Conference on Artificial Intelligence and Information

#artificialintelligence

Submissions: We invite submissions for a 30-minute presentation (followed by a 10-minute discussion). An extended abstract of approximately 250-500 words should be prepared for blind review and include a cover page with full name, institution, contact information, and a short bio. Files should be submitted in Word (.doc or .docx) format. Please structure the subject line of the message as follows: "First Name Last Name - Track - Title of Abstract". We intend to produce a collected volume based upon contributions to the conference.


Using Sound and Artificial Intelligence to Detect Human Rights Violations

#artificialintelligence

But video footage poses a "Big Data" challenge to human rights organizations. To take on this Big Data challenge, Jay and team have developed a new machine learning-based audio processing system that "enables both synchronization of multiple audio-rich videos of the same event, and discovery of specific sounds (such as wind, screaming, gunshots, airplane noise, music, and explosions) at the frame level within a video." I've been following Jay's applied research for many years now and continue to be a fan of his approach given the overlap with my own work in the use of machine learning to make sense of the Big Data generated during major natural disasters. Effective cross-disciplinary collaboration between computer scientists and human rights (or humanitarian) practitioners is really hard but absolutely essential.
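As a rough illustration of one building block such a system needs (not the team's actual pipeline), here is a sketch of estimating the time offset between two recordings of the same event by cross-correlating their audio; the function name and synthetic signals are my own:

```python
import numpy as np
from scipy.signal import correlate

def estimate_offset(audio_a, audio_b, sample_rate):
    """Where the shared signal falls in audio_a minus where it falls in audio_b, in seconds."""
    corr = correlate(audio_a, audio_b, mode="full")
    lag = np.argmax(corr) - (len(audio_b) - 1)   # best alignment, in samples
    return lag / sample_rate

# Synthetic demo: the same one-second "event" captured by a second microphone
# half a second later.
sr = 16000
event = np.random.default_rng(1).normal(size=sr)
mic_a = np.concatenate([event, np.zeros(sr)])
mic_b = np.concatenate([np.zeros(sr // 2), event, np.zeros(sr // 2)])
print(estimate_offset(mic_a, mic_b, sr))   # ~ -0.5: the event occurs half a second earlier in mic_a
```

Discovery of specific sounds at the frame level would then sit on top of an alignment like this, presumably with a classifier labeling short windows of audio.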


Princeton researchers discover why AI becomes racist and sexist

#artificialintelligence

Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. As an example, Caliskan made a video in which she shows how the Google Translate AI mistranslates words into English based on stereotypes it has learned about gender. Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was full of latent truths as well. "Language reflects facts about the world," Caliskan told Ars.
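The core WEAT quantity is simple: a differential association between a word and two attribute sets, measured with cosine similarity. Below is a minimal sketch with tiny made-up vectors standing in for real word embeddings (the flower/insect pairing mirrors the test's benign baseline case; the 2-d vectors are invented for illustration):

```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): how much more strongly w associates with set A than set B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

# Toy 2-d "embeddings"; real tests use vectors learned from large text corpora.
emb = {
    "flower":     np.array([0.9, 0.1]),
    "insect":     np.array([0.1, 0.9]),
    "pleasant":   np.array([1.0, 0.0]),
    "unpleasant": np.array([0.0, 1.0]),
}
A, B = [emb["pleasant"]], [emb["unpleasant"]]
print(association(emb["flower"], A, B))   # positive: "flower" leans pleasant
print(association(emb["insect"], A, B))   # negative: "insect" leans unpleasant
```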


Algorithms aren't racist. Your skin is just too dark.

#artificialintelligence

Lately, I have been in the press discussing the need for more inclusive artificial intelligence and more representative data sets. One way to deal with the challenges of illumination is by training a facial detection system on a set of diverse images with a variety of lighting conditions. My face is visible to a human eye as is the face of my demonstration partner, but the human eye and the visual cortex that processes its input are far more advanced than a humble web camera. Who has to take extra steps to make technology work?
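A minimal sketch of what that training-set fix might look like, assuming float images scaled to [0, 1] (the function names and gamma values are illustrative, not any specific system's settings):

```python
import numpy as np

def vary_lighting(image, gamma, gain=1.0):
    """Gamma-correct a float image in [0, 1] to simulate brighter or darker lighting."""
    return np.clip(gain * image ** gamma, 0.0, 1.0)

def augment_lighting(images, gammas=(0.5, 0.8, 1.0, 1.25, 2.0)):
    # Each source face yields several copies under different illumination,
    # so the detector is never trained on a single lighting condition.
    return [vary_lighting(img, g) for img in images for g in gammas]
```

Augmentation only helps alongside a genuinely diverse set of faces; varying the lighting of a homogeneous dataset does not make it representative.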


Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case

#artificialintelligence

One of the biggest civil liberties issues raised by technology today is whether, when, and how we allow computer algorithms to make decisions that affect people's lives. And bad data produces bad results. Idaho's Medicaid bureaucracy was making arbitrary and irrational decisions with big impacts on people's lives, and fighting efforts to make it explain how it was reaching those decisions. As our technological train hurtles down the tracks, we need policymakers at the federal, state, and local level who have a good understanding of the pitfalls involved in using computers to make decisions that affect people's lives.
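The "bad data produces bad results" point can be made concrete with a deliberately tiny, hypothetical example (this is not Idaho's actual formula): a benefit calculation that silently turns a missing assessment into a zero.

```python
def weekly_care_hours(record):
    # Hypothetical budget rule: hours scale with an assessed need score.
    # A missing score silently defaults to 0 -- and so does the benefit.
    return 2.0 * record.get("need_score", 0)

records = [
    {"id": 1, "need_score": 20},   # properly assessed
    {"id": 2},                     # assessment lost to a data-entry error
]
for r in records:
    print(r["id"], weekly_care_hours(r))   # person 2 gets 0 hours
# The arbitrary outcome is caused entirely by bad input data, and it stays
# invisible unless the system can explain how it reached the number.
```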


Facebook user convicted for Liking 'defamatory' comments after vegan street food festival row

The Independent

The comments, which were posted in 2015, accused Erwin Kessler, the president of animal rights group Verein gegen Tierfabriken, of racism and anti-Semitism.


When algorithms are racist

The Guardian

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. When I was a computer science undergraduate I was working on social robotics – the robots used computer vision to detect the humans they socialised with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people. Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder whether, if those admissions decisions had been made by algorithms, you might not have ended up where you are?


Emerging Ethical Concerns In the Age of Artificial Intelligence

#artificialintelligence

Science fiction novels have long delighted readers by grappling with futuristic challenges like the possibility of artificial intelligence so difficult to distinguish from human beings that people naturally ask, "Should these sophisticated computer programs be considered human?" Tech industry luminaries such as Tesla CEO Elon Musk have recently endorsed concepts like a guaranteed minimum income or universal basic income. Bill Gates recently made headlines with a proposal to impose a "robot tax" -- essentially, a tax on automated solutions to account for the social costs of job displacement. Technology challenges our conception of human rights in other ways, as well.


Future of Humanity Institute

#artificialintelligence

The Future of Humanity Institute (FHI) will be joining the Partnership on AI, a non-profit organisation founded by Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft, with the goal of formulating best practices for socially beneficial AI development. We will be joining the Partnership alongside technology firms like Sony as well as third sector groups like Human Rights Watch, UNICEF, and our partners in Cambridge, the Leverhulme Centre for the Future of Intelligence. The Partnership on AI is organised around a set of thematic pillars, including safety-critical AI; fair, transparent, and accountable AI; and AI and social good. FHI will focus its work on the first of these pillars: safety-critical AI. The full list of new partners includes the AI Forum of New Zealand (AIFNZ), Allen Institute for Artificial Intelligence (AI2), Centre for Democracy & Technology (CDT), Centre for Internet and Society, India (CIS), Cogitai, Data & Society Research Institute (D&S), Digital Asia Hub, eBay, Electronic Frontier Foundation (EFF), Future of Humanity Institute (FHI), Future of Privacy Forum (FPF), Human Rights Watch (HRW), Intel, Leverhulme Centre for the Future of Intelligence (CFI), McKinsey & Company, SAP, Salesforce.com,