Civil Rights & Constitutional Law


The Age of Thinking Machines

#artificialintelligence

We live in the greatest time in human history. Only 200 years ago, for most Europeans, life was a struggle rather than a pleasure. Without antibiotics and hospitals, any infection could be fatal. Only a small elite of citizens lived in the cities in relative prosperity. Freedom of opinion and human and civil rights were a distant prospect. Voting rights and decision-making were reserved for a class consisting of the nobility, the clergy, the military and rich citizens. The interests of the general population were virtually ignored.


We can't just regulate -- we must teach our AIs values

#artificialintelligence

International human rights attorney and author Flynn Coleman has worked with the United Nations, the United States federal government, and international corporations and human rights organizations around the world. Coleman has written extensively on issues of global citizenship, the future of work and purpose, political reconciliation, war crimes, genocide, human and civil rights, humanitarian issues, innovation and design for social impact, and improving access to justice and education. She lives in New York City. A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are is her first book.


"Biologically inspired" A.I can beat the world's strictest internet censorship

#artificialintelligence

Countries like China, Iran and Russia are known for strictly censoring what their citizens can see on the internet. These authoritarian governments do this to control their people and protect those in power. It can be very difficult, and often dangerous, to try to get around this, but a new tool looks like it could be the best way to beat censorship in these kinds of oppressive countries. Researchers at the University of Maryland have developed a kind of AI that they've named Geneva, which stands for "Genetic Evasion." This AI uses a kind of machine learning to automatically detect bugs and gaps in a country's censorship system so the user can view uncensored content.
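The core idea behind Geneva is a genetic algorithm: candidate packet-manipulation strategies are mutated and recombined, and the ones that best slip content past the censor survive to the next generation. The Python sketch below only illustrates that evolutionary loop; the action names and the fitness function are placeholders for this article, not Geneva's actual strategy grammar or its live network harness.

```python
# Illustrative sketch of "genetic evasion": evolve short sequences of packet
# manipulations and keep the ones that score best against a (simulated) censor.
import random

ACTIONS = ["duplicate", "drop", "tamper_ttl", "tamper_checksum", "fragment"]

def random_strategy(max_len=4):
    """A strategy is a short sequence of packet-level manipulations."""
    return [random.choice(ACTIONS) for _ in range(random.randint(1, max_len))]

def fitness(strategy):
    """Placeholder: a real system would replay the strategy against a live
    censor and measure whether the forbidden content actually got through."""
    score = random.random()              # stand-in for a noisy network trial
    return score - 0.05 * len(strategy)  # mildly prefer shorter strategies

def mutate(strategy):
    s = list(strategy)
    if random.random() < 0.5 and len(s) > 1:
        s.pop(random.randrange(len(s)))                        # drop an action
    else:
        s.insert(random.randrange(len(s) + 1), random.choice(ACTIONS))
    return s

def crossover(a, b):
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return a[:cut_a] + b[cut_b:]

def evolve(pop_size=20, generations=30):
    population = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: pop_size // 2]                    # keep fitter half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print("best strategy found:", evolve())
```

In the real tool the expensive step is the fitness evaluation, which requires probing an actual censorship system; the evolutionary bookkeeping itself is as simple as the loop above.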


Highlights: Addressing fairness in the context of artificial intelligence

#artificialintelligence

When society uses artificial intelligence (AI) to help make judgments about individuals, fairness and equity are critical considerations. On Nov. 12, Brookings Fellow Nicol Turner-Lee sat down with Solon Barocas of Cornell University, Natasha Duarte of the Center for Democracy & Technology, and Karl Ricanek of the University of North Carolina Wilmington to discuss artificial intelligence in the context of societal bias, technological testing, and the legal system. Artificial intelligence is an element of many everyday services and applications, including electronic devices, online search engines, and social media platforms. In most cases, AI provides positive utility for consumers--such as when machines automatically detect credit card fraud or help doctors assess health care risks. However, there is a smaller set of cases, such as when AI helps inform decisions on credit limits or mortgage lending, where the technology has a higher potential to amplify historical biases.


Opinion: Worried about how facial recognition technology is being used? You should be

#artificialintelligence

If you're worried about how facial recognition technology is being used, you should be. And things are about to get a lot scarier unless new regulation is put in place. Already, this technology is being used in many U.S. cities and around the world. Rights groups have raised alarm about its use to monitor public spaces and protests, to track and profile minorities, and to flag suspects in criminal investigations. The screening of travelers, concertgoers and sports fans with the technology has also sparked privacy and civil liberties concerns.


To avoid bias, AI needs to 'explain' itself

#artificialintelligence

Can a credit card be sexist? It's not a question most people would have thought about before this week, but on Monday, state regulators in New York announced an investigation into claims of gender discrimination by Apple Card. The algorithms Apple Card uses to set credit limits are, it has been reported, biased against women. Tech entrepreneur David Heinemeier Hansson (@DHH) claimed that the card offered him 20 times the credit limit it offered his wife, even though she had the better credit score, while Apple's own co-founder Steve Wozniak went to Twitter with a similar story, even though he and his wife share bank accounts and assets. Goldman Sachs, the New York bank that backs the Apple Card, released a statement rejecting the assertion, saying that when it comes to assessing credit, they "have not and will not make decisions based on gender."
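Allegations like these are usually probed first with a simple disparity check on outcomes, before anyone audits the model itself. The sketch below is a hypothetical illustration of one such check: a comparison of median credit limits by gender on fabricated application data. It is not how Goldman Sachs or the New York regulators actually evaluate the Apple Card.

```python
# Hypothetical disparity check on fabricated data: compare the credit limits
# granted to applicants by gender and report a simple ratio.
from statistics import median

applicants = [
    # (gender, credit_score, approved_limit_usd) -- made-up numbers
    ("F", 810, 1_000), ("M", 790, 20_000),
    ("F", 742, 3_500), ("M", 741, 9_000),
    ("F", 695, 2_000), ("M", 700, 6_500),
]

def median_limit(rows, gender):
    return median(limit for g, _, limit in rows if g == gender)

def disparity_ratio(rows):
    """Ratio of median limits (lower group / higher group).
    Values well below 1.0 flag a disparity worth investigating."""
    f, m = median_limit(rows, "F"), median_limit(rows, "M")
    return min(f, m) / max(f, m)

print(f"median limit F: {median_limit(applicants, 'F'):,}")
print(f"median limit M: {median_limit(applicants, 'M'):,}")
print(f"disparity ratio: {disparity_ratio(applicants):.2f}  (four-fifths rule threshold: 0.80)")
```

The 0.80 threshold echoes the "four-fifths rule" from US disparate-impact guidance; a real audit would also control for legitimate underwriting factors such as income and credit history before drawing any conclusion about bias.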


Hikvision Markets Uyghur Ethnicity Analytics, Now Covers Up

#artificialintelligence

Hikvision has marketed an AI camera on its China website that automatically identifies Uyghurs, covering it up only days ago after IPVM questioned the company about it. This AI technology allows the PRC to automatically track Uyghur people, one of the world's most persecuted minorities. Hikvision's product description states the camera supports Uyghur recognition (screenshot via Google Translate): Capable of analysis on target personnel's sex (male, female), ethnicity (such as Uyghurs, Han) and color of skin (such as white, yellow, or black), whether the target person wears glasses, masks, caps, or whether he has beard, with an accuracy rate of no less than 90%. By April 2019, Hikvision was well aware of the human rights issues surrounding Xinjiang; that same month, it disclosed in its ESG report that it had "recently commissioned an internal review" on the matter. The PRC officially recognizes 56 ethnic groups, which the Chinese ambassador recently described as being 'part of big family of Chinese nation'.


Researchers develop AI tool to evade Internet censorship

#artificialintelligence

Internet censorship is an effective strategy used by authoritarian governments to limit access to information available online, control freedom of expression, and prevent rebellion and discord. According to the 2019 Freedom House report, India and China are at the forefront of Internet censorship and are declared the worst abusers of digital freedom. Meanwhile, Internet freedom has declined considerably in recent years in the US, Brazil, Sudan, and Kazakhstan. When a country curbs Internet freedom, activists need to find ways to evade it. However, they may no longer need to search for those workarounds manually now that "Geneva" is here.


Apple Card controversy: Artificial intelligence learned its gender bias from Silicon Valley, tech expert says

#artificialintelligence

Catalyst president and CEO Lorraine Hariton, who works with Fortune 500 companies to eliminate bias in their technology and systems, gives her thoughts on the controversy surrounding gender and the new Apple Card's algorithm. She says artificial intelligence can become biased if leaders and teams aren't diverse and inclusive. The Apple Card gender bias allegation is a lesson for Silicon Valley, which has long struggled with sexism, according to one tech expert. Apple made headlines Sunday when the artificial intelligence algorithm behind its new Apple Card, offered in partnership with Goldman Sachs, was accused of gender discrimination after Apple co-founder Steve Wozniak and another male tech entrepreneur said they received much higher lines of credit than their wives did. Hariton joined FOX Business' Liz Claman on Friday and said she was not surprised by the accusation.