civil rights & constitutional law

Geena Davis announces 'Spellcheck for Bias' tool to redress gender imbalance in movies


Actor and equality campaigner Geena Davis has announced that Disney has adopted a digital tool that will analyse scripts and identify opportunities to rectify any gender and ethnic biases. Davis, founder of the Geena Davis Institute on Gender in Media, was speaking at the Power of Inclusion event in New Zealand, where she outlined the development of GD-IQ: Spellcheck for Bias, a machine learning tool described as "an intervention tool to infuse diversity and inclusion in entertainment and media". Developed by the University of Southern California Viterbi School of Engineering, the Spellcheck for Bias is designed to analyse a script and determine the percentages of characters' "gender, race, LGBTQIA [and] disabilities". It can also track the percentage of "non-gender-defined speaking characters". Davis said that Disney had partnered with her institute to pilot the project: "We're going to collaborate with Disney over the next year using this tool to help their decision-making [and] identify opportunities to increase diversity and inclusion in the manuscripts that they receive. We're very excited about the possibilities with this new technology and we encourage everybody to get in touch with us and give it a try."

AI in 2019: A Year in Review


Some US airlines are now even using facial recognition instead of boarding passes, claiming it's more convenient. There has also been wider use of affect recognition, a subset of facial recognition, which claims to 'read' our inner emotions by interpreting the micro-expressions on our faces. As psychologist Lisa Feldman Barrett showed in an extensive survey paper, this type of AI phrenology has no reliable scientific foundation. But it's already being used in classrooms and job interviews -- often without people's knowledge. For example, documents obtained by the Georgetown Center on Privacy and Technology revealed that the FBI and ICE have been quietly accessing driver's license databases, conducting facial-recognition searches on millions of photos without the consent of individuals or authorization from state or federal lawmakers.

Oh dear... AI models used to flag hate speech online are, er, racist against black people


The internet is filled with trolls spewing hate speech, but machine learning algorithms can't help us clean up the mess. A paper by computer scientists at the University of Washington, Carnegie Mellon University, and the Allen Institute for Artificial Intelligence found that machines were more likely to flag tweets from black people as offensive than tweets from white people. It all boils down to subtle differences in language. African-American English (AAE), often spoken in urban communities, is peppered with racial slang and profanities. But even if they contain what appear to be offensive words, the messages themselves often aren't abusive.

De-biasing language


In a recent paper, Hila Gonen and Yoav Goldberg argue that methods for de-biasing language models aren't effective; they make bias less apparent, but don't actually remove it. De-biasing might even make bias more dangerous by hiding it, rather than leaving it out in the open. The toughest problems are often the ones you only think you've solved. Language models are built on "word embeddings": vector representations of words learned from large bodies of human text, in which words that appear in similar contexts end up with similar vectors.
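The kind of "hard de-biasing" Gonen and Goldberg analyze can be sketched in a few lines: identify a gender direction from a definitional word pair and project it out of other word vectors. The toy 4-dimensional vectors below are purely illustrative, not real embeddings; the point is only to show the mechanics, and why removing the direct projection doesn't guarantee the bias is gone.

```python
import numpy as np

# Toy 4-d "embeddings"; real embeddings are ~300-d vectors trained on corpora.
# These numbers are invented for illustration.
emb = {
    "he":     np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":    np.array([-1.0, 0.2, 0.1, 0.0]),
    "doctor": np.array([ 0.4, 0.9, 0.3, 0.1]),
    "nurse":  np.array([-0.5, 0.8, 0.4, 0.1]),
}

def unit(v):
    return v / np.linalg.norm(v)

def cos(a, b):
    return float(np.dot(unit(a), unit(b)))

# Gender direction: the normalized difference of a definitional pair.
g = unit(emb["he"] - emb["she"])

def debias(v, direction):
    """Hard de-biasing: subtract v's component along the bias direction."""
    return v - np.dot(v, direction) * direction

# Before projection, "nurse" leans toward "she" along g; afterwards its
# projection on g is zero. But, as Gonen and Goldberg show, indirect bias
# (e.g. "nurse" still clustering near other female-associated words in the
# remaining dimensions) can survive this operation.
for w in ("doctor", "nurse"):
    v = debias(emb[w], g)
    print(w, round(cos(emb[w], g), 3), round(cos(v, g), 3))
```

The second printed number is (numerically) zero for every word, which is exactly why the bias looks removed when measured along the gender direction alone; Gonen and Goldberg's clustering experiments probe the dimensions this projection leaves untouched.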

Flynn Coleman - A Human Algorithm


The Age of Intelligent Machines is upon us, and we are at an inflection point. The proliferation of fast-moving technologies, including forms of artificial intelligence, will cause us to confront profound questions about ourselves. The era of human intellectual superiority is ending, and, as a species, we need to plan for this monumental shift. A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are examines the immense impact intelligent technology will have on humanity. These machines, while challenging our personal beliefs and our socioeconomic world order, also have the potential to transform our health and well-being, alleviate poverty and suffering, and reveal the mysteries of intelligence and consciousness.

UK passport program uses AI to create a virtual speed-line for white people


The lighter your skin, the better AI-powered facial recognition systems work for you. The UK Home Office knows this, because the government's been briefed several times on the problem. And a recent report shows that it knew it was developing a passport program built on biased, racist AI. The UK's passport program went live in 2016. It uses an AI-powered facial recognition feature to determine whether user-uploaded photos meet the requirements and standards for use as a passport photo.

Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter


Microsoft's attempt at engaging millennials with artificial intelligence has backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist. The company launched a verified Twitter account for "Tay" – billed as its "AI fam from the internet that's got zero chill" – early on Wednesday. The chatbot, targeted at 18- to 24-year-olds in the US, was developed by Microsoft's technology and research and Bing teams to "experiment with and conduct research on conversational understanding". "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," Microsoft said. "The more you chat with Tay the smarter she gets."

CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment


The European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has adopted the first European text setting out ethical principles relating to the use of artificial intelligence (AI) in judicial systems. The Charter provides a framework of principles that can guide policy makers, legislators and justice professionals as they grapple with the rapid development of AI in national judicial processes. The CEPEJ's view as set out in the Charter is that the application of AI in the field of justice can contribute to improving the efficiency and quality of justice, and must be implemented in a responsible manner that complies with the fundamental rights guaranteed in particular by the European Convention on Human Rights (ECHR) and the Council of Europe Convention on the Protection of Personal Data. For the CEPEJ, it is essential to ensure that AI remains a tool in the service of the general interest and that its use respects individual rights. One of the Charter's principles, "under user control", precludes a prescriptive approach and ensures that users are informed actors, in control of their choices.

Could blacklisting China's AI champions backfire?


Just over two years ago, China announced an audacious plan to overtake the US and lead the "world in AI [artificial intelligence] technology and applications by 2030". It is already widely regarded as having overtaken the EU in many respects. But now its plans may be knocked off course by the US restricting certain Chinese companies from buying technologies developed or manufactured in the States. Washington's justification is that the organisations involved have made products used to commit human rights abuses against China's Muslim ethnic minorities. But it is notable that those on its blacklist include many of China's official "national AI champions". Like the telecoms firm Huawei before them, they now face major disruption as a consequence of the Trump administration's intervention.

Rashida Tlaib calls for ban on facial recognition tech after telling Detroit police to hire only black analysts

FOX News

Police chief calls Tlaib's comments racist; Democratic strategist Monique Pressley and Blexit Movement founder Candace Owens react. Rep. Rashida Tlaib, D-Mich., last week responded to backlash after she told Detroit police to hire only black facial recognition analysts, writing in a scathing op-ed that her comments were neither "racist" nor "inappropriate" and pushing further for a total ban on the technology used to identify criminal suspects. "I'm going to call out every injustice I see. It's probably what makes most people uncomfortable when I speak the truth," Tlaib wrote in an op-ed in The Detroit News. "My comments weren't racist, out of order, or 'inappropriate.' It is inappropriate to implement a broken, flawed and racist technology that doesn't recognize black and brown faces in a city that is over 80% black."