'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement

#artificialintelligence

Timnit Gebru--a giant in the world of AI and then co-lead of Google's AI ethics team--was pushed out of her job in December. Gebru had been fighting with the company over a research paper that she'd coauthored, which explored the risks of the large language models that the search giant uses to power its core products--the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to train them. Google wanted the paper retracted, or any Google-affiliated authors' names taken off; Gebru said she would remove her name if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru's departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff, despite Gebru's credentials and history of groundbreaking research.


Oxford Handbook on AI Ethics Book Chapter on Race and Gender

arXiv.org Artificial Intelligence

From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people's lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural language processing tools trained on newspapers exhibit societal biases (e.g. completing the analogy "Man is to computer programmer as woman is to X" with "homemaker"). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision-making tools than those in the upper class. Thus, these tools are most often used on the people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach is needed. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups subjected to these tools.
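
The analogy result mentioned in the abstract comes from probing pretrained word embeddings with vector arithmetic. Below is a minimal sketch of such a probe, assuming the pretrained word2vec-google-news-300 vectors loaded through gensim's downloader; the model choice and token names are illustrative, not necessarily the exact setup of the cited studies.

import gensim.downloader as api

# Load pretrained word vectors (a large download on first use).
model = api.load("word2vec-google-news-300")

# "Man is to computer programmer as woman is to X" becomes the vector
# query X ~ computer_programmer - man + woman; most_similar ranks
# vocabulary words by cosine similarity to that point.
results = model.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
for word, score in results:
    print(word, round(score, 3))

# Bolukbasi et al. (2016) reported "homemaker" as the top completion
# for a query of this form on these embeddings.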


The Dark Side of Big Tech's Funding for AI Research

WIRED

Last week, prominent Google artificial intelligence researcher Timnit Gebru said she was fired by the company after managers asked her to retract a research paper or withdraw her name from it, and she objected. Google maintains that she resigned, and Alphabet CEO Sundar Pichai said in a company memo on Wednesday that he would investigate what happened. The episode is a pointed reminder of tech companies' influence and power over their field. Big companies pump out influential research papers, fund academic conferences, compete to hire top researchers, and own the data centers required for large-scale AI experiments. A recent study found that the majority of tenure-track faculty at four prominent universities who disclose their funding sources had received backing from Big Tech.


Black Feminist Musings on Algorithmic Oppression

arXiv.org Artificial Intelligence

This paper unapologetically reflects on the critical role that Black feminism can and should play in abolishing algorithmic oppression. Positioning algorithmic oppression in the broader field of feminist science and technology studies, I draw upon feminist philosophical critiques of science and technology and discuss histories and continuities of scientific oppression against historically marginalized people. Moreover, I examine the concepts of invisibility and hypervisibility in oppressive technologies à la the canonical double bind. Furthermore, I discuss what it means to call for diversity as a solution to algorithmic violence, and I critique dialectics of the fairness, accountability, and transparency community. I end by inviting you to envision and imagine the struggle to abolish algorithmic oppression by abolishing oppressive systems and shifting algorithmic development practices, including engaging our communities in scientific processes, centering marginalized communities in design, and consensual data and algorithmic practices.


Google workers reject company's account of AI researcher's exit as anger grows

The Guardian

Outcry is growing within Google over the treatment of the AI ethics researcher Timnit Gebru, with Gebru's colleagues challenging the company's account of her exit in an open letter. In a letter posted on Monday on Medium, Gebru's colleagues disputed an executive's claim that she had resigned and called internal research policies into question. "Dr Gebru did not resign, despite what Jeff Dean (Senior Vice President and head of Google Research) has publicly stated," the letter reads, before going into detail about the events that led to Gebru's dismissal. Gebru, a Black female scientist who is highly respected in her field, said on Twitter last week that she had been fired after sending an email to an internal company group for women and allies, expressing frustration over discrimination at Google and over a dispute about one of her papers, which was retracted after initially being approved for publication. The paper in question examined the ethical issues associated with AI language technology and reportedly mentioned Google's own software, which is important to the company's business model.