From whistleblower laws to unions: How Google's AI ethics meltdown could shape policy


It's been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and, more recently, large language models. Of course, this incident didn't happen in a vacuum. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and for the retaliatory firing of employees interested in unionizing. Gebru's dismissal also calls into question issues of corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history. In an interview with VentureBeat last week, Gebru called the way she was fired disrespectful and described a companywide memo sent by CEO Sundar Pichai as "dehumanizing." To delve further into possible outcomes following Google's AI ethics meltdown, VentureBeat spoke with five experts in the field about Gebru's dismissal and the issues it raises.

Google employee group urges Congress to strengthen whistleblower protections for AI researchers


Google's decision to fire its AI ethics leaders is a matter of "urgent public concern" that merits strengthening laws to protect AI researchers and tech workers who want to act as whistleblowers. That's according to a letter published by Google employees today in support of the Ethical AI team at Google and former co-leads Margaret Mitchell and Timnit Gebru, whom Google fired two weeks ago and in December 2020, respectively. Firing Gebru, one of the best-known Black female AI researchers in the world and one of the few Black women at Google, drew public opposition from thousands of Google employees. It also led critics to claim the incident may have "shattered" Google's Black talent pipeline and signaled the collapse of AI ethics research in corporate environments. "We must stand up together now, or the precedent we set for the field -- for the integrity of our own research and for our ability to check the power of big tech -- bodes a grim future for us all," reads the letter published by the group Google Walkout for Change.

AI Weekly: Facebook, Google, and the tension between profits and fairness


This week, we learned a lot more about the inner workings of AI fairness and ethics operations at Facebook and Google and how things have gone wrong. On Monday, a Google employee group wrote a letter asking Congress and state lawmakers to pass legislation to protect AI ethics whistleblowers. That letter cites VentureBeat reporting about the potential policy outcomes of Google firing former Ethical AI team co-lead Timnit Gebru. It also cites research by UC Berkeley law professor Sonia Katyal, who told VentureBeat, "What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential."

'Information gap' between AI creators and policymakers needs to be resolved - report - AI News


An article posted by the World Economic Forum (WEF) has argued there is a "huge gap in understanding" between policymakers and AI creators. The report, authored by Adriana Bora, AI policy researcher and project manager at The Future Society, and David Alexandru Timis, outgoing curator at Brussels Hub, explores how to resolve accountability and trust-building issues with AI technology. Bora and Timis note there is "a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI's development and deployment cycle." As a result, the two add, this governance "needs to be designed under continuous dialogue utilising multi-stakeholder and interdisciplinary methodologies and skills." In plain terms, both sides need to learn to speak the same language.

'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement


Timnit Gebru--a giant in the world of AI and then co-lead of Google's AI ethics team--was pushed out of her job in December. Gebru had been fighting with the company over a research paper she'd coauthored, which explored the risks of the AI models the search giant uses to power its core products--the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors' names taken off; Gebru said she would comply if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru's departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff--despite Gebru's credentials and history of groundbreaking research.