

A Lesson from Google: Can AI Bias be Monitored Internally?

#artificialintelligence

BRIAN KENNY: Revolutions often have humble origins: a small group with big ideas gathering to plant seeds of disruption. So it was in the dog days of summer in 1956, when 10 academics gathered on the campus of Dartmouth College to discuss how to make machines use language and form abstractions and concepts to solve the kinds of problems then reserved for humans. The conference led to the founding of a new field of study: artificial intelligence. Six decades hence, we are in the midst of an AI revolution that is already dramatically changing entire sectors like healthcare, transportation, education, banking, and retail. But AI is not without its critics. Elon Musk famously said that "with artificial intelligence, we're summoning the demon," while Stephen Hawking believed the development of full artificial intelligence could spell the end of the human race. So whose job is it to make sure that such a vision never comes to pass? Today on Cold Call, we've invited Professor Tsedal Neeley to discuss her case entitled "Timnit Gebru: Silenced No More on AI Bias and the Harms of Large Language Models." Tsedal Neeley's work focuses on how leaders can scale their organizations by developing and implementing global and digital strategies.




Google made AI language the centerpiece of I/O while ignoring its troubled past at the company

#artificialintelligence

Yesterday at Google's I/O developer conference, the company outlined ambitious plans for its future built on a foundation of advanced language AI. These systems, said Google CEO Sundar Pichai, will let users find information and organize their lives by having natural conversations with computers. All you need to do is speak, and the machine will answer. But for many in the AI community, there was a notable absence in this conversation: Google's response to its own research examining the dangers of such systems. In December 2020 and February 2021, Google first fired Timnit Gebru and then Margaret Mitchell, co-leads of its Ethical AI team. The story of their departure is complex but was triggered by a paper the pair co-authored (with researchers outside Google) examining risks associated with the language models Google now presents as key to its future.


Meet The Black Women Trying to Fix AI

#artificialintelligence

It's no secret that artificial intelligence, algorithms, and big data have a problem with gender and racial bias. These systems can be biased based on who builds them, how they're developed, and how they're ultimately used. Trying to solve the problem is a community of Black data scientists, researchers, and organizations. This article highlights the Black women amongst their ranks, who are exposing algorithmic biases, empowering communities of color with data, and arguing for more diverse representation. Joy Buolamwini is a Ghanaian-American computer scientist based at MIT Media Lab.


On the Moral Collapse of AI Ethics

#artificialintelligence

I've had the good fortune to become friends with Timnit over the last several weeks as we've spent hours discussing the spread of mis/disinformation and hate speech on social media in Ethiopia. Our collaboration began with a frank conversation about the limitations of the AI ethics community. I felt she sincerely engaged with the critiques I raised about the representation politics of predominantly white institutions interpellating a handful of African elites as ambassadors of the Black American experience. Out of the love I got for her and this community of computer scientists, data/tech policy analysts, and academics, I feel the need to be harsh and keep it real about the moral collapse of AI ethics. If the demands for corporate transparency crystallized in the Standing with Dr. Timnit Gebru petition define the horizon for tech worker resistance, we are doomed.


The withering email that got an ethical AI researcher fired at Google

#artificialintelligence

Gebru, an alumna of the Stanford Artificial Intelligence Laboratory, is one of the leading voices on the ethical use of artificial intelligence. She is best known for her work on a landmark 2018 study showing that facial recognition software misidentified dark-skinned women as much as 35% of the time, while working with near-perfect accuracy on white men. She has also been an outspoken critic of the lack of diversity and the unequal treatment of Black workers at tech companies, particularly at Alphabet Inc.'s Google, and said she believed her dismissal was meant to send a message to the rest of Google's employees not to speak up. Platformer obtained the email Gebru sent; she herself lost access to her account after Google terminated her. It is published in full below.


A leading AI ethics researcher says she's been fired from Google

MIT Technology Review

On Thursday morning, after an outpouring of support for Gebru on social media, Dean sent an internal email to Google's AI group with his account of the situation. He said that Gebru's paper "didn't meet our bar for publication" because "it ignored too much relevant research." He also said that Gebru's conditions included "revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback." "Given Timnit's role as a respected researcher and a manager in our Ethical AI team, I feel badly that Timnit has gotten to a place where she feels this way about the work we're doing," he wrote. "I know we all genuinely share Timnit's passion to make AI more equitable and inclusive."


[N] The email that got Ethical AI researcher Timnit Gebru fired

#artificialintelligence

I had stopped writing here as you may know, after all the micro and macro aggressions and harassments I received after posting my stories here (and then of course it started being moderated). Recently however, I was contributing to a document that Katherine and Daphne were writing where they were dismayed by the fact that after all this talk, this org seems to have hired 14% or so women this year. Samy has hired 39% from what I understand, but he has zero incentive to do this. What I want to say is stop writing your documents because it doesn't make a difference. The DEI OKRs that we don't know where they come from (and are never met anyways), the random discussions, the "we need more mentorship" rather than "we need to stop the toxic environments that hinder us from progressing," the constant fighting and education at your cost, they don't matter.


Google's star AI ethics researcher, one of a few Black women in the field, says she was fired for a critical email

Washington Post - Technology News

In a companywide email Thursday, first published in Platformer, Dean urged employees to continue working on Google's diversity, equity and inclusion efforts. "Given Timnit's role as a respected researcher and a manager in our Ethical AI team, I feel badly that Timnit has gotten to a place where she feels this way about the work we're doing," he wrote. "I also feel badly that hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs. I understand the frustration about the pace of progress, but we have important work ahead and we need to keep at it."