
The Stories We Govern By: AI, Risk, and the Power of Imaginaries

Oldenburg, Ninell, Papyshev, Gleb

arXiv.org Artificial Intelligence

This paper examines how competing sociotechnical imaginaries of artificial intelligence (AI) risk shape governance decisions and regulatory constraints. Drawing on concepts from science and technology studies, we analyse three dominant narrative groups: existential risk proponents, who emphasise catastrophic AGI scenarios; accelerationists, who portray AI as a transformative force to be unleashed; and critical AI scholars, who foreground present-day harms rooted in systemic inequality. Through an analysis of representative manifesto-style texts, we explore how these imaginaries differ across four dimensions: normative visions of the future, diagnoses of the present social order, views on science and technology, and perceived human agency in managing AI risks. Our findings reveal how these narratives embed distinct assumptions about risk and have the potential to progress into policy-making processes by narrowing the space for alternative governance approaches. We argue against speculative dogmatism and for moving beyond deterministic imaginaries toward regulatory strategies that are grounded in pragmatism.


Finding an Alternate Way Forward with AI

#artificialintelligence

Artificial intelligence (AI) researcher Timnit Gebru joined Google in 2018 intent on changing the system from within, sounding an alarm about the ways technology could harm--and already is harming--underprivileged communities. After an abrupt, contentious, and highly publicized dismissal from the company, she has embraced a new role with the founding of the Distributed AI Research Institute (DAIR), a nonprofit that seeks not just to expose potential AI harms, but to support proactive research into technologies that benefit communities rather than working against them. We spoke to Gebru and DAIR Director of Research Alex Hanna (like Gebru, a former member of Google's Ethical AI team) about DAIR's work and its goals for the future.


Alex Hanna left Google to try to save AI's future

MIT Technology Review

It was a move that capped a dramatic period in Hanna's professional life. In late 2020, her manager, Timnit Gebru, had been fired from her position as the co-lead of the Ethical AI team after she wrote a paper questioning the ethics of large language models (including Google's). A few months later, Hanna's next manager, Meg Mitchell, was also shown the door. DAIR, which was founded by Gebru in late 2021 and is funded by various philanthropies, aims to challenge the existing understanding of AI through a community-focused, bottom-up approach to research. The group works remotely and includes teams in Berlin and South Africa.


A Lesson from Google: Can AI Bias be Monitored Internally?

#artificialintelligence

Revolutions often have humble origins, a small group with big ideas gathering to plant seeds of disruption. So it was in the dog days of summer in 1956, when 10 academics gathered on the campus of Dartmouth College to discuss how to make machines use language and form abstractions and concepts to solve the kinds of problems then reserved for humans. The conference led to the founding of a new field of study: artificial intelligence. Six decades later, we are in the midst of an AI revolution that is already dramatically changing entire sectors like healthcare, transportation, education, banking, and retail. But AI is not without its critics. Elon Musk famously said that, "With artificial intelligence, we're summoning the demon," while Stephen Hawking believed the development of full artificial intelligence could spell the end of the human race. So, whose job is it to make sure that such a vision never comes to pass? Today on Cold Call, we've invited Professor Tsedal Neeley to discuss her case entitled "Timnit Gebru: Silenced No More on AI Bias and The Harms of Large Language Models." Tsedal Neeley's work focuses on how leaders can scale their organizations by developing and implementing global and digital strategies.


Building Ethical Artificial Intelligence – The Markup

#artificialintelligence

As computers get more powerful, we are increasingly using them to make predictions. The software that makes these predictions is often called artificial intelligence. It's interesting that we call it "intelligence," because other tasks we assign to computers--computing huge numbers, running complex simulations--are also things that we label as "intelligence" when humans do them. For instance, my kids are graded on their intelligence at school based on their ability to do complex mathematical calculations. When we let computers project into the future and make their own decisions about what step to take next--what chess move to make, what driving route to suggest--we seem to want to call it artificial intelligence.


Timnit Gebru and the fight to make artificial intelligence work for Africa

#artificialintelligence

The way Timnit Gebru sees it, the foundations of the future are being built now. In Silicon Valley, home to the world's biggest tech companies, the artificial intelligence (AI) revolution is already well under way. Software is being written and algorithms are being trained that will determine the shape of our lives for decades or even centuries to come. If the tech billionaires get their way, the world will run on artificial intelligence. Cars will drive themselves and computers will diagnose and cure diseases. Art, music and movies will be automatically generated.


Daring to DAIR: Distributed AI Research with Timnit Gebru - #568

#artificialintelligence

Today we're joined by friend of the show Timnit Gebru, the founder and executive director of DAIR, the Distributed Artificial Intelligence Research Institute. In our conversation with Timnit, we discuss her journey to create DAIR, its goals, and some of the challenges she's faced along the way. We start in the obvious place: Timnit being "resignated" from Google after writing and publishing a paper detailing the dangers of large language models, the fallout from that paper and her firing, and the eventual founding of DAIR. We discuss the importance of the "distributed" nature of the institute, how they're going about figuring out what is in scope and out of scope for the institute's research charter, and what building an institution means to her. We also explore the importance of independent alternatives to traditional research structures, whether we should be pessimistic about the impact of internal ethics and responsible AI teams given the overwhelming power the companies that house them wield, examples she looks to of what not to do when building out the institute, and much, much more!


Timnit Gebru, AI researcher fired by Google thinks a new law is needed

#artificialintelligence

Born to Eritrean parents in Ethiopia, Gebru spoke with The Associated Press recently about how poorly Big Tech's AI priorities -- and its AI-fueled social media platforms -- serve Africa and elsewhere. The new institute focuses on AI research from the perspective of the places and people most likely to experience its harms. She's also co-founder of the group Black in AI, which promotes Black employment and leadership in the field. And she's known for co-authoring a landmark 2018 study that found racial and gender bias in facial recognition software. The interview has been edited for length and clarity.


La veille de la cybersécurité

#artificialintelligence

Timnit Gebru, the founder of the Distributed Artificial Intelligence Research Institute, called for scholars to employ more ethical approaches in artificial intelligence research at an event hosted Tuesday by the Radcliffe Institute for Advanced Study. During her talk, part of a Radcliffe lecture series on artificial intelligence, Gebru shared her vision for interdisciplinary AI and her calls for changes in academia. Gebru delivered a virtual talk to attendees, followed by a Q&A with Himabindu "Hima" Lakkaraju, an assistant professor at Harvard. The conversation began with an introduction to DAIR, which works with researchers from different backgrounds to conduct research around the world. Gebru said traditional research practices can often be "exploitative."


Timnit Gebru is part of a wave of Black women working to change AI

#artificialintelligence

A computer scientist who said she was pushed out of her job at Google in December 2020 has marked the one-year anniversary of her ouster with a new research institute aiming to support the creation of ethical artificial intelligence. Timnit Gebru, a known advocate for diversity in AI, announced the launch of the Distributed Artificial Intelligence Research Institute, or DAIR. Its website describes it as "a space for independent, community-rooted AI research free from Big Tech's pervasive influence." Part of how Gebru imagines creating such research is by moving away from the Silicon Valley ethos of "move fast and break things" -- Facebook's internal motto, coined by Mark Zuckerberg, until 2014 -- to instead take a more deliberate approach to creating new technologies that serve marginalized communities. That includes recognizing and mitigating a technology's potential for harm from the beginning of the creation process, rather than after it has already caused damage to those communities, Gebru told NBC News.