
The Hermeneutic Turn of AI: Are Machines Capable of Interpreting?

Demichelis, Remy

arXiv.org Artificial Intelligence

This article aims to show how deep learning (artificial neural networks) is disrupting our approach to computing, not only in terms of techniques but also in our interactions with machines. It also draws on the philosophical tradition of hermeneutics (Don Ihde, Wilhelm Dilthey) to highlight a parallel with this movement and to demystify the idea of human-like AI.


People shouldn't pay such a high price for calling out AI harms

MIT Technology Review

The G7 has just agreed on a (voluntary) code of conduct that AI companies should abide by, as governments seek to minimize the harms and risks created by AI systems. And later this week, the UK will be full of AI movers and shakers attending the government's AI Safety Summit, an effort to come up with global rules on AI safety. Together, these events suggest that the narrative pushed by Silicon Valley about the "existential risk" posed by AI is becoming increasingly dominant in public discourse. This is concerning, because focusing on hypothetical harms that may emerge in the future draws attention away from the very real harms AI is causing today. "Existing AI systems that cause demonstrated harms are more dangerous than hypothetical 'sentient' AI systems because they are real," writes Joy Buolamwini, a renowned AI researcher and activist, in her new memoir Unmasking AI: My Mission to Protect What Is Human in a World of Machines.


The Download: Joy Buolamwini on AI, and Meta's beauty filter lawsuit

MIT Technology Review

AI researcher and activist Joy Buolamwini is best known for a pioneering paper she co-wrote with Timnit Gebru in 2018, which exposed how commercial facial recognition systems often failed to recognize the faces of Black and brown people, especially Black women. Her research and advocacy led companies such as Google, IBM, and Microsoft to make their software less biased and to back away from selling their technology to law enforcement. Now, Buolamwini has a new target in sight: she is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies write the rules that apply to them, repeating the very mistake that has previously allowed biased and oppressive technology to thrive.


Joy Buolamwini: "We're giving AI companies a free pass"

MIT Technology Review

I can tell Buolamwini finds the cover amusing. She takes a picture of it. Times have changed a lot since 1961. In her new memoir, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini shares her life story. In many ways she embodies how far tech has come since then, and how much further it still needs to go.


Meet the artists reclaiming AI from big tech – with the help of cats, bees and drag queens

The Guardian

When I visited the Victoria and Albert Museum in London in early June, a fabulous drag cabaret was in full swing. Across seven small screens and a large wall projection, a rotating cast of performers in an array of bold looks danced and lip-synced their hearts out to banger after banger. Highlights included Freedom! 90 by George Michael, Five Years by David Bowie and Beyoncé's Sweet Dreams. Then the whole thing started again. But this wasn't just a video installation running on a loop: it was an elaborately engineered deepfake.


Why artificial intelligence needs to be on your mind in 2023

#artificialintelligence

The best advice I can give you for 2023 is to familiarize yourself with the concept of "artificial intelligence" and its impact on our everyday lives. In the wrong hands, this technology can wreak havoc on society. To me, social media platforms have been the clearest example of this: Twitter and Facebook feeds, powered by artificial intelligence and controlled by some of the world's wealthiest people, have bombarded users with politically opportunistic conspiracy theories, misinformation, and hatred. These days, I tend to see the misery evoked by social platforms as an opportunity to highlight the potential dangers of artificial intelligence.


Timnit Gebru is part of a wave of Black women working to change AI

#artificialintelligence

A computer scientist who said she was pushed out of her job at Google in December 2020 has marked the one-year anniversary of her ouster with a new research institute aiming to support the creation of ethical artificial intelligence. Timnit Gebru, a known advocate for diversity in AI, announced the launch of the Distributed Artificial Intelligence Research Institute, or DAIR. Its website describes it as "a space for independent, community-rooted AI research free from Big Tech's pervasive influence." Part of how Gebru imagines creating such research is by moving away from the Silicon Valley ethos of "move fast and break things" -- which was Facebook's internal motto, coined by Mark Zuckerberg, until 2014 -- to instead take a more deliberate approach to creating new technologies that serve marginalized communities. That includes recognizing and mitigating technologies' potentials for harm from the beginning of their creation process, rather than after they've already caused damage to those communities, Gebru told NBC News.


AI Systems Don't Recognize People With Darker Skin Tones. That's a Major Problem.

#artificialintelligence

Sight is a miracle: the interplay of reflection, refraction, and messages decoded by nerves within the brain. When you look at an object, you're staring at a reflection of light that enters your cornea in wavelengths. As it enters the cornea, the light is refracted, or bent, toward the thin, filmy crystalline lens, which refracts it further. The lens is a fine-tuner: it focuses the light more directly at the retina, forming a smaller, more concentrated beam. At the retina, the light stimulates photoreceptor cells called rods and cones.


Bias in Artificial Intelligence

#artificialintelligence

One of the more startling and instructive documentaries of the recent past is 2020's Coded Bias, which explores a thorny dilemma: in modern society, artificial-intelligence systems increasingly govern and surveil people's lives--algorithms now routinely make decisions about health care, housing, insurance, education, employment, banking, and policing--yet racial and gender biases are deeply embedded in many of these AI systems (for more background, read "Artificial Intelligence and Ethics," January-February 2019, page 44). The film, which premiered at Sundance and is now streaming on Netflix, begins with MIT Media Lab researcher and doctoral candidate Joy Buolamwini recounting an experience from her first semester there in 2015: working on an art project that used AI facial-recognition software, she was confused at first when the computer didn't seem to register her face. In a striking moment early in the documentary, Buolamwini, who is African American, demonstrates the problem: holding a white mask over her own face, she turns toward her computer, which trills and lights up in response; when she lowers the mask, the computer sits eerily silent. The documentary presents a damning portrait of AI's flaws and of the efforts under way to address them, weaving together research and interviews with those who study the field, including several with Harvard connections: Berkman Klein faculty associate Zeynep Tufekci; former Nieman visiting fellow Amy Webb; and data scientist Cathy O'Neil, Ph.D. '99, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016). Buolamwini herself is a former Adams House tutor (and performed her spoken-word poem "AI, Ain't I A Woman?" at a Harvard conference in 2019).


Here Come the Robot Nurses

#artificialintelligence

The pandemic increased both the demand for and the feasibility of automating care, but doing so may deliver racist stereotypes and unemployment for women of color. At the height of the COVID-19 pandemic, Awakening Health Ltd. (AHL), a joint venture between two robotics companies, SingularityNET (SNET) and Hanson Robotics, introduced Grace, the first medical robot to have a lifelike human appearance. Grace provides acute medical and elder care by engaging patients in therapeutic interactions and cognitive stimulation, and by gathering and managing patient data. By the end of 2021, Hanson Robotics hoped to mass-produce Grace, one of its newest units based on its robot Sophia, for the global market. What does it mean to take care of another human being?