whiteness



Elon, me and 20 million views: A conversation with Grok

Al Jazeera

"Didn't know you were famous," the rapper Juliani, an old friend and musical collaborator, texted me from his studio in Nairobi. I didn't have a clue what he was referring to, but then he forwarded me the link to a tweet by Elon Musk that included a screenshot of a 2019 Al Jazeera column of mine, " Abolishing whiteness has never been more urgent ." The original post was circulating on Twitter/X, courtesy of a white nationalist poster who obviously wasn't too happy with the headline. Neither was Elon, who retweeted it with the comment, "It's not okay to say this about any group!" Although the post was only a few hours old, it already had five million views.



Controlling Steering with Energy-Based Models

Balesni, Mikita, Tampuu, Ardi, Matiisen, Tambet

arXiv.org Artificial Intelligence

So-called implicit behavioral cloning with energy-based models has shown promising results in robotic manipulation tasks. We tested whether the method's advantages carry over to controlling the steering of a real self-driving car with an end-to-end driving model. We performed an extensive comparison of the implicit behavioral cloning approach with explicit baseline approaches, all sharing the same neural network backbone architecture. Baseline explicit models were trained with a regression (MAE) loss, a classification loss (softmax and cross-entropy on a discretization), or as mixture density networks (MDN). While models using the energy-based formulation performed comparably to the baseline approaches in terms of safety-driver interventions, they had a higher whiteness measure, indicating higher jerk. To alleviate this, we show two methods that can be used to improve the smoothness of steering. We confirmed that energy-based models handle multimodalities slightly better than simple regression, but this did not translate into significantly better driving ability. We argue that the steering-only road-following task has too few multimodalities to benefit from energy-based models. This shows that applying implicit behavioral cloning to real-world tasks can be challenging, and further investigation is needed to bring out the theoretical advantages of energy-based models.
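To make the two key ingredients concrete, here is a minimal Python sketch: a whiteness-style jerkiness score for a steering trace, and argmin action selection for an implicit (energy-based) policy. The exact whiteness definition and the quadratic energy_fn are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def whiteness(steering, dt=0.1):
        # RMS of the steering signal's first derivative: a proxy for
        # jerkiness. Smooth commands score low, noisy commands high.
        # The paper's exact scaling may differ; this is the general idea.
        rates = np.diff(steering) / dt
        return float(np.sqrt(np.mean(rates ** 2)))

    def implicit_bc_action(energy_fn, observation, candidates):
        # Implicit behavioral cloning at inference time: score every
        # candidate steering angle with the trained energy model and
        # return the lowest-energy one. energy_fn is a hypothetical
        # stand-in for the learned network.
        energies = np.array([energy_fn(observation, a) for a in candidates])
        return float(candidates[int(np.argmin(energies))])

    # Toy demo: a quadratic "energy" whose minimum is the target angle.
    energy_fn = lambda obs, a: (a - 3.0) ** 2
    grid = np.linspace(-30.0, 30.0, 601)  # discretized steering range, deg
    print(implicit_bc_action(energy_fn, None, grid))  # -> 3.0

    smooth = 5.0 * np.sin(0.2 * np.linspace(0, 10, 101))
    noisy = smooth + np.random.normal(0.0, 0.5, smooth.shape)
    print(whiteness(smooth), whiteness(noisy))  # noisy trace scores higher

The discretized argmin also hints at why the energy formulation can produce jerkier output than regression: the selected action jumps between grid cells from frame to frame, which is what the paper's smoothing methods address.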


A Sign That Spells: DALL-E 2, Invisual Images and The Racial Politics of Feature Space

Offert, Fabian, Phan, Thao

arXiv.org Artificial Intelligence

In this paper, we examine how generative machine learning systems produce a new politics of visual culture. We focus on DALL-E 2 and related models as an emergent approach to image-making that operates through the cultural techniques of feature extraction and semantic compression. These techniques, we argue, are inhuman, invisual, and opaque, yet are still caught in a paradox that is ironically all too human: the consistent reproduction of whiteness as a latent feature of dominant visual culture. We use OpenAI's failed efforts to 'debias' their system as a critical opening to interrogate how systems like DALL-E 2 dissolve and reconstitute politically salient human concepts like race. This example vividly illustrates the stakes of this moment of transformation, when so-called foundation models reconfigure the boundaries of visual culture and when 'doing' anti-racism means deploying quick technical fixes to mitigate personal discomfort, or more importantly, potential commercial loss.


AZ-whiteness test: a test for uncorrelated noise on spatio-temporal graphs

Zambon, Daniele, Alippi, Cesare

arXiv.org Machine Learning

We present the first whiteness test for graphs, i.e., a whiteness test for multivariate time series associated with the nodes of a dynamic graph. The statistical test aims at finding serial dependencies among close-in-time observations, as well as spatial dependencies among neighboring observations given the underlying graph. The proposed test is a spatio-temporal extension of traditional tests from the system identification literature and finds applications in similar, yet more general, scenarios involving graph signals. The AZ-test is versatile, allowing the underlying graph to be dynamic, changing in topology and node set, and weighted, thus accounting for connections of different strengths, as is the case in many application scenarios like transportation networks and sensor grids. The asymptotic distribution -- as the number of graph edges or temporal observations increases -- is known and does not assume identically distributed data. We validate the practical value of the test on both synthetic and real-world problems, and show how the test can be employed to assess the quality of spatio-temporal forecasting models by analyzing the prediction residuals appended to the graph stream.
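As a rough illustration of what such a test computes, the sketch below correlates the signs of forecasting residuals across graph edges and across consecutive time steps, then normalizes the sum to a z-score that is approximately standard normal under white noise. The edge weighting, dynamic-graph handling, and exact normalization of the actual AZ-test differ; treat this as a simplified static-graph approximation.

    import numpy as np
    from scipy.stats import norm

    def sign_whiteness_test(residuals, edges):
        # residuals: array of shape (T, N, d) -- prediction residuals per
        # time step, node, and feature. edges: list of (u, v) node pairs
        # of a static graph (the real AZ-test also handles dynamic,
        # weighted graphs).
        s = np.sign(residuals)
        T, N, d = s.shape
        # Spatial term: sign agreement of residuals across graph edges.
        spatial = sum((s[:, u] * s[:, v]).sum() for u, v in edges)
        # Temporal term: sign agreement across consecutive time steps.
        temporal = (s[:-1] * s[1:]).sum()
        # Under independent, symmetric residuals each +/-1 product has
        # mean 0 and variance 1, so dividing by sqrt(#terms) yields an
        # approximately standard normal statistic.
        n_terms = d * (T * len(edges) + (T - 1) * N)
        z = (spatial + temporal) / np.sqrt(n_terms)
        return z, 2 * norm.sf(abs(z))  # z-score, two-sided p-value

    # White residuals should give a small |z| and a large p-value;
    # serially or spatially correlated residuals push |z| up.
    rng = np.random.default_rng(0)
    res = rng.standard_normal((200, 10, 1))
    path_edges = [(i, i + 1) for i in range(9)]  # a 10-node path graph
    print(sign_whiteness_test(res, path_edges))

This mirrors the use case in the abstract: run a forecaster, attach its residuals to the graph stream, and reject whiteness when the statistic is far from zero, which signals structure the model failed to capture.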


Medical photography is failing patients with darker skin

#artificialintelligence

But Jenna Lester, a dermatologist at the University of California San Francisco, was growing frustrated with the poor-quality images she'd receive of her dark-skinned patients. It wasn't just a cosmetic issue -- the bad photos meant darker-skinned people weren't getting the same quality of care. So in January, Lester co-authored a paper in the British Journal of Dermatology that gives a step-by-step guide to photographing skin of color accurately in clinical settings. Lester, who herself is Black, said, "I feel like these issues and my life is constantly me saying, 'Hey, what about us?' 'What about these patients?'" Medical photographs are vital to documenting disease in textbooks and journals and to training medical students.


Is Artificial Intelligence White?

#artificialintelligence

The "whiteness" of artificial intelligence (AI) removes people of colour from the way humanity thinks about its technology-enhanced future, researchers argue. University of Cambridge experts suggest current portrayals and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, creating machines with bias baked into their algorithms. The scientists say cultural depictions of AI as white need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased. In their paper, "The Whiteness of AI" published in the journal, Philosophy and Technology, Leverhulme CFI Executive Director, Stephen Cave and Dr Kanta Dihal offer insights into the ways in which portrayals of AI stem from, and perpetuate, racial inequalities. Cave and Dihal cite research showing that people perceive race in AI, not only in human-like robots, but also in abstracted and disembodied AI.


The Whiteness of AI

#artificialintelligence

It is a truth little acknowledged that a machine in possession of intelligence must be white. Typing terms like "robot" or "artificial intelligence" into a search engine will yield a preponderance of stock images of white plastic humanoids. Perhaps more notable still, these machines are not only white in colour, but the more human they are made to look, the more their features are made ethnically White. In this paper, we problematize the often unnoticed and unremarked-upon fact that intelligent machines are predominantly conceived and portrayed as White. We argue that this Whiteness both illuminates particularities of what (Anglophone Western) society hopes for and fears from these machines, and situates these affects within long-standing ideological structures that relate race and technology. Race and technology are two of the most powerful and important categories for understanding the world as it has developed since at least the early modern period.


Whiteness of AI erases people of color from our 'imagined futures', researchers argue

#artificialintelligence

The overwhelming 'Whiteness' of artificial intelligence--from stock images and cinematic robots to the dialects of virtual assistants--removes people of colour from the way humanity thinks about its technology-enhanced future. This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, building machines with bias baked into their algorithms. They argue that cultural depictions of AI as White need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased. The researchers, from Cambridge's Leverhulme Centre for the Future of Intelligence (CFI), say that AI, like other science fiction tropes, has always reflected the racial thinking in our society. They argue that there is a long tradition of crude racial stereotypes when it comes to extraterrestrials--from the "orientalised" alien of Ming the Merciless to the Caribbean caricature of Jar Jar Binks.