

The Race-Science Blogger Cited by The New York Times

The Atlantic - Technology

Lasker, the Times explained, was the "intermediary" who tipped off the publication about Mamdani's application, which was included in a larger hack of Columbia's computer systems. After the Times published its story, Lasker celebrated on X. "I break-uh dah news," he wrote to his more than 260,000 followers. On both X and Substack, where he also has a large following, Lasker is best known for compiling charts on the "Black-White IQ gap" and otherwise linking race to real-world outcomes. He seems convinced that any differences are the result of biology, and he has shot down other possible explanations. He has suggested that crime is genetic.


Emotional AI Is No Substitute for Empathy

WIRED

In 2023, emotional AI--technology that can sense and interact with human emotions--will become one of the dominant applications of machine learning. For instance, Hume AI, founded by Alan Cowen, a former Google researcher, is developing tools to measure emotions from verbal, facial, and vocal expressions. Swedish company Smart Eyes recently acquired Affectiva, the MIT Media Lab spinoff that developed the SoundNet neural network, an algorithm that classifies emotions such as anger from audio samples in less than 1.2 seconds. Even the video platform Zoom is introducing Zoom IQ, a feature that will soon provide users with real-time analysis of emotions and engagement during a virtual meeting. In 2023, tech companies will be releasing advanced chatbots that can closely mimic human emotions to create more empathetic connections with users across banking, education, and health care.


Is Artificial Intelligence White?

#artificialintelligence

The "whiteness" of artificial intelligence (AI) removes people of colour from the way humanity thinks about its technology-enhanced future, researchers argue. University of Cambridge experts suggest current portrayals and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, who build machines with bias baked into their algorithms. The scientists say cultural depictions of AI as white need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased. In their paper "The Whiteness of AI," published in the journal Philosophy and Technology, Leverhulme CFI Executive Director Stephen Cave and Dr Kanta Dihal offer insights into the ways in which portrayals of AI stem from, and perpetuate, racial inequalities. Cave and Dihal cite research showing that people perceive race in AI, not only in human-like robots but also in abstracted and disembodied AI.


Biased AI 'could worsen racial inequality'

#artificialintelligence

According to the researchers from Cambridge's Leverhulme Centre for the Future of Intelligence (CFI), like other science fiction tropes, AI has always reflected racial thinking in society. In the study, published in the journal Philosophy and Technology, they argue there is a long tradition of racial stereotypes when it comes to extraterrestrials – from the "orientalised" alien of Ming the Merciless to the Caribbean caricature of Jar Jar Binks.


'White' artificial intelligence risks exacerbating racial inequality, study suggests

#artificialintelligence

The "whiteness" of artificial intelligence (AI) risks creating a "racially homogenous" workforce as humans build machines skewed by their biases, a study suggests. The University of Cambridge study examined AI in society, including in films, Google searches, stock images and robot voices. Researchers suggested machines have distinct racial identities and that this perpetuates "real world" racial stereotypes. Non-abstract depictions of AI in internet search-engine results usually either had Caucasian features or were coloured white, according to the researchers. Most virtual voices in devices spoke in "standard white middle-class English," as "ideas of adding black dialects have been dismissed as too controversial or outside the target market," the study concluded.


Trump vs the NFL: AI Insight into Player Protests - UNANIMOUS A.I.

#artificialintelligence

In a week when North Korea insisted that America had declared war and Puerto Rico suffered one of the worst natural disasters in its history, headlines were nonetheless dominated by a war of words between Donald Trump and the National Football League. Speaking in Alabama, the President declared that he would like to see NFL owners whose players knelt during the national anthem "get that son of a b*tch off the field right now." Trump's comments insisting that players be compelled to stand during the national anthem put a spotlight on a handful of NFL players who continued the protest initiated last year by former 49ers quarterback Colin Kaepernick. In response to Trump's comments, every NFL team – and nearly every owner – offered some version of protest in Week 3. The controversy around the NFL protests and Trump's comments raised many questions about the nature of peaceful protest, what the national anthem represents and what rights are protected by the First Amendment. These are thorny, complicated questions, and researchers at Unanimous AI sought to untangle them by forming a swarm of thirty American voters inside our Swarm AI platform.