Just Nine Out Of 116 AI Professionals In Key Films Are Women, Study Finds - cyberpogo
Report says pattern seen in films such as Ex Machina risks contributing to lack of women in tech. A relentless stream of movies, from Iron Man to Ex Machina, has helped entrench systemic gender inequality in the artificial intelligence industry by portraying AI researchers almost exclusively as men, a study has found. The overwhelming predominance of men as leading AI researchers in movies has shaped public perceptions of the industry, the authors say, and risks contributing to a dramatic lack of women in the tech workforce. Beyond the impact on gender balance, the study raises concerns about the knock-on effects of products that favour male users because they are developed by what the former Microsoft employee Margaret Mitchell called "a sea of dudes". "Given that male engineers have repeatedly been shown to engineer products that are most suitable for and adapted to male users, employing more women is essential for addressing the encoding of bias and pejorative stereotypes into AI technologies," the report's authors write. Researchers at the University of Cambridge reviewed more than 1,400 films released between 1920 and 2020 and whittled them down to the 142 most influential movies featuring artificial intelligence.
- Media > Film (0.31)
- Leisure & Entertainment (0.31)
What Will It Take to Decolonize Artificial Intelligence? - NEO.LIFE
There's a joke in Silicon Valley about how AI was developed: Privileged coders were building machine learning algorithms to replace their own doting parents with apps that deliver their meals, drive them to work, automate their shopping, manage their schedules, and tuck them in at bedtime. As whimsical as that may sound, AI-driven services often target a demographic that mirrors their creators: white, male workers with little time and more disposable income than they know what to do with. "People living in very different circumstances have very different needs and wants that may or may not be helped by this technology," says Kanta Dihal at the University of Cambridge's Leverhulme Centre for the Future of Intelligence in England. She is an expert in an emerging effort to decolonize AI by promoting an intersectional definition of intelligent machines that is created for and relevant to a diverse population. Such a shift requires not only diversifying Silicon Valley but also broadening the understanding of AI's potential: who it stands to help, and how people want to be helped.
- North America > United States > California (0.46)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.26)
- North America > United States > Michigan (0.05)
- (2 more...)
- Health & Medicine (0.97)
- Information Technology (0.91)
Is Artificial Intelligence White?
The "whiteness" of artificial intelligence (AI) removes people of colour from the way humanity thinks about its technology-enhanced future, researchers argue. University of Cambridge experts suggest current portrayals and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, creating machines with bias baked into their algorithms. The scientists say cultural depictions of AI as white need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased. In their paper "The Whiteness of AI", published in the journal Philosophy and Technology, Leverhulme CFI executive director Stephen Cave and Dr Kanta Dihal offer insights into the ways in which portrayals of AI stem from, and perpetuate, racial inequalities. Cave and Dihal cite research showing that people perceive race in AI, not only in human-like robots but also in abstracted and disembodied AI.
Whiteness of AI erases people of color from our 'imagined futures', researchers argue
The overwhelming 'Whiteness' of artificial intelligence--from stock images and cinematic robots to the dialects of virtual assistants--removes people of colour from the way humanity thinks about its technology-enhanced future. This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, building machines with bias baked into their algorithms. They argue that cultural depictions of AI as White need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased. The researchers, from Cambridge's Leverhulme Centre for the Future of Intelligence (CFI), say that AI, like other science fiction tropes, has always reflected the racial thinking in our society. They argue that there is a long tradition of crude racial stereotypes when it comes to extraterrestrials--from the "orientalised" alien of Ming the Merciless to the Caribbean caricature of Jar Jar Binks.
'White' artificial intelligence risks exacerbating racial inequality, study suggests
The "whiteness" of artificial intelligence (AI) risks a "racially homogenous" workforce as humans create machines skewed by their biases, a study suggests. The University of Cambridge study examined AI in society, including in films, Google searches, stock images and robot voices. Researchers suggested machines have distinct racial identities and that this perpetuates "real world" racial stereotypes. Non-abstract AI in internet search engine results usually either had Caucasian features or was the colour white, according to the researchers. Most virtual voices in devices talked in "standard white middle-class English" as "ideas of adding black dialects have been dismissed as too controversial or outside the target market," the study concluded.
- Information Technology (0.38)
- Media (0.34)
- Leisure & Entertainment (0.34)
'Racist' artificial intelligence is 'painting world white'
Dr Kanta Dihal, who leads the centre's decolonising artificial intelligence initiative, said: "Given that society has, for centuries, promoted the association of intelligence with white Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine, it imagines a white machine. People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialised as white, that could have dangerous consequences for humans that are not." The experts looked at recent research from a range of fields, including human-computer interaction and critical race theory, to demonstrate that machines can be racialised, and that this perpetuates "real world" racial biases. This includes work on how robots are seen to have distinct racial identities, with black robots receiving more online abuse, and a study showing people feel closer to virtual agents when they perceive shared racial identity. Dr Dihal said: "One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard white middle-class English.