Feminism


Marissa Mayer: I Am Not a Feminist. I Am Not Neurodivergent. I Am a Software Girl

WIRED

Marissa Mayer didn't say AI is "Death, destroyer of worlds" or even that AI needs ethical guardrails. Instead, she said it's the sun: life-giving, bright, shiny, endlessly giving. Thus, the former Google engineer and CEO of Yahoo, who has worked on artificial intelligence for 25 years, christened her startup Sunshine. It's devoted to using AI to empower family and social life through photo sharing, contact management, and event planning. As I spoke with Mayer in Sunshine's candy-colored digs in Palo Alto, I was so stunned by her boosterism that I ended up mirroring it.


"Rejection," by Tony Tulathimutte, Reviewed: A Story Collection About People Who Just Can't Hang

The New Yorker

Not until I picked up Tony Tulathimutte's "Rejection" did I realize how fun it could be to read a book about a bunch of huge fucking losers. It sucks for them, the inept, lonely, self-obsessed, self-righteous, self-imprisoned protagonists of these linked stories, but it's a thrill for the sickos among us, the king being Tulathimutte, who gives loserdom its own rancid carnival. Tulathimutte understands the project--both his own and that of his characters--with diagnostic, comprehensive hyper-precision; as you behold his parade of marketplace failure and personal pathology, he's ten steps ahead of any reaction you could muster. Thus, you simply surrender to the sick pleasure of watching humiliating people humiliate themselves, as when a clammy self-styled feminist ally gets shut down by a girl and goes, "Grrr, friend-zoned again!" while shaking his fists at the ceiling, then creates a dating profile that includes the line "Unshakably serious about consent." These are two of the mildest ...


The Machine Ethics podcast: Good tech with Eleanor Drage and Kerry McInerney

AIHub

Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. In this episode we chat with Eleanor and Kerry about good technology and whether it's even possible; how technology is political; the watering down of regulation; the magic of AI; the value of human creativity; how feminist, Aboriginal, and mixed-race studies can help AI development; the performative nature of tech; and more…

Dr Kerry McInerney (née Mackereth) is a Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she co-leads the Global Politics of AI project on how AI is impacting international relations. She is also a Research Fellow at the AI Now Institute (a leading AI policy thinktank in New York), an AHRC/BBC New Generation Thinker (2023), one of the 100 Brilliant Women in AI Ethics (2022), and one of Computing's Rising Stars 30 (2023). Kerry is the co-editor of the collections Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines (2023, Oxford University Press) and The Good Robot: Why Technology Needs Feminism (2024, Bloomsbury Academic), and the co-author of the forthcoming book Reprogram: Why Big Tech is Broken and How Feminism Can Fix It (2026, Princeton University Press).

Dr Eleanor Drage is a Senior Research Fellow at the University of Cambridge Centre for the Future of Intelligence, and teaches AI professionals about AI ethics on a Masters course at Cambridge.


Single man enrages girl after he asks her to pay for date: 'Feminist until it's time to split the bill'

FOX News

A single man searching for love in Miami is confused. On the one hand, he received a barrage of criticism online for asking to split the bill on a first date with a girl he met on Tinder, an online dating app. On the other hand, he reasons that, with modern-day feminism firmly in place in 2024, women want equality and all that comes with it. The single man, who goes by "Water Boy" (@TheWaterBoy) on TikTok, posted a video recounting the date; users posting retellings of their bad dates are commonplace on the platform.


How do we make sense of changing human social norms? Ask a bot, of course

Torsten Bell

The Guardian

Last week, half the internet was experimenting with ChatGPT, a new artificial intelligence chatbot that can write text on almost any subject under the sun with only the most basic of instructions. You should have a go. Reactions so far focus on predicting the end of education (it can churn out an essay in seconds) or arguing that it's fun but irrelevant to human progress. Sceptics should note that machine learning and big data analysis are supporting progress in social science. Take the debate about cultural norms, where some emphasise the persistence of views passed between generations, while others argue ideas converge between places over time.


Black Feminist Musings on Algorithmic Oppression

Hampton, Lelia Marie

arXiv.org Artificial Intelligence

This paper unapologetically reflects on the critical role that Black feminism can and should play in abolishing algorithmic oppression. Positioning algorithmic oppression in the broader field of feminist science and technology studies, I draw upon feminist philosophical critiques of science and technology and discuss histories and continuities of scientific oppression against historically marginalized people. Moreover, I examine the concepts of invisibility and hypervisibility in oppressive technologies à la the canonical double bind. Furthermore, I discuss what it means to call for diversity as a solution to algorithmic violence, and I critique dialectics of the fairness, accountability, and transparency community. I end by inviting you to envision and imagine the struggle to abolish algorithmic oppression by abolishing oppressive systems and shifting algorithmic development practices, including engaging our communities in scientific processes, centering marginalized communities in design, and consensual data and algorithmic practices.


The elephant in the server room

#artificialintelligence

Suppose you would like to know mortality rates for women during childbirth, by country, around the world. One option is the WomanStats Project, the website of an academic research effort investigating the links between the security and activities of nation-states and the security of the women who live in them. The project, founded in 2001, meets a need by patching together data from around the world. Many countries are indifferent to collecting statistics about women's lives. But even where countries try harder to gather data, there are clear challenges to arriving at useful numbers, whether the issue is women's physical security, property rights, or government participation, among many others.


Boffins build AI that can detect cyber-abuse – and if you don't believe us, YOU CAN *%**#* *&**%* #** OFF

#artificialintelligence

Can machine learning help clean it up? A team of computer scientists spanning the globe think so. They've built a neural network that can seemingly classify tweets into four different categories: normal, aggressor, spam, and bully – aggressor being a deliberately harmful, derogatory, or offensive tweet; and bully being a belittling or hostile message. The aim is to create a system that can filter out aggressive and bullying tweets, delete spam, and allow normal tweets through. The boffins admit it's difficult to draw a line between so-called cyber-aggression and cyber-bullying.
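The article doesn't include the researchers' code or architecture details, but for readers curious about the task framing, here is a minimal sketch of a four-way text classifier (normal / aggressor / spam / bully) using scikit-learn. It is not the team's actual model: the TF-IDF features, the small multi-layer perceptron, and the handful of training examples are all illustrative assumptions standing in for a large annotated tweet corpus.

```python
# Toy sketch of a four-way tweet classifier, NOT the researchers' system:
# TF-IDF features feeding a small neural network, with invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical labelled tweets; a real system trains on thousands of annotations.
tweets = [
    "Great talk at the conference today, slides are up!",          # normal
    "You people are pathetic and everything you make is garbage",  # aggressor
    "WIN A FREE IPHONE!!! click here http://spam.example",         # spam
    "nobody likes you, just quit already",                         # bully
]
labels = ["normal", "aggressor", "spam", "bully"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(tweets, labels)

# A filtering system could then delete spam, block aggressor/bully
# messages, and let normal tweets through.
print(model.predict(["claim your free prize now http://spam.example"]))
```

As the researchers note, the hard part is not the pipeline but the labels: the line between cyber-aggression and cyber-bullying is blurry, so annotation quality dominates any choice of model.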


Apple programs Siri to not bother its pretty little head with questions about feminism

#artificialintelligence

Apple has programmed its Siri voice assistant to avoid politically charged subjects, and to deflect or duck questions that require its AI to take a stand on issues, it emerged this week. From a tranche of documents leaked by a former contract worker who evaluated Siri responses to user questions for accuracy, The Guardian obtained a set of guidelines drawn up last year to ensure Siri's responses to "sensitive" topics come across as neutral. In keeping with these guidelines, Siri's responses were revised to endorse "equality" while avoiding the word "feminism," even if asked directly; answers to the question "Are you a feminist?" were among those rewritten. The leaked guidelines reportedly state, "Siri should be guarded when dealing with potentially controversial content."


Apple made Siri deflect questions on feminism, leaked papers reveal

#artificialintelligence

An internal project to rewrite how Apple's Siri voice assistant handles "sensitive topics" such as feminism and the #MeToo movement advised developers to respond in one of three ways: "don't engage", "deflect" and finally "inform". The project saw Siri's responses explicitly rewritten to ensure that the service would say it was in favour of "equality", but never say the word feminism – even when asked direct questions about the topic. Last updated in June 2018, the guidelines are part of a large tranche of internal documents leaked to the Guardian by a former Siri "grader", one of thousands of contracted workers who were employed to check the voice assistant's responses for accuracy until Apple ended the programme last month in response to privacy concerns raised by the Guardian. In explaining why the service should deflect questions about feminism, Apple's guidelines explain that "Siri should be guarded when dealing with potentially controversial content". When questions are directed at Siri, "they can be deflected … however, care must be taken here to be neutral".