

Enactive Artificial Intelligence: Subverting Gender Norms in Robot-Human Interaction

Hipolito, Ines, Winkle, Katie, Lie, Merete

arXiv.org Artificial Intelligence

This paper introduces Enactive Artificial Intelligence (eAI) as an intersectional, gender-inclusive stance towards AI. AI design is an enacted human sociocultural practice that reflects human culture and values, and unrepresentative AI design can lead to social marginalisation. Section 1, drawing from radical enactivism, outlines embodied cultural practices. Section 2 explores how intersectional gender intertwines with technoscience as a sociocultural practice. Section 3 focuses on subverting gender norms in the specific case of robot-human interaction in AI. Finally, Section 4 identifies four vectors of ethics — explainability, fairness, transparency, and auditability — for adopting an intersectionality-inclusive stance in developing gender-inclusive AI and subverting existing gender norms in robot design.


Frankenstein's warning: the too-familiar hubris of today's technoscience

The Guardian

Can we imagine a scenario in which the different anxieties aroused by George Romero's horror film Night of the Living Dead and Stanley Kubrick's sci-fi dystopia 2001: A Space Odyssey merge? How might a monster that combined our fear of becoming something less than human with our fear of increasingly "intelligent" machines appear to us and what might it say? There is one work – of both horror and science fiction – that imagines such a monster. Published almost exactly 150 years before Romero and Kubrick released their movies, it is a book in which physical deformity and technological mutiny coalesce, creating a monster that is both a zombie and AI, or something in between the two. A gothic fiction, it is also described by some literary historians as the first science-fiction novel.


Hitting the Books: Is the hunt for technological supremacy harming our collective humanity?

Engadget

Stand aside, humanity, you're holding up progress. We've passed the point of usefulness for Homo sapiens; now dawns the era of Homo faber. The idea that "I think, therefore I am" has become quaint in this new age of builders and creators. But has our continued obsession with technology and progress actually set back our capacity for humanity? In his new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, author and pioneering researcher in the field of natural language processing Erik J. Larson investigates the efforts to build computers that process information the way we do, and why we're much farther from having human-equivalent AIs than most futurists would care to admit.


What can be done about our modern-day Frankensteins?

#artificialintelligence

In 1797, at the dawn of the industrial age, Goethe wrote "The Sorcerer's Apprentice," a poem about a magician in training who, through his arrogance and half-baked powers, unleashes a chain of events that he could not control.


About 20 years later, a young Mary Shelley answered a dare to write a ghost story, which she shared at a small gathering at Lake Geneva. Her story would go on to be published as a novel, "Frankenstein; or, The Modern Prometheus," on Jan. 1, 1818. Both are stories about our power to create things that take on a life of their own. Goethe's poem comes to a climax when the apprentice, unable to control what he has set in motion, calls out for his master in a panic. While the master fortunately returns just in time to cancel the treacherous spell, Shelley's tale doesn't end so nicely: Victor Frankenstein's monster goes on a murderous rampage, and his creator is unable to put a stop to the carnage. That's the question we face on the 200th anniversary of "Frankenstein," as we find ourselves grappling with the unintended consequences of our creations, from Facebook to artificial intelligence and human genetic engineering.