Meet Q, The Gender-Neutral Voice Assistant

NPR Technology

For most people who talk to our technology -- whether it's Amazon's Alexa, Apple's Siri or the Google Assistant -- the voice that talks back sounds female. Some people do choose to hear a male voice. Now, researchers have unveiled a new gender-neutral option: Q. "One of our big goals with Q was to contribute to a global conversation about gender, and about gender and technology and ethics, and how to be inclusive for people that identify in all sorts of different ways," says Julie Carpenter, an expert in human behavior and emerging technologies who worked on developing Project Q. The voice of Q was developed by a team of researchers, sound designers and linguists in conjunction with the organizers of Copenhagen Pride week, technology leaders in an initiative called Equal AI and others. They first recorded dozens of voices of people -- those who identify as male, female, transgender or nonbinary.


It's Time to Talk About Robot Gender Stereotypes

WIRED

Robots are the most powerful blank slate humans have ever created. A robot is a mirror held up not just to its creator, but to our whole species: What we make of the machine reflects what we are. That also means we have the very real opportunity to screw up robots by infusing them with exaggerated, overly simplified gender stereotypes. "I think of it more as a funhouse mirror," says Julie Carpenter, who studies human-robot interaction. "It's very distorted, especially right now when we're still being introduced to the idea of robots, especially real humanoid robots that exist in the world outside of science fiction."


Capital One launches Eno, a gender neutral AI assistant

Daily Mail - Science & tech

In a world of female chatbots, one program has dared to refer to itself as 'binary'. Named Eno, the gender-neutral virtual assistant was created by Capital One Financial Corp to help the bank's customers 'manage their money by texting in a conversational way'. The robot is powered with artificial intelligence, allowing it to understand natural language, and when asked if it is a male or female, it responds 'binary'.


Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

arXiv.org Machine Learning

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
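The geometric idea in the abstract -- a gender direction in the embedding space, and a projection step that removes gender-neutral words' component along it -- can be illustrated in a few lines of NumPy. This is only a minimal sketch using made-up toy vectors, not the paper's full pipeline (which estimates the direction with PCA over many definitional pairs and adds an "equalize" step); the word list and numeric values here are hypothetical.

```python
# Minimal sketch of the "gender direction" and neutralize idea, on toy vectors.
# Real use would load pretrained embeddings (e.g. word2vec trained on Google News);
# the 4-dimensional vectors below are illustrative stand-ins, not real data.
import numpy as np

embeddings = {
    "he":           np.array([ 0.9,  0.1,  0.3,  0.0]),
    "she":          np.array([-0.9,  0.1,  0.3,  0.0]),
    "receptionist": np.array([-0.4,  0.5,  0.2,  0.1]),  # gender-neutral occupation word
    "queen":        np.array([-0.8,  0.2,  0.6,  0.0]),  # gender-definitional word, left as-is
}

def unit(v):
    return v / np.linalg.norm(v)

# 1. Estimate a gender direction from a definitional pair.
#    (The paper aggregates several such pairs with PCA; one pair keeps the sketch short.)
g = unit(embeddings["he"] - embeddings["she"])

def neutralize(v, direction):
    """Remove the component of v that lies along the gender direction."""
    return v - np.dot(v, direction) * direction

# 2. Neutralize the gender-neutral word: its projection onto the gender
#    direction drops to zero, while "queen" is deliberately not modified.
before = np.dot(unit(embeddings["receptionist"]), g)
after = np.dot(unit(neutralize(embeddings["receptionist"], g)), g)
print(f"projection on gender direction: before={before:+.2f}, after={after:+.2f}")
```

After the projection, a stereotype-laden but gender-neutral word such as "receptionist" carries no component along the gender direction, while gender-definitional words such as "queen" keep their intended association -- which is the behavior the abstract describes.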


Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

Neural Information Processing Systems

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.