
New York City proposal would allow adults to choose gender 'X' on birth certificates

FOX News

New York City Mayor Bill de Blasio delivers remarks at his 2018 Inaugural Ceremony at City Hall in Manhattan, New York, U.S., January 1, 2018. A New York City proposal would allow people born there the option to choose a third gender on their city birth certificates. Democratic Mayor Bill de Blasio and City Council Speaker Corey Johnson said the new category of "X" would be available under the plan for those who don't identify as either male or female. The proposal is expected to be heard on June 5 at a meeting of the Board of Health. If the board approves, another hearing will be held in July, followed by a vote in September.


MediaPsych Minute #25 - Facial Recognition & Gender

#artificialintelligence

How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services.


Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

Neural Information Processing Systems

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
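
The geometric steps described in the abstract lend themselves to a short illustration. The following is a minimal NumPy sketch with toy 3-dimensional vectors: it estimates a gender direction from definitional pairs (the paper uses PCA over several such pairs; an averaged single difference stands in here) and then neutralizes gender-neutral words by projecting out that direction. The word list and vectors are made up for illustration only.

```python
import numpy as np

# Toy 3-d embedding table; in practice these vectors would come from a
# trained model such as the Google News embeddings analyzed in the paper.
emb = {
    "he":           np.array([ 0.9, 0.1, 0.3]),
    "she":          np.array([-0.9, 0.1, 0.3]),
    "man":          np.array([ 0.8, 0.2, 0.1]),
    "woman":        np.array([-0.8, 0.2, 0.1]),
    "receptionist": np.array([-0.5, 0.6, 0.2]),
    "programmer":   np.array([ 0.4, 0.7, 0.1]),
}

def unit(v):
    return v / np.linalg.norm(v)

# 1. Estimate the gender direction g from definitional pairs
#    (the paper derives it via PCA over many pairs; this is a stand-in).
pairs = [("he", "she"), ("man", "woman")]
g = unit(sum(emb[a] - emb[b] for a, b in pairs))

# 2. Neutralize: remove the component along g from gender-neutral words,
#    leaving definitional words like he/she and man/woman untouched.
def neutralize(v, g):
    return v - np.dot(v, g) * g

for w in ("receptionist", "programmer"):
    before = np.dot(unit(emb[w]), g)
    emb[w] = neutralize(emb[w], g)
    after = np.dot(unit(emb[w]), g)
    print(f"{w}: cosine with gender direction {before:+.2f} -> {after:+.2f}")
```

The full method also includes an equalize step that re-centers definitional pairs around neutralized words; that step is omitted from this sketch.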


Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

arXiv.org Machine Learning

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
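
The direct-bias metric mentioned here can be sketched in a few lines. Assuming the DirectBias definition from the paper, i.e. the average of |cos(w, g)| raised to a strictness parameter c over a set of gender-neutral word vectors w and a unit gender direction g, a toy illustration might look like this (the vectors are placeholders, not real embeddings):

```python
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def direct_bias(neutral_vectors, g, c=1.0):
    """Average of |cos(w, g)|**c over gender-neutral word vectors w,
    where g is the unit gender direction and c controls strictness."""
    return float(np.mean([abs(cos(w, g)) ** c for w in neutral_vectors]))

# Toy usage: the gender direction and the "neutral" vectors are placeholders.
g = np.array([1.0, 0.0, 0.0])
neutral = [np.array([-0.5, 0.6, 0.2]), np.array([0.4, 0.7, 0.1])]

print(direct_bias(neutral, g))                                   # before debiasing
print(direct_bias([w - np.dot(w, g) * g for w in neutral], g))   # after projecting out g
```

After projecting the gender direction out of the neutral vectors, the metric drops to zero, which is the sense in which debiasing "removes" direct bias.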


Debiasing Embeddings for Reduced Gender Bias in Text Classification

arXiv.org Machine Learning

We investigate how gender bias in word embeddings affects downstream classification tasks, using the case study of occupation classification (De-Arteaga et al., 2019). We show that traditional techniques for debiasing embeddings can actually worsen the bias of the downstream classifier by providing a less noisy channel for communicating gender information. With a relatively minor adjustment, however, we show how these same techniques can be used to simultaneously reduce bias and maintain high classification accuracy.
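
To make the downstream setting concrete, here is a hypothetical sketch of an occupation classifier over averaged word embeddings, roughly in the spirit of the De-Arteaga et al. (2019) bios task: a gender direction can optionally be projected out of every word vector before averaging (a "strong" removal applied to all words, not only gender-neutral ones). The embedding table, tokenization, and classifier choice are illustrative assumptions, not the exact adjustment evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def average_embedding(tokens, emb, g=None):
    """Average the word vectors of a text; if a unit gender direction g is
    given, project it out of every word vector before averaging."""
    dim = next(iter(emb.values())).shape[0]
    vecs = []
    for t in tokens:
        if t not in emb:
            continue
        v = emb[t]
        if g is not None:
            v = v - np.dot(v, g) * g   # remove the gender component
        vecs.append(v)
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_occupation_classifier(bios, labels, emb, g=None):
    """bios: list of raw-text biographies; labels: occupation labels."""
    X = np.stack([average_embedding(b.lower().split(), emb, g) for b in bios])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```

The point the abstract makes is that where the gender component is removed, and from which words, determines whether the downstream classifier's bias goes down or up, so comparing the trained classifier with and without the projection is the relevant experiment.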