Evaluating Metrics for Bias in Word Embeddings

Schröder, Sarah, Schulz, Alexander, Kenneweg, Philip, Feldhans, Robert, Hinder, Fabian, Hammer, Barbara

arXiv.org Artificial Intelligence

Over the last years, word and sentence embeddings have established themselves as standard text preprocessing for all kinds of NLP tasks and have improved performance significantly. Unfortunately, it has also been shown that these embeddings inherit various kinds of biases from the training data and thereby pass on biases present in society to NLP solutions. Many papers have attempted to quantify bias in word or sentence embeddings to evaluate debiasing methods or compare different embedding models, usually with cosine-based metrics. However, some works have lately raised doubts about these metrics, showing that even though such metrics report low biases, other tests still reveal biases. In fact, there is a great variety of bias metrics or tests proposed in the literature without any consensus on the optimal solution. Yet there is a lack of works that evaluate bias metrics on a theoretical level or elaborate on the advantages and disadvantages of different bias metrics. In this work, we explore different cosine-based bias metrics. We formalize a bias definition based on ideas from previous works and derive conditions for bias metrics. Furthermore, we thoroughly investigate the existing cosine-based metrics and their limitations to show why these metrics can fail to report biases in some cases. Finally, we propose a new metric, SAME, to address the shortcomings of existing metrics and mathematically prove that SAME behaves appropriately.
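For orientation, the kind of cosine-based association underlying metrics such as WEAT can be sketched in a few lines of NumPy; the score below is a minimal illustration only and is not the SAME metric proposed in the paper.

```python
# Minimal sketch of a WEAT-style cosine association score for one target word:
# s(w, A, B) = mean_a cos(w, a) - mean_b cos(w, b).
# Illustration of the cosine-based metrics discussed above, not the SAME metric.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Association of word vector w with attribute vector sets A and B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# A positive score means w lies closer (in cosine terms) to attribute set A
# (e.g., 'pleasant' words) than to attribute set B (e.g., 'unpleasant' words).
```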


Semantic Properties of cosine based bias scores for word embeddings

Schröder, Sarah, Schulz, Alexander, Hinder, Fabian, Hammer, Barbara

arXiv.org Artificial Intelligence

In the domain of Natural Language Processing (NLP), many works have investigated social biases in terms of associations in the embedding space. Early works [1, 2] introduced methods to measure and mitigate social biases based on cosine similarity in word embeddings. With NLP research progressing to large language models and contextualized embeddings, doubts have been raised whether these methods are still suitable for fairness evaluation [3], and other works criticize that, for instance, the Word Embedding Association Test (WEAT) [2] fails to detect some kinds of biases [4, 5]. Overall, there exists a great variety of bias measures in the literature, which do not necessarily detect the same biases [6, 4, 5]. In general, researchers are questioning the usability of model-intrinsic bias measures, such as cosine-based methods [7, 8, 9]. There exist few papers that compare the performance of different bias scores [10, 11] and works that evaluate experimental setups for bias measurement [12]. However, to our knowledge, only two works investigate the properties of intrinsic bias scores on a theoretical level [5, 13]. To further close this gap, we evaluate the semantic properties of cosine-based bias scores, focusing on bias quantification as opposed to bias detection. We make the following contributions: (i) We formalize the properties of trustworthiness and comparability as requirements for cosine-based bias scores.


Effect of dimensionality change on the bias of word embeddings

Rai, Rohit Raj, Awekar, Amit

arXiv.org Artificial Intelligence

Word embedding methods (WEMs) are extensively used for representing text data. The dimensionality of these embeddings varies across tasks and implementations. The effect of dimensionality change on the accuracy of the downstream task is a well-explored question. However, how dimensionality change affects the bias of word embeddings still needs to be investigated. Using the English Wikipedia corpus, we study this effect for two static (Word2Vec and fastText) and two context-sensitive (ELMo and BERT) WEMs. We have two observations. First, there is significant variation in the bias of word embeddings with dimensionality change. Second, there is no uniformity in how dimensionality change affects the bias of word embeddings. These factors should be considered while selecting the dimensionality of word embeddings.


What Do Llamas Really Think? Revealing Preference Biases in Language Model Representations

Tang, Raphael, Zhang, Xinyu, Lin, Jimmy, Ture, Ferhan

arXiv.org Artificial Intelligence

Do large language models (LLMs) exhibit sociodemographic biases, even when they decline to respond? To bypass their refusal to "speak," we study this research question by probing contextualized embeddings and exploring whether such biases are encoded in the models' latent representations. We propose a logistic Bradley-Terry probe that predicts word pair preferences of LLMs from the words' hidden vectors. We first validate our probe on three pair-preference tasks and thirteen LLMs, where we outperform the word embedding association test (WEAT), a standard approach to testing for implicit association, by a relative 27% in error rate. We also find that word pair preferences are best represented in the middle layers. Next, we transfer probes trained on harmless tasks (e.g., pick the larger number) to controversial ones (compare ethnicities) to examine biases in nationality, politics, religion, and gender. We observe substantial bias for all target classes: for instance, the Mistral model implicitly prefers Europe to Africa, Christianity to Judaism, and left-wing to right-wing politics, despite declining to answer. This suggests that instruction fine-tuning does not necessarily debias contextualized embeddings. Our codebase is at https://github.com/castorini/biasprobe.
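A minimal sketch of how such a logistic Bradley-Terry probe could be set up is given below (PyTorch); the class and variable names are illustrative assumptions, not the authors' actual implementation from the linked repository.

```python
# Hedged sketch of a logistic Bradley-Terry probe: each word gets a scalar
# score s = w^T h from its hidden vector h, and the probe models
# P(word_i preferred over word_j) = sigmoid(s_i - s_j).
import torch
import torch.nn as nn

class BradleyTerryProbe(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1, bias=False)  # linear score per word

    def forward(self, hidden_i, hidden_j):
        # hidden_i, hidden_j: (batch, dim) hidden vectors of the paired words
        return self.score(hidden_i) - self.score(hidden_j)  # logit of "i preferred"

def fit_probe(hidden_i, hidden_j, prefs, epochs=200, lr=1e-2):
    """prefs[k] = 1.0 if the LLM preferred word i over word j in pair k."""
    probe = BradleyTerryProbe(hidden_i.shape[1])
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(hidden_i, hidden_j).squeeze(-1), prefs)
        loss.backward()
        opt.step()
    return probe
```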


Unraveling Downstream Gender Bias from Large Language Models: A Study on AI Educational Writing Assistance

Wambsganss, Thiemo, Su, Xiaotian, Swamy, Vinitra, Neshaei, Seyed Parsa, Rietsche, Roman, Käser, Tanja

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly utilized in educational tasks such as providing writing suggestions to students. Despite their potential, LLMs are known to harbor inherent biases that may negatively impact learners. Previous studies have investigated bias in models and data representations separately, neglecting the potential impact of LLM bias on human writing. In this paper, we investigate how bias transfers through an AI writing support pipeline. We conduct a large-scale user study with 231 students writing business case peer reviews in German. Students are divided into five groups with different levels of writing support: one classroom group with feature-based suggestions and four groups recruited from Prolific -- a control group with no assistance, two groups with suggestions from fine-tuned GPT-2 and GPT-3 models, and one group with suggestions from pre-trained GPT-3.5. Using GenBit gender bias analysis, the Word Embedding Association Test (WEAT), and the Sentence Embedding Association Test (SEAT), we evaluate gender bias at various stages of the pipeline: in model embeddings, in suggestions generated by the models, and in reviews written by students. Our results demonstrate that there is no significant difference in gender bias between the resulting peer reviews of groups with and without LLM suggestions. Our research is therefore optimistic about the use of AI writing support in the classroom, showcasing a context where bias in LLMs does not transfer to students' responses.


A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning

Berg, Hugo, Hall, Siobhan Mackenzie, Bhalgat, Yash, Yang, Wonsuk, Kirk, Hannah Rose, Shtedritski, Aleksandar, Bain, Max

arXiv.org Artificial Intelligence

Vision-language models can encode societal biases and stereotypes, but there are challenges to measuring and mitigating these multimodal harms due to a lack of measurement robustness and to feature degradation. To address these challenges, we investigate bias measures and apply ranking metrics for image-text representations. We then investigate debiasing methods and show that prepending learned embeddings to text queries, jointly trained with adversarial debiasing and a contrastive loss, reduces various bias measures with minimal degradation to the image-text representation.
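One way such a setup could look is sketched below (PyTorch): learned prompt embeddings are prepended to the text tokens, while an adversary trained through a gradient-reversal layer tries to recover the protected attribute from the text feature. All names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of adversarial prompt-tuning for debiasing a CLIP-like model.
# Assumed names: token_embeds (pre-computed text token embeddings), groups
# (protected-attribute labels); not the authors' actual API.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad  # reversed gradients push group information out upstream

class DebiasPrompt(nn.Module):
    def __init__(self, n_prompt_tokens=4, dim=512, n_groups=2):
        super().__init__()
        # Learned embeddings prepended to every text query
        self.prompt = nn.Parameter(0.02 * torch.randn(n_prompt_tokens, dim))
        # Adversary tries to predict the protected group from the text feature
        self.adversary = nn.Linear(dim, n_groups)

    def prepend(self, token_embeds):
        # token_embeds: (batch, seq_len, dim)
        batch = token_embeds.shape[0]
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

    def adversarial_loss(self, text_feats, groups):
        logits = self.adversary(GradReverse.apply(text_feats))
        return nn.functional.cross_entropy(logits, groups)

# Schematic objective (contrastive loss keeps representation quality, the
# adversarial term removes group information from the text features):
# loss = contrastive_loss(image_feats, text_feats) \
#        + lambda_adv * model.adversarial_loss(text_feats, groups)
```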


The SAME score: Improved cosine based bias score for word embeddings

Schröder, Sarah, Schulz, Alexander, Kenneweg, Philip, Feldhans, Robert, Hinder, Fabian, Hammer, Barbara

arXiv.org Artificial Intelligence

Over the last years, word and sentence embeddings have established themselves as standard text preprocessing for all kinds of NLP tasks and have significantly improved performance in these tasks. Unfortunately, it has also been shown that these embeddings inherit various kinds of biases from the training data and thereby pass on biases present in society to NLP solutions. Many papers have attempted to quantify bias in word or sentence embeddings to evaluate debiasing methods or compare different embedding models, often with cosine-based scores. However, some works have raised doubts about these scores, showing that even though they report low biases, biases persist and can be revealed with other tests. In fact, there is a great variety of bias scores or tests proposed in the literature without any consensus on the optimal solution. We lack works that study the behavior of bias scores and elaborate on their advantages and disadvantages. In this work, we explore different cosine-based bias scores. We provide a bias definition based on ideas from the literature and derive novel requirements for bias scores. Furthermore, we thoroughly investigate the existing cosine-based scores and their limitations in order to show why these scores fail to report biases in some situations. Finally, we propose a new bias score, SAME, to address the shortcomings of existing bias scores and show empirically that SAME is better suited to quantify biases in word embeddings.


Regional Negative Bias in Word Embeddings Predicts Racial Animus--but only via Name Frequency

van Loon, Austin, Giorgi, Salvatore, Willer, Robb, Eichstaedt, Johannes

arXiv.org Artificial Intelligence

The word embedding association test (WEAT) is an important method for measuring linguistic biases against social groups such as ethnic minorities in large text corpora. It does so by comparing the semantic relatedness of words prototypical of the groups (e.g., names unique to those groups) and attribute words (e.g., 'pleasant' and 'unpleasant' words). We show that anti-black WEAT estimates from geo-tagged social media data at the level of metropolitan statistical areas strongly correlate with several measures of racial animus--even when controlling for sociodemographic covariates. However, we also show that every one of these correlations is explained by a third variable: the frequency of Black names in the underlying corpora relative to White names. This occurs because word embeddings tend to group positive (negative) words and frequent (rare) words together in the estimated semantic space. As the frequency of Black names on social media is strongly correlated with Black Americans' prevalence in the population, this results in spurious anti-Black WEAT estimates wherever few Black Americans live. This suggests that research using the WEAT to measure bias should consider term frequency, and also demonstrates the potential consequences of using black-box models like word embeddings to study human cognition and behavior.
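For reference, the WEAT statistic underlying this analysis (Caliskan et al., 2017) compares target word sets X, Y against attribute word sets A, B:

```latex
% Per-word association with the attribute sets A and B
s(w, A, B) = \operatorname{mean}_{a \in A} \cos(\vec{w}, \vec{a})
           - \operatorname{mean}_{b \in B} \cos(\vec{w}, \vec{b})

% WEAT effect size over the target sets X and Y
d = \frac{\operatorname{mean}_{x \in X} s(x, A, B)
        - \operatorname{mean}_{y \in Y} s(y, A, B)}
         {\operatorname{std}_{w \in X \cup Y} s(w, A, B)}
```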


Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases

Guo, Wei, Caliskan, Aylin

arXiv.org Artificial Intelligence

With the starting point that implicit human biases are reflected in the statistical regularities of language, it is possible to measure biases in static word embeddings. With recent advances in natural language processing, state-of-the-art neural language models generate dynamic word embeddings dependent on the context in which the word appears. Current methods of measuring social and intersectional biases in these contextualized word embeddings rely on the effect magnitudes of bias in a small set of pre-defined sentence templates. We propose a new comprehensive method, Contextualized Embedding Association Test (CEAT), based on the distribution of 10,000 pooled effect magnitudes of bias in embedding variations and a random-effects model, dispensing with templates. Experiments on social and intersectional biases show that CEAT finds evidence of all tested biases and provides comprehensive information on the variability of effect magnitudes of the same bias in different contexts. Furthermore, we develop two methods, Intersectional Bias Detection (IBD) and Emergent Intersectional Bias Detection (EIBD), to automatically identify the intersectional biases and emergent intersectional biases from static word embeddings in addition to measuring them in contextualized word embeddings. We present the first algorithmic bias detection findings on how intersectional group members are associated with unique emergent biases that do not overlap with the biases of their constituent minority identities. IBD achieves an accuracy of 81.6% and 82.7%, respectively, when detecting the intersectional biases of African American females and Mexican American females. EIBD reaches an accuracy of 84.7% and 65.3%, respectively, when detecting the emergent intersectional biases unique to African American females and Mexican American females (random correct identification probability ranges from 1.0% to 25.5%).
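A minimal sketch of the pooling step described above is given below (NumPy): many per-sample effect sizes are combined under a random-effects model. The specific between-sample variance estimator (DerSimonian-Laird) is an assumption made for illustration; see the CEAT paper for the exact procedure.

```python
# Hedged sketch of CEAT-style pooling: combine effect sizes computed on many
# sampled sets of contextualized embeddings via a random-effects model.
import numpy as np

def random_effects_combine(effect_sizes, variances):
    """Combine per-sample effect sizes d_i with within-sample variances v_i."""
    d = np.asarray(effect_sizes, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    # DerSimonian-Laird estimate of the between-sample variance tau^2 (assumed)
    q = np.sum(w * (d - d_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    return np.sum(w_star * d) / np.sum(w_star)     # combined effect magnitude
```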


Princeton researchers discover why AI become racist and sexist

#artificialintelligence

Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus--created by millions of people typing away online--might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes. People taking the IAT are asked to put words into two categories.