Eckman, Stephanie
Correcting Annotator Bias in Training Data: Population-Aligned Instance Replication (PAIR)
Eckman, Stephanie, Ma, Bolei, Kern, Christoph, Chew, Rob, Plank, Barbara, Kreuter, Frauke
Models trained on crowdsourced labels may not reflect broader population views when annotator pools are not representative. Since collecting representative labels is challenging, we propose Population-Aligned Instance Replication (PAIR), a method to address this bias through statistical adjustment. Using a simulation study of hate speech and offensive language detection, we create two types of annotators with different labeling tendencies and generate datasets with varying proportions of the types. Models trained on unbalanced annotator pools show poor calibration compared to those trained on representative data. However, PAIR, which duplicates labels from underrepresented annotator groups to match population proportions, significantly reduces bias without requiring new data collection. These results suggest statistical techniques from survey research can help align model training with target populations even when representative annotator pools are unavailable. We conclude with three practical recommendations for improving training data quality.
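The core replication step in PAIR is simple enough to sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the labeled data sit in a pandas DataFrame with a column recording each annotator's group, and it oversamples (duplicates) rows from underrepresented groups until the group mix matches target population proportions. The function name `pair_replicate` and the example proportions are hypothetical.

```python
import pandas as pd

def pair_replicate(df, group_col, target_props, random_state=0):
    """Duplicate labeled instances from underrepresented annotator groups
    so that the group mix matches target population proportions.

    Sketch of the PAIR idea only; the exact replication scheme (keeping the
    most overrepresented group fixed and scaling the others up) is an
    assumption.
    """
    counts = df[group_col].value_counts()
    # Ratio of target share to observed share for each annotator group.
    ratios = {g: target_props[g] / (counts[g] / len(df)) for g in target_props}
    # Keep the most overrepresented group as-is; replicate the rest.
    ref = min(ratios, key=ratios.get)
    parts = []
    for g, target in target_props.items():
        sub = df[df[group_col] == g]
        goal = int(round(target * counts[ref] / target_props[ref]))
        extra = sub.sample(n=max(goal - len(sub), 0), replace=True,
                           random_state=random_state)
        parts.append(pd.concat([sub, extra]))
    return pd.concat(parts, ignore_index=True)

# Hypothetical usage: type-B annotators are 30% of the pool but 50% of the
# target population.
# balanced = pair_replicate(labels_df, "annotator_type", {"A": 0.5, "B": 0.5})
```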
Mitigating Selection Bias with Node Pruning and Auxiliary Options
Choi, Hyeong Kyu, Xu, Weijie, Xue, Chi, Eckman, Stephanie, Reddy, Chandan K.
To mitigate the selection bias problem that arises when LLMs answer multiple-choice questions, previous solutions utilized debiasing methods to adjust the model's input and/or output. Our work, in contrast, investigates the model's internal representation of the selection bias. Specifically, we introduce a novel debiasing approach, Bias Node Pruning (BNP), which eliminates the linear layer parameters that contribute to the bias. Furthermore, we present Auxiliary Option Injection (AOI), a simple yet effective input modification technique for debiasing, which is compatible even with black-box LLMs. To provide a more systematic evaluation of selection bias, we review existing metrics and introduce Choice Kullback-Leibler Divergence (CKLD), which addresses the insensitivity of the commonly used metrics to imbalance in choice labels. Experiments show that our methods are robust and adaptable across various datasets when applied to three LLMs.

The advent of large language models (LLMs) has revolutionized artificial intelligence applications, particularly in the domain of natural language processing. These models have demonstrated outstanding performance across a variety of use cases, including chatbots, machine translation, text generation, and data annotation. Their ability to answer questions with high precision has opened up new avenues for automated systems. Despite their remarkable abilities, LLMs suffer from the selection bias problem that often occurs in answering multiple-choice questions (MCQs). When selecting the answer for an MCQ, many LLMs prefer choices in a given position (e.g., the last choice) or with a specific choice symbol (e.g., (A) or (3)) (Zheng et al., 2024; Wei et al., 2024; Pezeshkpour & Hruschka, 2024). Many previous works have attempted to explain this phenomenon and/or propose diverse ways to mitigate selection bias. While a few works have focused on either modifying the input format (Li et al., 2023b; Robinson et al., 2023) or calibrating the output probabilities (Zheng et al., 2024; Reif & Schwartz, 2024; Wei et al., 2024), to the best of our knowledge, no embedding- or parameter-level investigation has been conducted. Because selection bias originates from internal parameter-level computations, it is crucial to explore how the LLM embeddings contribute to the bias in their output responses. Understanding the internal representation of selection bias can help us combat it. By scrutinizing the interaction between the internal representation and the LLM parameters, we develop a novel approach to debias the model. Specifically, we propose Bias Node Pruning (BNP), which eliminates nodes in the final linear layer that contribute to selection bias. By dropping as few as 32 out of 4096 nodes in the final layer, we can significantly reduce selection bias and improve question-answering performance.

Figure 1: We propose BNP and AOI to reduce selection bias for white-box and black-box models. The CKLD metric is also proposed to encourage a more standardized evaluation.
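To make two of the proposed components concrete, here is a minimal Python sketch that fills in details the text does not spell out: CKLD is computed as the KL divergence between the empirical distribution of gold answer options and the distribution of options the model selects (the direction of the divergence and the smoothing constant are my assumptions), and BNP is realized as zeroing the columns of the final linear layer's weight matrix that correspond to identified bias nodes (how those nodes are scored is not shown). Function names are hypothetical.

```python
import numpy as np
import torch

def ckld(gold_choices, predicted_choices, num_options, eps=1e-9):
    """Choice Kullback-Leibler Divergence (sketch).

    Compares the empirical distribution of gold answer options with the
    distribution of options the model actually selected, so the metric
    stays informative when choice labels are imbalanced. Assumption:
    KL(label distribution || prediction distribution) with additive
    smoothing `eps`.
    """
    gold = np.bincount(gold_choices, minlength=num_options) + eps
    pred = np.bincount(predicted_choices, minlength=num_options) + eps
    p, q = gold / gold.sum(), pred / pred.sum()
    return float(np.sum(p * np.log(p / q)))

def prune_bias_nodes(lm_head: torch.nn.Linear, node_indices):
    """Bias Node Pruning (sketch): zero the hidden-dimension 'nodes' of the
    final linear layer that contribute to selection bias. Identifying which
    of the (e.g., 4096) nodes to drop is outside this sketch."""
    with torch.no_grad():
        lm_head.weight[:, node_indices] = 0.0

# Hypothetical usage:
# gold = [0, 1, 2, 3, 1, 2]; pred = [0, 0, 0, 3, 0, 2]  # model overselects (A)
# print(ckld(gold, pred, num_options=4))
# prune_bias_nodes(model.lm_head, node_indices=[17, 503, 1024])
```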
Annotation Sensitivity: Training Data Collection Methods Affect Model Performance
Kern, Christoph, Eckman, Stephanie, Beck, Jacob, Chew, Rob, Ma, Bolei, Kreuter, Frauke
When training data are collected from human annotators, the design of the annotation instrument, the instructions given to annotators, the characteristics of the annotators, and their interactions can impact the resulting training data. This study demonstrates that design choices made when creating an annotation instrument also impact the models trained on the resulting annotations. We introduce the term annotation sensitivity to refer to the impact of annotation data collection methods on the annotations themselves and on downstream model performance and predictions. We collect annotations of hate speech and offensive language in five experimental conditions of an annotation instrument, randomly assigning annotators to conditions. We then fine-tune BERT models on each of the five resulting datasets and evaluate model performance on a holdout portion of each condition. We find considerable differences between the conditions for 1) the share of hate speech/offensive language annotations, 2) model performance, 3) model predictions, and 4) model learning curves. Our results emphasize the crucial role played by the annotation instrument, which has received little attention in the machine learning literature. We call for additional research into how and why the instrument impacts the annotations, to inform the development of best practices in instrument design.
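As a rough illustration of the per-condition training setup (a sketch under assumptions, not the study's code): suppose the annotations from each experimental condition are in a pandas DataFrame with `text` and `label` columns. Each condition's data is used to fine-tune its own BERT classifier with Hugging Face `transformers`, and each resulting model would then be evaluated on the holdout split of every condition. The label count, hyperparameters, and the `conditions` mapping are placeholders.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

def finetune_condition(train_df, num_labels=3):
    """Fine-tune one BERT classifier on the annotations from one condition."""
    train_ds = Dataset.from_pandas(train_df).map(tokenize, batched=True)
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=num_labels)
    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=train_ds).train()
    return model

# Hypothetical driver: `conditions` maps condition name -> (train_df, test_df).
# models = {name: finetune_condition(train)
#           for name, (train, test) in conditions.items()}
# Each model is then scored on the holdout of every condition to compare
# annotation shares, performance, and predictions across instrument designs.
```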