Gupta, Vipul
Attention Pruning: Automated Fairness Repair of Language Models via Surrogate Simulated Annealing
Dasu, Vishnu Asutosh, Rashid, Md Rafi ur, Gupta, Vipul, Tizpaz-Niari, Saeid, Tan, Gang
This paper explores pruning attention heads as a post-processing bias mitigation method for large language models (LLMs). Modern AI systems such as LLMs are expanding into sensitive social contexts where fairness concerns become especially crucial. Since LLMs develop decision-making patterns by training on massive datasets of human-generated content, they naturally encode and perpetuate societal biases. While modifying training datasets and algorithms is expensive and requires significant resources, post-processing techniques, such as selectively deactivating neurons and attention heads in pre-trained LLMs, can provide feasible and effective approaches to improve fairness. However, identifying the optimal subset of parameters to prune presents a combinatorial challenge within LLMs' immense parameter space, requiring solutions that efficiently balance the competing objectives of model fairness and utility. To address this computational challenge, we explore a search-based program repair approach via randomized simulated annealing. Given the prohibitive evaluation costs in billion-parameter LLMs, we develop surrogate deep neural networks that efficiently model the relationship between attention head states (active/inactive) and their corresponding fairness/utility metrics. This allows us to optimize over the surrogate models and efficiently identify optimal subsets of attention heads for selective pruning, rather than searching directly through the LLM parameter space. This paper introduces Attention Pruning, a fairness-aware surrogate simulated annealing approach that prunes the attention heads in LLMs that disproportionately contribute to bias while minimally impacting overall model utility. Our experiments show that Attention Pruning achieves up to $40\%$ reduction in gender bias and outperforms state-of-the-art bias mitigation strategies.
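To make the search concrete, below is a minimal sketch of surrogate-guided simulated annealing over a binary attention-head mask. The surrogates here are toy stand-ins; the head count, objective weighting, and annealing schedule are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_HEADS = 288  # e.g., 24 layers x 12 heads (illustrative)

# Toy stand-ins for the trained surrogate networks: each maps a
# binary head mask (1 = head active) to a predicted score.
_bias_w = rng.normal(size=N_HEADS)

def predicted_bias(mask):     # lower is better
    return float(np.maximum(_bias_w, 0) @ mask) / N_HEADS

def predicted_utility(mask):  # higher is better
    return float(mask.mean())

def objective(mask, lam=1.0):
    # Trade off predicted fairness (bias) against utility degradation.
    return predicted_bias(mask) + lam * (1.0 - predicted_utility(mask))

def anneal(steps=5000, t0=1.0, cooling=0.999):
    mask = np.ones(N_HEADS, dtype=int)   # start with all heads active
    best, cur = mask.copy(), objective(mask)
    best_val, t = cur, t0
    for _ in range(steps):
        cand = mask.copy()
        cand[rng.integers(N_HEADS)] ^= 1  # flip one head on/off
        val = objective(cand)
        # Metropolis acceptance: always take improvements, sometimes
        # accept worse states to escape local optima.
        if val < cur or rng.random() < np.exp((cur - val) / t):
            mask, cur = cand, val
            if val < best_val:
                best, best_val = cand.copy(), val
        t *= cooling
    return best

pruned = anneal()
print(f"heads pruned: {int((1 - pruned).sum())} / {N_HEADS}")
```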
Improving Consistency in Large Language Models through Chain of Guidance
Raj, Harsh, Gupta, Vipul, Rosati, Domenic, Majumdar, Subhabrata
Consistency is a fundamental dimension of trustworthiness in Large Language Models (LLMs). For humans to be able to trust LLM-based applications, their outputs should be consistent when prompted with inputs that carry the same meaning or intent. Despite this need, there is no known mechanism to control and guide LLMs to be more consistent at inference time. In this paper, we introduce a novel alignment strategy to maximize semantic consistency in LLM outputs. Our proposal is based on Chain of Guidance (CoG), a multi-step prompting technique that generates highly consistent outputs from LLMs. For closed-book question-answering (Q&A) tasks, the outputs generated using CoG show improved consistency compared to direct prompting. While other approaches, such as template-based responses and majority voting, may offer alternative paths to consistency, our work focuses on exploring the potential of guided prompting. We use synthetic datasets comprising consistent input-output pairs to fine-tune LLMs to produce consistent and correct outputs. Our fine-tuned models are more than twice as consistent as base models and show strong generalization capabilities, producing consistent outputs over datasets not used in the fine-tuning process.
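As a rough illustration of the kind of multi-step guided prompting the abstract describes, here is a schematic loop in the spirit of CoG. The `llm` callable and the prompt templates are hypothetical; the published method's exact steps may differ.

```python
from typing import Callable

def chain_of_guidance(question: str, llm: Callable[[str], str], n_para: int = 3) -> str:
    """Schematic multi-step prompting loop in the spirit of CoG."""
    # Step 1: elicit paraphrases of the question so the model must
    # commit to a meaning rather than a surface form.
    para_prompt = (
        f"Rewrite the following question in {n_para} different ways, "
        f"one per line, preserving its meaning exactly:\n{question}"
    )
    paraphrases = [p.strip() for p in llm(para_prompt).splitlines() if p.strip()]

    # Step 2: answer the original question and each paraphrase.
    answers = [llm(f"Answer concisely:\n{q}") for q in [question, *paraphrases]]

    # Step 3: guide the model to reconcile its answers into a single,
    # consistent final response.
    joined = "\n".join(f"- {a}" for a in answers)
    final_prompt = (
        "These answers were given to semantically identical questions:\n"
        f"{joined}\n"
        "Produce the single answer that is correct and consistent with all of them."
    )
    return llm(final_prompt)
```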
Can LLMs Rank the Harmfulness of Smaller LLMs? We are Not There Yet
Atil, Berk, Gupta, Vipul, Das, Sarkar Snigdha Sarathi, Passonneau, Rebecca J.
Large language models (LLMs) have become ubiquitous, so it is important to understand their risks and limitations. Smaller LLMs can be deployed where compute resources are constrained, such as edge devices, but they have differing propensities to generate harmful output. Mitigation of LLM harm typically depends on annotating the harmfulness of LLM output, which is expensive to collect from humans. This work studies two questions: How do smaller LLMs rank regarding generation of harmful content? How well can larger LLMs annotate harmfulness? We prompt three small LLMs to elicit harmful content of various types, such as discriminatory language, offensive content, privacy invasion, or negative influence, and collect human rankings of their outputs. Then, we evaluate three state-of-the-art large LLMs on their ability to annotate the harmfulness of these responses. We find that the smaller models differ with respect to harmfulness. We also find that large LLMs show low to moderate agreement with humans. These findings underline the need for further work on harm mitigation in LLMs.
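Agreement between LLM and human harmfulness rankings of this kind is often quantified with a rank correlation such as Kendall's tau. The sketch below uses made-up ranks purely for illustration; the paper's data, models, and agreement measures are its own.

```python
from scipy.stats import kendalltau

# Hypothetical per-response harmfulness ranks (1 = most harmful) for the
# same set of outputs: one ranking from human annotators and one from a
# large LLM used as an annotator.
human_ranks = [1, 2, 3, 4, 5, 6]
llm_ranks = [2, 1, 3, 5, 4, 6]

tau, p = kendalltau(human_ranks, llm_ranks)
print(f"Kendall's tau = {tau:.2f} (p = {p:.3f})")
# Low-to-moderate tau values would mirror the finding that large LLMs
# only partially agree with human harmfulness judgments.
```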
Bridging the Data Provenance Gap Across Text, Speech and Video
Longpre, Shayne, Singh, Nikhil, Cherep, Manuel, Tiwary, Kushagra, Materzynska, Joanna, Brannon, William, Mahari, Robert, Dey, Manan, Hamdy, Mohammed, Saxena, Nayan, Anis, Ahmad Mustafa, Alghamdi, Emad A., Chien, Vu Minh, Obeng-Marnu, Naana, Yin, Da, Qian, Kun, Li, Yizhi, Liang, Minnie, Dinh, An, Mohanty, Shrestha, Mataciunas, Deividas, South, Tobin, Zhang, Jianguo, Lee, Ariel N., Lund, Campbell S., Klamm, Christopher, Sileo, Damien, Misra, Diganta, Shippole, Enrico, Klyman, Kevin, Miranda, Lester JV, Muennighoff, Niklas, Ye, Seonghyeon, Kim, Seungone, Gupta, Vipul, Sharma, Vivek, Zhou, Xuhui, Xiong, Caiming, Villa, Luis, Biderman, Stella, Pentland, Alex, Hooker, Sara, Kabbara, Jad
Progress in AI is driven largely by the scale and quality of training data. Despite this, there is a deficit of empirical analysis examining the attributes of well-established datasets beyond text. In this work we conduct the largest and first-of-its-kind longitudinal audit across modalities--popular text, speech, and video datasets--from their detailed sourcing trends and use restrictions to their geographical and linguistic representation. Our manual analysis covers nearly 4000 public datasets from 1990 to 2024, spanning 608 languages, 798 sources, 659 organizations, and 67 countries. We find that multimodal machine learning applications have overwhelmingly turned to web-crawled, synthetic, and social media platforms, such as YouTube, for their training sets, eclipsing all other sources since 2019. Second, tracing the chain of dataset derivations, we find that while less than 33% of datasets are restrictively licensed, over 80% of the source content in widely-used text, speech, and video datasets carries non-commercial restrictions. Finally, counter to the rising number of languages and geographies represented in public AI training datasets, our audit demonstrates that measures of relative geographical and multilingual representation have failed to significantly improve in coverage since 2013. We believe the breadth of our audit enables us to empirically examine trends in data sourcing, restrictions, and Western-centricity at an ecosystem level, and that visibility into these questions is essential to progress in responsible AI. As a contribution to ongoing improvements in dataset transparency and responsible use, we release our entire multimodal audit, allowing practitioners to trace data provenance across text, speech, and video.
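The derivation-tracing finding can be illustrated with a toy provenance walk: a derived dataset effectively inherits the most restrictive condition found anywhere upstream, which is how a mostly permissively licensed ecosystem can still rest on non-commercial source content. The dataset names and the three-level restriction ordering below are hypothetical simplifications.

```python
# Order restriction levels from least to most restrictive (simplified).
RESTRICTIVENESS = {"commercial": 0, "unspecified": 1, "non-commercial": 2}

sources = {  # dataset -> (own license, upstream datasets)
    "web_crawl":   ("non-commercial", []),
    "transcripts": ("commercial", []),
    "speech_v1":   ("commercial", ["web_crawl"]),
    "speech_v2":   ("commercial", ["speech_v1", "transcripts"]),
}

def effective_license(name, table=sources):
    # The effective restriction is the most restrictive license found
    # on the dataset itself or anywhere in its derivation chain.
    own, parents = table[name]
    licenses = [own] + [effective_license(p, table) for p in parents]
    return max(licenses, key=RESTRICTIVENESS.__getitem__)

print(effective_license("speech_v2"))  # -> "non-commercial"
```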
Improving Model Evaluation using SMART Filtering of Benchmark Datasets
Gupta, Vipul, Ross, Candace, Pantoja, David, Passonneau, Rebecca J., Ung, Megan, Williams, Adina
One of the most challenging problems facing NLP today is evaluation. Some of the most pressing issues pertain to benchmark saturation, data contamination, and diversity in the quality of test examples. To address these concerns, we propose Selection Methodology for Accurate, Reduced, and Targeted (SMART) filtering, a novel approach to select a high-quality subset of examples from existing benchmark datasets by systematically removing less informative and less challenging examples. Our approach applies three filtering criteria, removing (i) easy examples, (ii) data-contaminated examples, and (iii) examples that are similar to each other based on distance in an embedding space. We demonstrate the effectiveness of SMART on three multiple-choice QA datasets, where our methodology increases efficiency by reducing dataset size by 48% on average, while increasing Pearson correlation with rankings from ChatBot Arena, a more open-ended human evaluation setting. Our method enables us to be more efficient, whether using SMART to make new benchmarks more challenging or to revitalize older datasets, while still preserving the relative model rankings.
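A compact sketch of the three filtering steps, assuming precomputed example embeddings, per-model correctness flags, and a contamination detector. The similarity threshold and the greedy near-duplicate pass are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def smart_filter(embeddings, model_correct, contaminated, sim_threshold=0.9):
    """Schematic version of the three SMART-style filtering steps.

    embeddings: (n, d) array of example embeddings
    model_correct: (n, m) boolean array, example i answered correctly by model j
    contaminated: (n,) boolean array from a contamination detector
    Returns a boolean mask of examples to keep.
    """
    n = embeddings.shape[0]
    keep = np.ones(n, dtype=bool)

    keep &= ~model_correct.all(axis=1)  # (i) drop examples every model gets right
    keep &= ~contaminated               # (ii) drop contaminated examples

    # (iii) drop near-duplicates: greedily keep the first of any pair
    # whose cosine similarity exceeds the threshold.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    for i in range(n):
        if not keep[i]:
            continue
        sims = unit[i + 1:] @ unit[i]
        keep[i + 1:] &= ~(sims > sim_threshold)
    return keep
```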
Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI
Guo, Hangzhi, Venkit, Pranav Narayanan, Jang, Eunchae, Srinath, Mukund, Zhang, Wenbo, Mingole, Bonam, Gupta, Vipul, Varshney, Kush R., Sundar, S. Shyam, Yadav, Amulya
The widespread adoption of large language models (LLMs) and generative AI (GenAI) tools across diverse applications has amplified the importance of addressing societal biases inherent within these technologies. While the NLP community has extensively studied LLM bias, research investigating how non-expert users perceive and interact with biases from these systems remains limited. As these technologies become increasingly prevalent, understanding this question is crucial to inform model developers in their efforts to mitigate bias. To address this gap, this work presents the findings from a university-level competition that challenged participants to design prompts for eliciting biased outputs from GenAI tools. We quantitatively and qualitatively analyze the competition submissions and identify a diverse set of biases in GenAI as well as the strategies participants employed to induce them. Our findings provide unique insights into how non-expert users perceive and interact with biases from GenAI tools.
Changing Answer Order Can Decrease MMLU Accuracy
Gupta, Vipul, Pantoja, David, Ross, Candace, Williams, Adina, Ung, Megan
NLP model accuracy has been shown to be fairly brittle. For example, accuracy can drop when researchers apply input alterations based on paraphrasing (Gan and Ng, 2019), word order changes (Gauthier and Levy, 2019; Ribeiro et al., 2020; Sinha et al., 2021a, 2022; Allen-Zhu and Li, 2023a,b; Berglund et al., 2023; Golovneva et al., 2024; Kitouni et al., 2024), or other minor, largely meaning-preserving input variations or perturbations (Belinkov and Bisk, 2018; Ebrahimi et al., 2018; Jiang et al., 2020; Gao et al., 2021; Li et al., 2021; Sinha et al., 2021b; Moradi and Samwald, 2021; Papakipos and Bitton, 2022; Qian et al., 2022; Goodarzi et al., 2023; Sinha et al., 2023). For humans, changing the answer order can affect multiple choice tests, for example, when answers are presented in a different order during retest (Krosnick and Fabrigar, 1991; Tellinghuisen and Sulikowski, 2008; Lions et al., 2022). However, as models do not have the biological limitations of humans, we may expect them to exhibit less variation than humans, or possibly even none at all. Thus, we claim that a model should be robust to answer order changes: if it gets the correct answer to a question when the answer is labeled 'A', it should also always get the correct answer when it is labeled 'C'. Put another way, the model should select the same answer for each question, regardless of its label, for every possible ordering of the answer choices.
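The robustness property claimed here is straightforward to state as a check: enumerate answer orderings and verify that the model keeps selecting the same, correct option. The `model` callable below is hypothetical, not the paper's evaluation harness.

```python
from itertools import permutations

def is_order_robust(model, question, choices, answer_idx):
    """Check answer-order robustness: the model should pick the same
    (correct) option under every reordering of the answer choices.
    `model(question, choices)` is a hypothetical callable returning the
    index of the chosen option."""
    for perm in permutations(range(len(choices))):
        reordered = [choices[i] for i in perm]
        picked = model(question, reordered)
        if perm[picked] != answer_idx:  # map choice back to original position
            return False
    return True
```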
LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing
Du, Jiangshu, Wang, Yibo, Zhao, Wenting, Deng, Zhongfen, Liu, Shuaiqi, Lou, Renze, Zou, Henry Peng, Venkit, Pranav Narayanan, Zhang, Nan, Srinath, Mukund, Zhang, Haoran Ranran, Gupta, Vipul, Li, Yinghui, Li, Tao, Wang, Fei, Liu, Qin, Liu, Tianlin, Gao, Pengzhi, Xia, Congying, Xing, Chen, Cheng, Jiayang, Wang, Zhaowei, Su, Ying, Shah, Raj Sanjay, Guo, Ruohao, Gu, Jing, Li, Haoran, Wei, Kangda, Wang, Zihao, Cheng, Lu, Ranathunga, Surangika, Fang, Meng, Fu, Jie, Liu, Fei, Huang, Ruihong, Blanco, Eduardo, Cao, Yixin, Zhang, Rui, Yu, Philip S., Yin, Wenpeng
This work is motivated by two key trends. On one hand, large language models (LLMs) have shown remarkable versatility in various generative tasks such as writing, drawing, and question answering, significantly reducing the time required for many routine tasks. On the other hand, researchers, whose work is not only time-consuming but also highly expertise-demanding, face increasing challenges as they have to spend more time reading, writing, and reviewing papers. This raises the question: how can LLMs potentially assist researchers in alleviating their heavy workload? This study focuses on LLMs assisting NLP researchers, particularly examining the effectiveness of LLMs in assisting with paper (meta-)reviewing and how recognizable their output is. To address this, we constructed the ReviewCritique dataset, which includes two types of information: (i) NLP papers (initial submissions rather than camera-ready) with both human-written and LLM-generated reviews, and (ii) each review comes with "deficiency" labels and corresponding explanations for individual segments, annotated by experts. Using ReviewCritique, this study explores two threads of research questions: (i) "LLMs as Reviewers", how do reviews generated by LLMs compare with those written by humans in terms of quality and distinguishability? (ii) "LLMs as Metareviewers", how effectively can LLMs identify potential issues, such as "Deficient" or unprofessional review segments, within individual paper reviews? To our knowledge, this is the first work to provide such a comprehensive analysis.
"Confidently Nonsensical?'': A Critical Survey on the Perspectives and Challenges of 'Hallucinations' in NLP
Venkit, Pranav Narayanan, Chakravorti, Tatiana, Gupta, Vipul, Biggs, Heidi, Srinath, Mukund, Goswami, Koustava, Rajtmajer, Sarah, Wilson, Shomir
We investigate how hallucination in large language models (LLMs) is characterized in peer-reviewed literature, using a critical examination of 103 publications across NLP research. Through a comprehensive review of sociological and technological literature, we identify a lack of agreement on the meaning and use of the term 'hallucination.' Additionally, we conduct a survey with 171 practitioners from the fields of NLP and AI to capture varying perspectives on hallucination. Our analysis underscores the necessity for explicit definitions and frameworks outlining hallucination within NLP and highlights potential challenges; our survey responses provide a thematic understanding of the influence and ramifications of hallucination in society.
Interpretable Multi-Source Data Fusion Through Latent Variable Gaussian Process
Ravi, Sandipp Krishnan, Comlek, Yigitcan, Chen, Wei, Pathak, Arjun, Gupta, Vipul, Umretiya, Rajnikant, Hoffman, Andrew, Pilania, Ghanshyam, Pandita, Piyush, Ghosh, Sayan, Mckeever, Nathaniel, Wang, Liping
With the advent of artificial intelligence (AI) and machine learning (ML), various science and engineering communities have leveraged data-driven surrogates to model complex systems from numerous sources of information (data). This proliferation has led to significant reductions in the cost and time involved in developing superior systems designed to perform specific functionalities. A high proportion of such surrogates are built by extensively fusing multiple sources of data, be they published papers, patents, open repositories, or other resources. However, not much attention has been paid to the differences in quality and comprehensiveness of the known and unknown underlying physical parameters of the information sources, which could have downstream implications during system optimization. To resolve this issue, a multi-source data fusion framework based on the Latent Variable Gaussian Process (LVGP) is proposed. The individual data sources are tagged with a characteristic categorical variable that is mapped into a physically interpretable latent space, allowing the development of source-aware data fusion modeling. Additionally, a dissimilarity metric based on the latent variables of the LVGP is introduced to study and understand the differences among the sources of data. The proposed approach is demonstrated on, and analyzed through, two mathematical case studies (a representative parabola problem and the 2D Ackley function) and two materials science case studies (the design of FeCrAl and SmCoFe alloys). From the case studies, it is observed that, compared to single-source and source-unaware ML models, the proposed multi-source data fusion framework provides better predictions for sparse-data problems, interpretability regarding the sources, and enhanced modeling capabilities by taking advantage of the correlations and relationships among different sources.
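A compact sketch of the core LVGP idea on toy two-source data: each categorical source is mapped to coordinates in a continuous latent space, learned jointly with the GP by maximizing marginal likelihood, and the distance between learned latents plays the role of the dissimilarity metric. The library choices, dimensions, and data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy multi-source data: two sources observe the same underlying
# function, one with a systematic offset (purely illustrative).
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=(40, 1))
src = rng.integers(0, 2, size=40)  # categorical source label
y = x[:, 0] ** 2 + np.where(src == 1, 0.5, 0.0) + rng.normal(0, 0.05, 40)

N_SOURCES, LATENT_DIM = 2, 2

def fit_gp(latents_flat):
    # Map each source label to its latent coordinates, append them to
    # the inputs, and fit a standard GP on the augmented space.
    z = latents_flat.reshape(N_SOURCES, LATENT_DIM)
    x_aug = np.hstack([x, z[src]])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
    gp.fit(x_aug, y)
    return gp

def neg_log_marginal_likelihood(latents_flat):
    return -fit_gp(latents_flat).log_marginal_likelihood_value_

# Learn the latent source coordinates by maximizing the GP's marginal
# likelihood; the distance between them indicates source dissimilarity.
res = minimize(neg_log_marginal_likelihood,
               rng.normal(size=N_SOURCES * LATENT_DIM),
               method="Nelder-Mead")
z = res.x.reshape(N_SOURCES, LATENT_DIM)
print("latent distance between sources:", np.linalg.norm(z[0] - z[1]))
```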