Gender gap


Gendered Divides in Online Discussions about Reproductive Rights

Rao, Ashwin, Wang, Sze Yuh Nina, Lerman, Kristina

arXiv.org Artificial Intelligence

The U.S. Supreme Court's 2022 ruling in Dobbs v. Jackson Women's Health Organization marked a turning point in the national debate over reproductive rights. While the ideological divide over abortion is well documented, less is known about how gender and local sociopolitical contexts interact to shape public discourse. Drawing on nearly 10 million abortion-related posts on X (formerly Twitter) from users with inferred gender, ideology and location, we show that gender significantly moderates abortion attitudes and emotional expression, particularly in conservative regions, and independently of ideology. This creates a gender gap in abortion attitudes that grows more pronounced in conservative regions. The leak of the Dobbs draft opinion further intensified online engagement, disproportionately mobilizing pro-abortion women in areas where access was under threat. These findings reveal that abortion discourse is not only ideologically polarized but also deeply structured by gender and place, highlighting the central role of identity in shaping political expression during moments of institutional disruption.

Long a flashpoint in cultural and political battles, abortion debates have come to symbolize broader struggles over bodily autonomy, religious freedom, and gender equality. The 2022 Supreme Court ruling in Dobbs v. Jackson Women's Health Organization, which overturned nearly five decades of federal protections for abortion access established by Roe v. Wade, marked a seismic shift. It not only intensified existing partisan divides (1, 2), but also reshaped the legal and political terrain, triggering abrupt policy reversals in many states and catalyzing a realignment in the national debate over reproductive rights. A growing body of research has documented partisan cleavages in public attitudes toward reproductive rights (1, 3-7).
However, less attention has been paid to the way in which gender and sociopolitical environment jointly shape both opinion formation and patterns of public expression. Recent surveys point to a widening gender gap in political orientation, particularly among younger voters. For example, in the 2024 U.S. presidential election, white men predominantly supported President Trump, while white women preferred Vice President Harris (8). Similarly, Gallup polling found a sharp increase in the share of young women identifying as politically liberal and supporting reproductive rights (9). While women consistently report higher support for abortion access, particularly in countries with less restrictive policy environments (10, 11), men, even those who identify as pro-choice, often show less engagement with the issue (11-13). Prior work has also documented gendered modes of engagement in online discourse around reproductive rights (1, 2).


Revisiting gender bias research in bibliometrics: Standardizing methodological variability using Scholarly Data Analysis (SoDA) Cards

Lee, HaeJin, Mishra, Shubhanshu, Mishra, Apratim, You, Zhiwen, Kim, Jinseok, Diesner, Jana

arXiv.org Artificial Intelligence

Gender biases in scholarly metrics remain a persistent concern, despite numerous bibliometric studies exploring their presence and absence across productivity, impact, acknowledgment, and self-citations. However, methodological inconsistencies, particularly in author name disambiguation and gender identification, limit the reliability and comparability of these studies, potentially perpetuating misperceptions and hindering effective interventions. A review of 70 relevant publications over the past 12 years reveals a wide range of approaches, from name-based and manual searches to more algorithmic and gold-standard methods, with no clear consensus on best practices. This variability, compounded by challenges such as accurately disambiguating Asian names and managing unassigned gender labels, underscores the urgent need for standardized and robust methodologies. To address this critical gap, we propose the development and implementation of ``Scholarly Data Analysis (SoDA) Cards." These cards will provide a structured framework for documenting and reporting key methodological choices in scholarly data analysis, including author name disambiguation and gender identification procedures. By promoting transparency and reproducibility, SoDA Cards will facilitate more accurate comparisons and aggregations of research findings, ultimately supporting evidence-informed policymaking and enabling the longitudinal tracking of analytical approaches in the study of gender and other social biases in academia.
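As a rough illustration of what a SoDA Card might record, the sketch below encodes the key methodological choices named in the abstract as a small data structure. The field names and example values are hypothetical, not the published card schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class SoDACard:
    """Minimal, hypothetical sketch of a Scholarly Data Analysis (SoDA) Card:
    a structured record of the methodological choices made in a bibliometric study."""
    dataset: str                      # bibliographic source analyzed
    name_disambiguation: str          # e.g. "name-based", "manual", "algorithmic", "gold-standard"
    gender_identification: str        # e.g. "name-based inference", "self-reported"
    unassigned_gender_handling: str   # how records without a gender label are treated
    asian_name_strategy: str          # special handling for hard-to-disambiguate names

# Example card for an imaginary study
card = SoDACard(
    dataset="OpenAlex 2010-2022 sample",
    name_disambiguation="algorithmic (publisher author IDs)",
    gender_identification="name-based inference with confidence threshold",
    unassigned_gender_handling="excluded and reported separately",
    asian_name_strategy="manual verification of a random subsample",
)
print(asdict(card))
```

Because the card is plain structured data, two studies' cards can be diffed field by field, which is exactly the kind of comparison the abstract argues is currently impossible.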


An Open Data Platform to Advance Gender Equality in STEM in Latin America

Communications of the ACM

Expanding the involvement of women in Science, Technology, Engineering, and Mathematics (STEM) across Latin America is crucial for economic advancement, social equity, and global competitiveness; however, these efforts have proven to be challenging. Women in the region are underrepresented in STEM [10] and even more so in leadership positions [17, 18]. The limited availability of current information and the difficulties associated with obtaining reliable data to mitigate gender disparities create difficulties in implementing policies to reduce the gender gap in STEM. Researchers, organizations, and policymakers working to reduce the gender gap need access to dependable data to understand the root causes of gender disparities, promote evidence-based interventions, and increase accountability and transparency. In the quest for solutions to these challenges, an international research network between Bolivia, Brazil, and Peru, "Equality in Leadership for Latin America STEM" (ELLAS), emerged in 2022 [6].


Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts

Pervez, Naseela, Titus, Alexander J.

arXiv.org Artificial Intelligence

Large language models (LLMs) are increasingly utilized to assist in scientific and academic writing, helping authors enhance the coherence of their articles. Previous studies have highlighted stereotypes and biases present in LLM outputs, emphasizing the need to evaluate these models for their alignment with human narrative styles and potential gender biases. In this study, we assess the alignment of three prominent LLMs - Claude 3 Opus, Mistral AI Large, and Gemini 1.5 Flash - by analyzing their performance on benchmark text-generation tasks for scientific abstracts. We employ the Linguistic Inquiry and Word Count (LIWC) framework to extract lexical, psychological, and social features from the generated texts. Our findings indicate that, while these models generally produce text closely resembling human authored content, variations in stylistic features suggest significant gender biases. This research highlights the importance of developing LLMs that maintain a diversity of writing styles to promote inclusivity in academic discourse.
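The LIWC dictionaries themselves are proprietary, but the feature-extraction step the study relies on can be sketched with stand-in category word lists. The `liwc_style_features` function and the tiny `CATEGORIES` lexicons below are illustrative assumptions, not the real LIWC lexicon.

```python
import re
from collections import Counter

# Tiny stand-in dictionaries (hypothetical; the real LIWC lexicons are proprietary)
CATEGORIES = {
    "social": {"we", "our", "team", "together", "collaborate"},
    "cognitive": {"think", "because", "therefore", "suggest", "indicate"},
}

def liwc_style_features(text: str) -> dict:
    """Return the share of tokens falling in each category, plus the word count."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)  # avoid division by zero on empty input
    feats = {"word_count": len(tokens)}
    for name, words in CATEGORIES.items():
        feats[name] = sum(counts[w] for w in words) / total
    return feats

abstract = "We think our results indicate that the team can collaborate effectively."
print(liwc_style_features(abstract))
```

Comparing the distributions of such features between human-written and LLM-generated abstracts is the basic shape of the analysis; the study does this with the full LIWC framework rather than these toy lexicons.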


Are fairness metric scores enough to assess discrimination biases in machine learning?

Jourdan, Fanny, Risser, Laurent, Loubes, Jean-Michel, Asher, Nicholas

arXiv.org Artificial Intelligence

This paper presents novel experiments shedding light on the shortcomings of current metrics for assessing biases of gender discrimination made by machine learning algorithms on textual data. We focus on the Bios dataset, and our learning task is to predict the occupation of individuals, based on their biography. Such prediction tasks are common in commercial Natural Language Processing (NLP) applications such as automatic job recommendations. We address an important limitation of theoretical discussions dealing with group-wise fairness metrics: they focus on large datasets, although the norm in many industrial NLP applications is to use small to reasonably large linguistic datasets for which the main practical constraint is to get a good prediction accuracy. We then question how reliable different popular measures of bias are when the size of the training set is just sufficient to learn reasonably accurate predictions. Our experiments sample the Bios dataset and learn more than 200 models on different sample sizes. This allows us to statistically study our results and to confirm that common gender bias indices provide diverging and sometimes unreliable results when applied to relatively small training and test samples. This highlights the crucial importance of variance calculations for providing sound results in this field.
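A minimal sketch of the paper's core point, on synthetic data rather than the Bios dataset: a group-wise bias measure (here the gap in true positive rates between two gender groups) is far noisier when estimated on small samples. The data-generating parameters below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def tpr_gap(y_true, y_pred, group):
    """Difference in true positive rates between group 0 and group 1."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return tprs[0] - tprs[1]

# Synthetic population with a built-in TPR gap of +0.05
n = 100_000
group = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)
p_correct = np.where(group == 0, 0.85, 0.80)  # classifier slightly better on group 0
y_pred = np.where(rng.random(n) < p_correct, y_true, 1 - y_true)

# Re-estimate the gap on many small test samples and watch its spread
results = {}
for m in (200, 2000, 20000):
    gaps = []
    for _ in range(200):
        idx = rng.choice(n, size=m, replace=False)
        gaps.append(tpr_gap(y_true[idx], y_pred[idx], group[idx]))
    results[m] = (np.mean(gaps), np.std(gaps))
    print(f"n={m:>6}: mean gap={results[m][0]:+.3f}, std={results[m][1]:.3f}")
```

The standard deviation of the estimated gap shrinks as the sample grows; on the smallest samples a single estimate can easily have the wrong sign, which is the unreliability the paper quantifies.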


The Urgent Call for Inclusive AI - Coruzant Technologies

#artificialintelligence

The new era of artificial intelligence (AI) has arrived, opening up a world of opportunity for significant progress, but also exposing the growing gender gap in digital literacy. UN Women highlights the major gender biases that have been identified in AI tools. In addition, the majority of tools are designed from a male perspective, further excluding women from usability. Interdisciplinary researchers were given access to the development stages of ChatGPT and its successor, GPT-4, and reported that "the solution may represent various societal biases and worldviews that may not be representative of the user's intent or widely shared values". As an entrepreneur, a woman, a mother, and a woman of color, I am committed to promoting inclusivity and equality in the business world.


How Artificial Intelligence can help in bridging the gender gap

#artificialintelligence

Research has repeatedly shown that diverse companies have more effective teams. The challenge is not necessarily rooted in the talent pipeline since there is more diversity in college graduates than ever before.


Bringing the Missing Women Back

Communications of the ACM

The problem of underrepresentation of women studying STEM subjects is well known and is faced by several nations across the world. The field of computer science is no exception to this deteriorating gender ratio, and neither is India. The male-to-female population ratio in India is 1.06, but among students entering engineering institutions it is far more skewed, at 1.79 [1]. In absolute numbers, India produces around 1.5 million engineers from its 6,000 engineering institutions across the country [2]. When it comes to employability, 4.03% of male engineering students are employable by IT product firms, while only 2.54% of females are, and 16.67% of males as against 15.49% of females are employable by IT services organizations. If we shift our focus to the employability of graduates of the top engineering institutions in the country--Indian Institutes of Technology (IITs), National Institutes of Technology (NITs), and other leading engineering educational institutions including International Institutes of Information Technology (IIITs)--employability among fresh graduates in IT product roles increases to 22.67%, and in IT services roles it is 36.29% [1].


Meta researcher using AI to address Wikipedia's gender gap

#artificialintelligence

Meta researcher Angela Fan is employing a novel approach to get Wikipedia to include more biographies of women: She's using AI to write the rough drafts. Why it matters: Only about 20 percent of those profiled on the online encyclopedia are women, and many other groups are underrepresented on the site. How it works: Facebook's parent company is releasing as open source software an AI model that it says can automatically create high-quality biographical articles about important real-world public figures, based on information found on the web. What they're saying: "There is more work to do, but we hope this new system will one day help Wikipedia editors create many thousands of accurate, compelling biography entries for important people who are currently not on the site," Fan said in a blog post. Flashback: Fan began her project as a computer science student at the Université de Lorraine and Inria in France.


Uncovering the Source of Machine Bias

Hu, Xiyang, Huang, Yan, Li, Beibei, Lu, Tian

arXiv.org Machine Learning

We develop a structural econometric model to capture the decision dynamics of human evaluators on an online micro-lending platform, and estimate the model parameters using a real-world dataset. We find that two types of gender bias, preference-based and belief-based, are present in human evaluators' decisions. Both types of bias favor female applicants. Through counterfactual simulations, we quantify the effect of gender bias on loan-granting outcomes and on the welfare of the company and the borrowers. Our results imply that both the preference-based bias and the belief-based bias reduce the company's profits: removing either one increases profits, in both cases by raising the approval probability for borrowers, especially male borrowers, who eventually pay back their loans. For borrowers, eliminating either bias narrows the gender gap in true positive rates in credit risk evaluation. We also train machine learning algorithms on both the real-world data and the data from the counterfactual simulations, and compare the decisions made by those algorithms to see how evaluators' biases are inherited by the algorithms and reflected in machine-based decisions. We find that machine learning algorithms can mitigate both the preference-based bias and the belief-based bias.
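The gender gap in true positive rates that the counterfactuals target can be sketched as follows on synthetic applicants. The approval probabilities are invented for illustration and do not come from the paper's dataset; by construction the simulated evaluator favors female applicants, mirroring the direction of bias the paper reports.

```python
import numpy as np

rng = np.random.default_rng(42)

def tpr_by_group(repaid, approved, gender):
    """Approval rate among applicants who repay, per gender group
    (the true positive rate of the evaluator's decisions)."""
    return {g: approved[(gender == g) & (repaid == 1)].mean()
            for g in np.unique(gender)}

# Synthetic applicants; the simulated evaluator favors female applicants
n = 50_000
gender = rng.choice(np.array(["F", "M"]), size=n)
repaid = rng.integers(0, 2, n)
p_approve = np.where(repaid == 1, 0.7, 0.3) + np.where(gender == "F", 0.1, 0.0)
approved = (rng.random(n) < p_approve).astype(int)

tprs = tpr_by_group(repaid, approved, gender)
gap = tprs["F"] - tprs["M"]
print(tprs, f"TPR gap (F - M) = {gap:.3f}")
```

Removing the gender-dependent term from `p_approve` would drive the gap toward zero, which is the kind of counterfactual comparison the paper runs with its estimated structural model.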