surname
Musk's AI firm forced to delete posts after chatbot praises Hitler and makes antisemitic comments
Elon Musk's AI firm has been forced to delete posts after the Grok chatbot praised Hitler and made a string of deeply antisemitic posts. The company xAI said it had removed 'inappropriate' social media posts today following complaints from users. These posts followed Musk's announcement that he was taking measures to ensure the AI bot was more 'politically incorrect'. Over the following days, the AI began repeatedly referring to itself as 'MechaHitler' and said that Hitler would have 'plenty' of solutions to 'restore family values' to America. In a post on X, xAI wrote: 'We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. 'Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. 'xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.'
- North America > United States (0.30)
- Africa (0.15)
- Government (0.71)
- Law Enforcement & Public Safety (0.48)
Musk's AI firm forced to delete posts praising Hitler from Grok chatbot
Elon Musk's artificial intelligence firm xAI has deleted "inappropriate" posts on X after the company's chatbot, Grok, began praising Adolf Hitler, referring to itself as MechaHitler and making antisemitic comments in response to user queries. In some now-deleted posts, it referred to a person with a common Jewish surname as someone who was "celebrating the tragic deaths of white kids" in the Texas floods, calling the children "future fascists". "Classic case of hate dressed as activism – and that surname? Every damn time, as they say," the chatbot commented. In another post it said, "Hitler would have called it out and crushed it."
- North America > United States > Texas (0.26)
- Africa (0.18)
Doctor Who 'Lux' review: Hope can change the world
It's an interesting time to be a long-running science fantasy media property in the streaming TV age. Star Trek is in the grip of an existential crisis as it (wrongly) fears it's too old to be relevant. Star Wars became a battlefield in the culture war and, to duck all future bad-faith criticism, gave us The Rise of Skywalker. And then there's Doctor Who, which is somehow managing to plough a 62-year furrow and still fill it with original ideas. This week the Doctor and Belinda go up against a sentient cartoon holding the patrons of a 1950s cinema hostage.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Algorithmic Inheritance: Surname Bias in AI Decisions Reinforces Intergenerational Inequality
Pataranutaporn, Pat, Powdthavee, Nattavudh, Maes, Pattie
Surnames often convey implicit markers of social status, wealth, and lineage, shaping perceptions in ways that can perpetuate systemic biases and intergenerational inequality. This study is the first of its kind to investigate whether and how surnames influence AI-driven decision-making, focusing on their effects across key areas such as hiring recommendations, leadership appointments, and loan approvals. Using 72,000 evaluations of 600 surnames from the United States and Thailand, two countries with distinct sociohistorical contexts and surname conventions, we classify names into four categories: Rich, Legacy, Normal, and phonetically similar Variant groups. Our findings show that elite surnames consistently increase AI-generated perceptions of power, intelligence, and wealth, which in turn influence AI-driven decisions in high-stakes contexts. Mediation analysis reveals perceived intelligence as a key mechanism through which surname biases influence the AI decision-making process. While providing objective qualifications alongside surnames mitigates most of these biases, it does not eliminate them entirely, especially in contexts where candidate credentials are low. These findings highlight the need for fairness-aware algorithms and robust policy measures to prevent AI systems from reinforcing systemic inequalities tied to surnames, an often-overlooked bias compared to more salient characteristics such as race and gender. Our work calls for a critical reassessment of algorithmic accountability and its broader societal impact, particularly in systems designed to uphold meritocratic principles while counteracting the perpetuation of intergenerational privilege.
- Asia > Thailand (0.29)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Government (1.00)
- Education (1.00)
- Banking & Finance (0.93)
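The mediation analysis the abstract mentions (surname status raises perceived intelligence, which in turn drives the AI's decision) can be sketched with ordinary least squares on synthetic data. Everything below — variable names, effect sizes, sample size — is invented for illustration and is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical stand-in data: elite = 1 for a "Rich"/"Legacy" surname.
elite = rng.integers(0, 2, n).astype(float)
# The mediator: perceived intelligence, partly driven by surname status.
intelligence = 0.5 * elite + rng.normal(0, 1, n)
# The outcome: a hiring score with both direct and mediated surname effects.
hiring = 0.3 * elite + 0.6 * intelligence + rng.normal(0, 1, n)

def ols(cols, y):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

c  = ols([elite], hiring)[1]                 # total effect of surname
a  = ols([elite], intelligence)[1]           # surname -> mediator
b  = ols([elite, intelligence], hiring)[2]   # mediator -> outcome
cp = ols([elite, intelligence], hiring)[1]   # direct effect of surname

# In linear OLS mediation the total effect decomposes exactly:
# total = direct + indirect (a * b).
print(f"total={c:.2f} direct={cp:.2f} indirect={a * b:.2f}")
```

The decomposition `total = direct + indirect` holds exactly for nested linear models on the same sample, which is what makes the product-of-coefficients reading of "perceived intelligence as a key mechanism" well defined.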
Statistical Uncertainty in Word Embeddings: GloVe-V
Vallebueno, Andrea, Handan-Nader, Cassandra, Manning, Christopher D., Ho, Daniel E.
Static word embeddings are ubiquitous in computational social science applications and contribute to practical decision-making in a variety of fields including law and healthcare. However, assessing the statistical uncertainty in downstream conclusions drawn from word embedding statistics has remained challenging. When using only point estimates for embeddings, researchers have no streamlined way of assessing the degree to which their model selection criteria or scientific conclusions are subject to noise due to sparsity in the underlying data used to generate the embeddings. We introduce a method to obtain approximate, easy-to-use, and scalable reconstruction error variance estimates for GloVe (Pennington et al., 2014), one of the most widely used word embedding models, using an analytical approximation to a multivariate normal model. To demonstrate the value of embeddings with variance (GloVe-V), we illustrate how our approach enables principled hypothesis testing in core word embedding tasks, such as comparing the similarity between different word pairs in vector space, assessing the performance of different models, and analyzing the relative degree of ethnic or gender bias in a corpus using different word lists.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Italy > Tuscany > Florence (0.04)
- (16 more...)
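The hypothesis-testing use case the abstract describes — is one word pair reliably more similar than another, given embedding noise? — can be approximated with a Monte Carlo sketch. The words, dimensions, and variances below are invented; GloVe-V itself provides analytical variance estimates rather than sampling, so this only illustrates the idea of propagating uncertainty into cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # embedding dimension (arbitrary for this sketch)

# Hypothetical point estimates for three word vectors.
mu = {w: rng.normal(0, 1, d) for w in ["doctor", "nurse", "engineer"]}
# Per-word diagonal variance, standing in for reconstruction-error
# variance estimates (values made up here).
var = {w: np.full(d, 0.05) for w in mu}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def sim_draws(w1, w2, n=2000):
    """Sample embeddings from N(mu, diag(var)) and push through cosine sim."""
    u = rng.normal(mu[w1], np.sqrt(var[w1]), (n, d))
    v = rng.normal(mu[w2], np.sqrt(var[w2]), (n, d))
    return np.array([cos(a, b) for a, b in zip(u, v)])

# Is "doctor" closer to "nurse" than to "engineer", beyond embedding noise?
diff = sim_draws("doctor", "nurse") - sim_draws("doctor", "engineer")
lo, hi = np.percentile(diff, [2.5, 97.5])
print(f"95% interval for the similarity difference: [{lo:.3f}, {hi:.3f}]")
```

If the interval excludes zero, the similarity ordering is distinguishable from noise under these (assumed) variances; with point estimates alone there is no such check.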
Can LLMs facilitate interpretation of pre-trained language models?
Mousi, Basel, Durrani, Nadir, Dalvi, Fahim
Work done to uncover the knowledge encoded within pre-trained language models relies on annotated corpora or human-in-the-loop methods. However, these approaches are limited in terms of scalability and the scope of interpretation. We propose using a large language model, ChatGPT, as an annotator to enable fine-grained interpretation analysis of pre-trained language models. We discover latent concepts within pre-trained language models by applying agglomerative hierarchical clustering over contextualized representations and then annotate these concepts using ChatGPT. Our findings demonstrate that ChatGPT produces accurate and semantically richer annotations compared to human-annotated concepts. Additionally, we showcase how GPT-based annotations empower interpretation analysis methodologies of which we demonstrate two: probing frameworks and neuron interpretation. To facilitate further exploration and experimentation in the field, we make available a substantial ConceptNet dataset (TCN) comprising 39,000 annotated concepts.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > New York (0.04)
- Europe > Middle East (0.04)
- (18 more...)
- Leisure & Entertainment > Sports (1.00)
- Media (0.93)
- Law Enforcement & Public Safety (0.68)
- Health & Medicine (0.68)
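The pipeline the abstract outlines — cluster contextualized representations into latent concepts, then hand each cluster to an LLM for a label — can be sketched as follows. The representations here are synthetic blobs, not real model activations, and the final annotation step is only mocked up as a prompt string:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Stand-ins for contextualized token representations (random blobs here;
# in the paper these come from a pre-trained model's hidden layers).
tokens = [f"tok{i}" for i in range(60)]
reps = np.vstack([rng.normal(c, 0.3, (20, 16)) for c in (-2, 0, 2)])

# Agglomerative hierarchical clustering over the representations.
Z = linkage(reps, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

# Each latent concept is the token set of one cluster; in the paper the
# token lists are then sent to ChatGPT for a natural-language annotation.
concepts = {c: [t for t, l in zip(tokens, labels) if l == c]
            for c in sorted(set(labels))}
prompt = f"Suggest a label for this concept: {concepts[1][:5]} ..."  # mock query
print({c: len(ts) for c, ts in concepts.items()})
```

Ward linkage with a `maxclust` cut is one common choice for this kind of concept discovery; the paper's exact clustering settings may differ.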
Can We Trust Race Prediction?
In this paper, I train a Bidirectional Long Short-Term Memory (BiLSTM) model on a novel dataset of voter registration data from all 50 US states and create an ensemble that achieves up to 36.8% higher out-of-sample (OOS) F1 scores than the best-performing machine learning models in the literature. Additionally, I construct the most comprehensive database of first and surname distributions in the US in order to improve the coverage and accuracy of Bayesian Improved Surname Geocoding (BISG) and Bayesian Improved Firstname Surname Geocoding (BIFSG). Finally, I provide the first high-quality benchmark dataset in order to fairly compare existing models and aid future model developers.
- North America > United States > Georgia (0.14)
- North America > United States > Florida (0.04)
- North America > United States > Texas (0.04)
- (10 more...)
- Government > Voting & Elections (1.00)
- Banking & Finance (0.93)
- Government > Regional Government > North America Government > United States Government (0.47)
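BISG, mentioned in the abstract above, combines a surname-conditional race distribution with block-level geography via Bayes' rule: P(race | surname, geo) ∝ P(race | surname) · P(geo | race). A minimal sketch with invented toy probabilities (not Census figures):

```python
# P(race | surname), as would come from the Census surname list
# (hypothetical numbers).
p_race_given_surname = {
    "GARCIA": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "asian": 0.02},
}
# P(geo | race): the share of each race's national population living in
# this particular Census block (also hypothetical).
p_geo_given_race = {"white": 0.0001, "black": 0.0002,
                    "hispanic": 0.0009, "asian": 0.0001}

def bisg(surname, p_geo_given_race):
    """Bayes' rule: P(race | surname, geo) ∝ P(race | surname) * P(geo | race)."""
    prior = p_race_given_surname[surname]
    post = {r: prior[r] * p_geo_given_race[r] for r in prior}
    z = sum(post.values())
    return {r: p / z for r, p in post.items()}

post = bisg("GARCIA", p_geo_given_race)
print(max(post, key=post.get))
```

BIFSG extends the same update with a first-name term; the paper's contribution is better input distributions for both, not a change to the rule itself.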
Predicting affinity ties in a surname network
From administrative registers of last names in Santiago, Chile, we create a surname affinity network that encodes socioeconomic data. This network is a multi-relational graph with nodes representing surnames and edges representing the prevalence of interactions between surnames by socioeconomic decile. We model the prediction of links as a knowledge base completion problem, and find that sharing neighbors is highly predictive of the formation of new links. Importantly, we distinguish between grounded neighbors and neighbors in the embedding space, and find that the latter is more predictive of tie formation. The paper discusses the implications of this finding in explaining the high levels of elite endogamy in Santiago.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.46)
- North America > United States (0.04)
- Africa > Senegal > Kolda Region > Kolda (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.94)
- Information Technology > Communications (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
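The "grounded neighbors" baseline the abstract contrasts with embedding-space neighbors is just a shared-neighbor count on the raw graph. A toy sketch on an invented surname graph (the names and edges are illustrative, not from the Chilean registers):

```python
from itertools import combinations

# Toy surname-affinity edges (hypothetical surnames).
edges = {("Larrain", "Errazuriz"), ("Larrain", "Undurraga"),
         ("Errazuriz", "Undurraga"), ("Undurraga", "Perez"),
         ("Perez", "Gonzalez"), ("Gonzalez", "Munoz")}

def neighbors(g, node):
    return {b if a == node else a for a, b in g if node in (a, b)}

def common_neighbor_score(g, u, v):
    """Grounded shared-neighbor count: surnames linked to both u and v."""
    return len(neighbors(g, u) & neighbors(g, v))

# Rank every absent edge by the score; high scores are predicted new ties.
nodes = {n for e in edges for n in e}
non_edges = [(u, v) for u, v in combinations(sorted(nodes), 2)
             if (u, v) not in edges and (v, u) not in edges]
ranked = sorted(non_edges, key=lambda p: -common_neighbor_score(edges, *p))
print(ranked[0])
```

The paper's finding is that neighbors computed in a learned embedding space outrank this grounded count as a predictor; the embedding model itself (a knowledge-base-completion setup) is beyond this sketch.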
Common names in Burkina Faso, West-Africa
Burkina Faso is a multi-cultural and diverse country with a rich history. In this article, we explore how personal names can be interpreted to reflect regional and ethnic affiliation within the country. Then we illustrate how the use of a personal name can affect a black-box Artificial Intelligence – such as OpenAI's DALL-E. This is the first article in our series of blog posts with tag #thisnamedpersondoesnotexist.
- Africa > Burkina Faso (0.66)
- Africa > West Africa (0.40)
- North America > United States (0.16)
- Europe (0.05)
Addressing Census data problems in race imputation via fully Bayesian Improved Surname Geocoding and name supplements
Imai, Kosuke, Olivella, Santiago, Rosenman, Evan T. R.
Predicting an individual's race and ethnicity plays an important role in social science and public health research. Examples include studies of racial disparity in health and voting. Recently, Bayesian Improved Surname Geocoding (BISG), which uses Bayes' rule to combine information from Census surname files with the geocoding of an individual's residence, has emerged as a leading methodology for this prediction task. Unfortunately, BISG suffers from two Census data problems that contribute to unsatisfactory predictive performance for minorities. First, the decennial Census often contains zero counts for minority racial groups in the Census blocks where some members of those groups reside. Second, because the Census surname files only include frequent names, many surnames -- especially those of minorities -- are missing from the list. To address the zero counts problem, we introduce a fully Bayesian Improved Surname Geocoding (fBISG) methodology that accounts for potential measurement error in Census counts by extending the naive Bayesian inference of the BISG methodology to full posterior inference. To address the missing surname problem, we supplement the Census surname data with additional data on last, first, and middle names taken from the voter files of six Southern states where self-reported race is available. Our empirical validation shows that the fBISG methodology and name supplements significantly improve the accuracy of race imputation across all racial groups, and especially for Asians. The proposed methodology, together with additional name data, is available via the open-source software WRU.
- North America > United States > Georgia (0.14)
- North America > United States > North Carolina (0.05)
- North America > United States > South Carolina (0.04)
- (7 more...)
- Government > Voting & Elections (0.69)
- Government > Regional Government (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.86)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.66)
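The zero-counts problem the abstract describes is easy to see concretely: a raw zero cell makes P(geo | race) exactly zero and wipes out the posterior for that group no matter what the surname says. A crude stand-in for fBISG's measurement-error treatment is to smooth the block counts with a small pseudo-count (a symmetric Dirichlet prior); the numbers below are invented, and fBISG's actual model performs full posterior inference rather than this fixed smoothing:

```python
# Toy block-level race counts with a zero cell (hypothetical numbers).
block_counts = {"white": 120, "black": 0, "hispanic": 35, "asian": 10}

# Naive BISG: the zero count for Black residents forces a zero posterior
# for that group in this block, however informative the surname is.
# Smoothing with pseudo-count alpha turns zero cells into small positives.
alpha = 0.5
total = sum(block_counts.values()) + alpha * len(block_counts)
p_block = {r: (c + alpha) / total for r, c in block_counts.items()}

print(p_block)
```

This captures only the direction of the fix; fBISG additionally models how zero counts arise as measurement error and propagates that uncertainty through the Bayes update.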