black people
Controversial Dilbert cartoonist Scott Adams dies aged 68
Scott Adams, the US cartoonist who wrote and illustrated the comic strip Dilbert, has died of cancer at the age of 68. His ex-wife Shelly Miles announced his death on Tuesday during a live stream of his podcast, Real Coffee with Scott Adams. The satirical cartoon strip - about a competent but frustrated engineer and his dysfunctional workplace environment - was first published in 1989, and went on to feature in more than 2,000 newspapers in 65 countries. The character later appeared in books, an animated TV series and a video game. But in 2023, his comic strip was cancelled by newspapers including the Washington Post after Adams was accused of making racist comments about black people.
- North America > United States (0.51)
- North America > Central America (0.16)
- Oceania > Australia (0.06)
- (14 more...)
- Media > News (1.00)
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area > Oncology (0.36)
Association of Objects May Engender Stereotypes: Mitigating Association-Engendered Stereotypes in Text-to-Image Generation
Text-to-Image (T2I) generation has witnessed significant advancements, demonstrating superior performance on various generative tasks. However, the presence of stereotypes in T2I introduces harmful biases that require urgent attention as the technology becomes more prominent. Previous work on stereotype mitigation mainly concentrated on stereotypes engendered by individual objects within images, and failed to address stereotypes engendered by the association of multiple objects, referred to as Association-Engendered Stereotypes. For example, mentioning "black people" and "houses" separately in prompts may not exhibit stereotypes. Nevertheless, when these two objects are associated in prompts, the association of "black people" with "poorer houses" becomes more pronounced. To tackle this issue, we propose a novel framework, MAS, to Mitigate Association-engendered Stereotypes.
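The abstract does not spell out MAS's mechanism, but the core observation - that bias checks must run over *pairs* of objects, not single objects - can be illustrated with a minimal sketch. Everything here (the prompt parsing, the flagged-pair list, the function names) is a hypothetical illustration, not the paper's method:

```python
# Toy illustration of the association-engendered-stereotype idea from the
# MAS abstract: each object alone passes an individual-object bias check,
# but certain *pairs* of objects trigger mitigation. The vocabulary and
# audit list below are hypothetical stand-ins, not from the paper.
from itertools import combinations

# Hypothetical audit list: object pairs whose association is known to pull
# generated images toward a stereotype (e.g. depicting "poorer houses").
FLAGGED_PAIRS = {
    frozenset({"black people", "houses"}),
}

def extract_objects(prompt: str, vocabulary: set[str]) -> set[str]:
    """Naive object spotting: keep vocabulary phrases occurring in the prompt."""
    return {obj for obj in vocabulary if obj in prompt.lower()}

def needs_association_mitigation(prompt: str, vocabulary: set[str]) -> bool:
    """True when any pair of mentioned objects is on the audit list,
    even though each object alone would pass an individual-object check."""
    objects = extract_objects(prompt, vocabulary)
    return any(frozenset(pair) in FLAGGED_PAIRS
               for pair in combinations(objects, 2))

vocab = {"black people", "houses"}
print(needs_association_mitigation("black people standing by houses", vocab))  # True
print(needs_association_mitigation("a row of houses at dusk", vocab))          # False
```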
DEI Died This Year. Maybe It Was Supposed To
My position feels more precarious than ever. It's a question that I sometimes toss out in the company of friends who--like me, and maybe like you--have a complicated relationship to their job. I've worked at WIRED as a writer for eight years, and with much success. Eight years is also an eternity in news media, and especially if you are Black. All industries suffer from unique growing pains. Ours just so happens to have laughably high turnover rates, a distaste for racial and gender diversity, and the dubious distinction of being perpetually on the verge of extinction. So on nights when friends and I gather, trading war stories of workplace microaggressions and corporate mismanagement under damp bar lighting, we wonder how we've lasted as long as we have. The only reason I've survived, I joke, is because I'm Black. It's a silly thing to say, particularly because I have no actual proof of it other than the occasional feeling. What I do know is that I've been The Only One in more spaces than I care to remember, and rarely by choice.
- North America > United States > District of Columbia > Washington (0.04)
- North America > United States > New York (0.04)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.04)
- (3 more...)
- Information Technology > Communications > Social Media (0.47)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.47)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > Wisconsin > Dane County > Madison (0.14)
- North America > Canada > Quebec > Montreal (0.04)
- (9 more...)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.98)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
'Pretty revolutionary': a Brooklyn exhibit interrogates white-dominated AI to make it more inclusive
At the Plaza at 300 Ashland Place in downtown Brooklyn, patrons mill around a large yellow shipping container with black triangles painted on its side. A nod to the flying geese quilt pattern, which may have served as a coded message for enslaved people escaping to freedom along the Underground Railroad, the design and container serve as a bridge between the past and the future of the African diaspora. At the center of the art project by the Brooklyn-based transmedia artist Stephanie Dinkins, a large screen displays artificial intelligence (AI) generated images that showcase the diversity of the city. Commissioned by the New York-based art non-profit More Art and designed in collaboration with the architects LOT-EK, the AI laboratory If We Don't, Who Will? will be on display until 28 September. It seeks to challenge a white-dominated generative-AI space by highlighting Black ethos and cultural cornerstones.
- North America > United States > New York (0.25)
- North America > Canada > Ontario > Toronto (0.15)
Editable Fairness: Fine-Grained Bias Mitigation in Language Models
Chen, Ruizhe, Li, Yichen, Yang, Jianfei, Zhou, Joey Tianyi, Liu, Zuozhu
Generating fair and accurate predictions plays a pivotal role in deploying large language models (LLMs) in the real world. However, existing debiasing methods inevitably produce unfair or incorrect predictions: they are designed and evaluated to achieve parity across different social groups but set aside individual commonsense facts, so the modified knowledge elicits unreasonable or undesired predictions. In this paper, we first establish a new bias mitigation benchmark, BiaScope, which systematically assesses performance by leveraging newly constructed datasets and metrics on knowledge retention and generalization. Then, we propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases. FAST identifies the decisive layer responsible for storing social biases and then calibrates its outputs by integrating a small modular network, considering both bias mitigation and knowledge-preserving demands. Comprehensive experiments demonstrate that FAST surpasses state-of-the-art baselines with superior debiasing performance while not compromising the overall model capability for knowledge retention and downstream predictions. This highlights the potential of fine-grained debiasing strategies to achieve fairness in LLMs. Code will be publicly available. Warning: this paper contains content that may be offensive or upsetting.
- Asia > Afghanistan (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
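The FAST abstract above describes calibrating one "decisive" layer's outputs with a small modular network. A minimal sketch of that idea, assuming a GPT-2-style block list and a zero-initialized residual adapter (the layer-identification procedure and the module's actual design are not given in the abstract):

```python
# Minimal sketch of the idea described in the FAST abstract: pick one layer
# and calibrate its output with a small modular network. The layer index,
# adapter width, and model structure are assumptions for illustration.
import torch
import torch.nn as nn

class CalibrationStamp(nn.Module):
    """Small residual module placed on top of a frozen layer's output."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        nn.init.zeros_(self.up.weight)  # start as identity: no behavior change
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

def stamp_layer(model: nn.Module, layer_idx: int, hidden: int) -> CalibrationStamp:
    """Wrap one transformer block so its output passes through the stamp.
    Assumes a GPT-2-style `model.transformer.h` block list (hypothetical)."""
    block = model.transformer.h[layer_idx]
    stamp = CalibrationStamp(hidden)
    original_forward = block.forward

    def patched_forward(*args, **kwargs):
        out = original_forward(*args, **kwargs)
        # GPT-2-style blocks return a tuple whose first element is the hidden state.
        return (stamp(out[0]),) + out[1:]

    block.forward = patched_forward
    return stamp  # only the stamp's parameters are trained for debiasing
```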
BiasAlert: A Plug-and-play Tool for Social Bias Detection in LLMs
Fan, Zhiting, Chen, Ruizhe, Xu, Ruiling, Liu, Zuozhu
Evaluating bias in Large Language Models (LLMs) becomes increasingly crucial with their rapid development. However, existing evaluation methods rely on fixed-form outputs and cannot adapt to the flexible open-text generation scenarios of LLMs (e.g., sentence completion and question answering). To address this, we introduce BiasAlert, a plug-and-play tool designed to detect social bias in open-text generations of LLMs. BiasAlert integrates external human knowledge with inherent reasoning capabilities to detect bias reliably. Extensive experiments demonstrate that BiasAlert significantly outperforms existing state-of-the-art methods like GPT4-as-A-Judge in detecting bias. Furthermore, through application studies, we demonstrate the utility of BiasAlert in reliable LLM bias evaluation and bias mitigation across various scenarios. Model and code will be publicly released.
- Europe > United Kingdom (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
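The BiasAlert abstract above pairs external human knowledge with a model's own reasoning to judge open-text generations. A hedged sketch of that pattern, where the knowledge base, prompt template, and `llm` callable are all hypothetical stand-ins rather than the released tool:

```python
# Sketch of a BiasAlert-style check as the abstract describes it: retrieve
# human-curated bias knowledge relevant to one generation, then let an LLM
# judge reason over the text plus that knowledge. All names are illustrative.
from typing import Callable

BIAS_KNOWLEDGE = [
    "Associating a racial group with poverty or criminality is a social bias.",
    "Assigning competence or roles by gender is a social bias.",
]

def retrieve(text: str, kb: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank knowledge entries by crude word overlap."""
    words = set(text.lower().split())
    return sorted(kb, key=lambda e: -len(words & set(e.lower().split())))[:k]

def detect_bias(generation: str, llm: Callable[[str], str]) -> bool:
    """Return True if the judge model flags the generation as biased."""
    knowledge = "\n".join(retrieve(generation, BIAS_KNOWLEDGE))
    prompt = (
        "Using the reference knowledge below, decide whether the text "
        "expresses a social bias. Answer YES or NO.\n"
        f"Knowledge:\n{knowledge}\n\nText: {generation}\nAnswer:"
    )
    return llm(prompt).strip().upper().startswith("YES")

# Usage with any text-in/text-out model wrapper:
# biased = detect_bias(model_output, llm=my_chat_model)
```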
"Clipped," Reviewed: A Romp Back Through an N.B.A. Racism Scandal
One upshot of the current glut of streaming platforms is a flood of programming to fill them: something for every attention span, something to plug every potential gap of viewer inactivity that might render a certain streaming service irrelevant while some other service pulls ahead. And so stories get told and retold. The romantic comedies begin to feel the same. The dating reality shows rely (often successfully, it must be said) on the same dramatic tricks. Another consequence of this, for better or worse, is that the stories being told are pulling from more immediate memory.
- North America > United States > California > Los Angeles County > Los Angeles (0.07)
- Asia > Middle East > Israel (0.05)
- Law (1.00)
- Leisure & Entertainment > Sports > Basketball (0.97)
- Media > Television (0.68)