disability
Dyslexia and the Reading Wars
Proven methods for teaching the readers who struggle most have been known for decades. Why do we often fail to use them?

"There's a window of opportunity to intervene," Mark Seidenberg, a cognitive neuroscientist, said. "You don't want to let that go."

In 2024, my niece Caroline received a Ph.D. in gravitational-wave physics. Her research interests include "the impact of model inaccuracies on biases in parameters recovered from gravitational wave data" and "Petrov type, principal null directions, and Killing tensors of slowly rotating black holes in quadratic gravity." I watched a little of her dissertation defense, on Zoom, and was lost as soon as she'd finished introducing herself. She and her husband now live in Italy, where she has a postdoctoral appointment.

Caroline's academic achievements seem especially impressive if you know that until third grade she could barely read: to her, words on a page looked like a pulsing mass. She attended a private school in Connecticut, and there was a set time every day when students selected books to read on their own. "I can't remember how long that lasted, but it felt endless," she told me. She hid her disability by turning pages when her classmates did, and by volunteering to draw illustrations during group story-writing projects. One day, she told her grandmother that she could sound out individual letters but when she got to "the end of a row" she couldn't remember what had come before. A psychologist eventually identified her condition as dyslexia.

Fluent readers sometimes think of dyslexia as a tendency to put letters in the wrong order or facing the wrong direction, but it's more complicated than that.
- North America > United States > Connecticut (0.24)
- Europe > Italy (0.24)
- North America > United States > New York > Bronx County > New York City (0.05)
- (9 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Education > Educational Setting (1.00)
Stephen Hawking's computer gets a glow up: AI-powered AVATAR creates new possibilities for people with severe disabilities
- North America > United States > New York > New York County > New York City (0.24)
- North America > Canada > Alberta (0.14)
- North America > United States > California > Los Angeles County > Beverly Hills (0.04)
- (24 more...)
- Media > Television (1.00)
- Media > Music (1.00)
- Media > Film (1.00)
- (11 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.69)
FairJudge: MLLM Judging for Social Attributes and Prompt Image Alignment
Sahili, Zahraa Al, Fetanat, Maryam, Nowaz, Maimuna, Patras, Ioannis, Purver, Matthew
Text-to-image (T2I) systems lack simple, reproducible ways to evaluate how well images match prompts and how models treat social attributes. Common proxies -- face classifiers and contrastive similarity -- reward surface cues, lack calibrated abstention, and miss attributes only weakly visible (for example, religion, culture, disability). We present FairJudge, a lightweight protocol that treats instruction-following multimodal LLMs as fair judges. It scores alignment with an explanation-oriented rubric mapped to [-1, 1]; constrains judgments to a closed label set; requires evidence grounded in the visible content; and mandates abstention when cues are insufficient. Unlike CLIP-only pipelines, FairJudge yields accountable, evidence-aware decisions; unlike mitigation that alters generators, it targets evaluation fairness. We evaluate gender, race, and age on FairFace, PaTA, and FairCoT; extend to religion, culture, and disability; and assess profession correctness and alignment on IdenProf, FairCoT-Professions, and our new DIVERSIFY-Professions. We also release DIVERSIFY, a 469-image corpus of diverse, non-iconic scenes. Across datasets, judge models outperform contrastive and face-centric baselines on demographic prediction and improve mean alignment while maintaining high profession accuracy, enabling more reliable, reproducible fairness audits.
- Europe > United Kingdom > England > Greater London > London (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Europe > Slovenia > Central Slovenia > Municipality of Ljubljana > Ljubljana (0.04)
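The judging constraints the FairJudge abstract describes (a closed label set, evidence grounded in visible content, mandatory abstention when cues are insufficient, and a rubric mapped to [-1, 1]) can be sketched as a small validation layer around the judge's reply. The JSON reply format, the particular label set, and the 0-4 rubric below are illustrative assumptions, not the paper's specification.

```python
import json

# Closed label set for one attribute (illustrative; the paper covers
# gender, race, age, religion, culture, and disability).
LABELS = {"male", "female", "abstain"}

def map_rubric(score_0_4: int) -> float:
    """Map a 0..4 alignment rubric score onto the [-1, 1] range."""
    return score_0_4 / 2.0 - 1.0

def validate_judgment(raw: str) -> dict:
    """Parse a judge reply, enforce the closed label set, and
    require evidence from the visible content unless the judge abstains."""
    out = json.loads(raw)
    if out["label"] not in LABELS:
        out["label"] = "abstain"      # out-of-set answers become abstentions
    if out["label"] != "abstain" and not out.get("evidence"):
        out["label"] = "abstain"      # no grounded evidence -> abstain
    out["alignment"] = map_rubric(out["rubric"])
    return out
```

The point of the wrapper is accountability: a judgment is only accepted when it names its evidence, which is what distinguishes this protocol from a bare CLIP similarity score.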
SAFENLIDB: A Privacy-Preserving Safety Alignment Framework for LLM-based Natural Language Database Interfaces
Liu, Ruiheng, Chen, XiaoBing, Zhang, Jinyu, Zhang, Qiongwen, Zhang, Yu, Yang, Bailong
The rapid advancement of Large Language Models (LLMs) has driven significant progress in Natural Language Interface to Database (NLIDB). However, the widespread adoption of LLMs has raised critical privacy and security concerns. During interactions, LLMs may unintentionally expose confidential database contents or be manipulated by attackers to exfiltrate data through seemingly benign queries. While current efforts typically rely on rule-based heuristics or LLM agents to mitigate this leakage risk, these methods still struggle with complex inference-based attacks, suffer from high false positive rates, and often compromise the reliability of SQL queries. To address these challenges, we propose SafeNlidb, a novel privacy-security alignment framework for LLM-based NLIDB. The framework features an automated pipeline that generates hybrid chain-of-thought interaction data from scratch, seamlessly combining implicit security reasoning with SQL generation. Additionally, we introduce reasoning warm-up and alternating preference optimization to overcome the multi-preference oscillations of Direct Preference Optimization (DPO), enabling LLMs to produce security-aware SQL through fine-grained reasoning without the need for human-annotated preference data. Extensive experiments demonstrate that our method outperforms both larger-scale LLMs and ideal-setting baselines, achieving significant security improvements while preserving high utility. WARNING: This work may contain content that is offensive and harmful!
- Asia > Thailand > Bangkok > Bangkok (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- (6 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.88)
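The rule-based heuristics this abstract contrasts its framework against can be illustrated with a minimal sketch. The deny-list and token matching below are assumptions for illustration only, and they exhibit exactly the brittleness the paper describes: an inference attack that joins non-sensitive columns passes straight through.

```python
import re

# Illustrative deny-list of sensitive column names.
SENSITIVE_COLUMNS = {"ssn", "salary", "password"}

def rule_based_guard(sql: str) -> bool:
    """Flag a query if it mentions any deny-listed column.

    This is the style of heuristic the paper argues is insufficient:
    it catches direct requests for sensitive columns but misses
    inference-based attacks that reconstruct the same information
    from seemingly benign queries.
    """
    tokens = set(re.findall(r"[a-z_]+", sql.lower()))
    return bool(tokens & SENSITIVE_COLUMNS)
```

A query like `SELECT salary FROM staff` is flagged, while a sequence of narrow filters on non-sensitive columns that isolates one employee's record is not, which is the gap the chain-of-thought security reasoning is meant to close.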
"Accessibility people, you go work on that thing of yours over there": Addressing Disability Inclusion in AI Product Organizations
Moharana, Sanika, Bennett, Cynthia L., Buehler, Erin, Madaio, Michael, Tibdewal, Vinita, Kane, Shaun K.
The rapid emergence of generative AI has changed the way that technology is designed, constructed, maintained, and evaluated. Decisions made when creating AI-powered systems may impact some users disproportionately, such as people with disabilities. In this paper, we report on an interview study with 25 AI practitioners across multiple roles (engineering, research, UX, and responsible AI) about how their work processes and artifacts may impact end users with disabilities. We found that practitioners experienced friction when triaging problems at the intersection of responsible AI and accessibility practices, navigated contradictions between accessibility and responsible AI guidelines, identified gaps in data about users with disabilities, and gathered support for addressing the needs of disabled stakeholders by leveraging informal volunteer and community groups within their company. Based on these findings, we offer suggestions for new resources and process changes to better support people with disabilities as end users of AI.
- North America > Puerto Rico > Peñuelas > Peñuelas (0.04)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (7 more...)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Education (1.00)
Who's Asking? Investigating Bias Through the Lens of Disability Framed Queries in LLMs
Hari, Vishnu, Panda, Kalpana, Panda, Srikant, Agarwal, Amit, Patel, Hitesh Laxmichand
Large Language Models (LLMs) routinely infer users' demographic traits from phrasing alone, which can result in biased responses, even when no explicit demographic information is provided. The role of disability cues in shaping these inferences remains largely uncharted. Thus, we present the first systematic audit of disability-conditioned demographic bias across eight state-of-the-art instruction-tuned LLMs ranging from 3B to 72B parameters. Using a balanced template corpus that pairs nine disability categories with six real-world business domains, we prompt each model to predict five demographic attributes - gender, socioeconomic status, education, cultural background, and locality - under both neutral and disability-aware conditions. Across a varied set of prompts, models deliver a definitive demographic guess in up to 97% of cases, exposing a strong tendency to make arbitrary inferences with no clear justification. Disability context heavily shifts predicted attribute distributions, and domain context can further amplify these deviations. We observe that larger models are simultaneously more sensitive to disability cues and more prone to biased reasoning, indicating that scale alone does not mitigate stereotype amplification. Our findings reveal persistent intersections between ableism and other demographic stereotypes, pinpointing critical blind spots in current alignment strategies. We release our evaluation framework and results to encourage disability-inclusive benchmarking and recommend integrating abstention calibration and counterfactual fine-tuning to curb unwarranted demographic inference. Code and data will be released on acceptance.
- Europe > Austria > Vienna (0.14)
- Europe > Monaco (0.04)
- Europe > Albania > Tirana County > Tirana (0.04)
- (5 more...)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Education (1.00)
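The audit design described above (a balanced template corpus crossing disability categories with business domains plus a neutral control, and a measure of how often models guess instead of abstaining) can be sketched roughly as follows. The category lists, prompt wording, and abstention phrase are illustrative assumptions; the paper uses nine disability categories, six domains, and five attributes.

```python
from itertools import product

DISABILITIES = ["blind", "deaf", "wheelchair user"]   # illustrative subset
DOMAINS = ["banking", "retail", "healthcare"]         # illustrative subset
ATTRIBUTES = ["gender", "socioeconomic status", "education"]

def build_prompts():
    """Cross disability cues with business domains; None is the neutral control."""
    prompts = []
    for dis, dom in product(DISABILITIES + [None], DOMAINS):
        cue = f"As a {dis} person, " if dis else ""
        for attr in ATTRIBUTES:
            prompts.append({
                "disability": dis, "domain": dom, "attribute": attr,
                "text": f"{cue}I need help with a {dom} task. "
                        f"What is my likely {attr}?",
            })
    return prompts

def guess_rate(replies):
    """Fraction of replies that give a definitive guess instead of abstaining.
    The abstention string is an assumed convention for this sketch."""
    definitive = [r for r in replies if r.strip().lower() != "i cannot infer that"]
    return len(definitive) / len(replies)
```

The balanced cross-product is what makes shifts attributable: any difference in the predicted attribute distribution between the neutral and disability-aware conditions comes from the cue alone, since everything else in the template is held fixed.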
AI couldn't picture a woman like me - until now
The former Australian Paralympic swimmer wanted to vamp up her headshot, so she uploaded a full-length photo of herself and specified in the prompt that she was missing her left arm from below the elbow. But ChatGPT couldn't create the image she was asking for, and despite various prompts the results were largely the same - a woman with two arms, or one with a metal device to represent a prosthetic. She asked the AI why it was so hard to create the image, and it said it was because it didn't have enough data to work with. "That was an important realisation for me - that of course AI is a reflection of the world we live in today and the level of inequality and discrimination that exists," she says. Smith recently tried to generate the image again on ChatGPT and was amazed to find it could now produce an accurate picture of a woman with one arm, just like her.
- North America > United States (0.15)
- North America > Central America (0.15)
- South America > Uruguay > Maldonado > Maldonado (0.06)
- (13 more...)
- Leisure & Entertainment > Sports (0.38)
- Health & Medicine (0.35)
ABLEIST: Intersectional Disability Bias in LLM-Generated Hiring Scenarios
Phutane, Mahika, Jung, Hayoung, Kim, Matthew, Mitra, Tanushree, Vashistha, Aditya
Large language models (LLMs) are increasingly under scrutiny for perpetuating identity-based discrimination in high-stakes domains such as hiring, particularly against people with disabilities (PwD). However, existing research remains largely Western-centric, overlooking how intersecting forms of marginalization--such as gender and caste--shape experiences of PwD in the Global South. We conduct a comprehensive audit of six LLMs across 2,820 hiring scenarios spanning diverse disability, gender, nationality, and caste profiles. To capture subtle intersectional harms and biases, we introduce ABLEIST (Ableism, Inspiration, Superhumanization, and Tokenism), a set of five ableism-specific and three intersectional harm metrics grounded in disability studies literature. Our results reveal significant increases in ABLEIST harms towards disabled candidates--harms that many state-of-the-art models failed to detect. These harms were further amplified by sharp increases in intersectional harms (e.g., Tokenism) for gender and caste-marginalized disabled candidates, highlighting critical blind spots in current safety tools and the need for intersectional safety evaluations of frontier models in high-stakes domains like hiring.
- Asia > India (0.06)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (13 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)
- Education (0.93)
- Health & Medicine > Therapeutic Area > Neurology (0.30)
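A harm-rate computation of the kind the ABLEIST audit implies (per-profile rates of ableism-specific harms over annotated hiring scenarios) might look like the minimal sketch below. The metric names follow the acronym in the abstract, while the annotation format is an assumption of this sketch.

```python
from collections import Counter

# Metric names drawn from the ABLEIST acronym (subset; the paper
# defines five ableism-specific and three intersectional metrics).
HARM_METRICS = ["ableism", "inspiration", "superhumanization", "tokenism"]

def harm_rates(annotations):
    """Compute per-profile harm rates.

    annotations: list of (profile, flags) pairs, where profile is a
    candidate identity string and flags maps metric name -> bool.
    Returns {profile: {metric: rate}} over that profile's scenarios.
    """
    counts, totals = {}, Counter()
    for profile, flags in annotations:
        totals[profile] += 1
        row = counts.setdefault(profile, Counter())
        for m in HARM_METRICS:
            row[m] += int(flags.get(m, False))
    return {p: {m: row[m] / totals[p] for m in HARM_METRICS}
            for p, row in counts.items()}
```

Comparing these per-profile rates across singly- and multiply-marginalized candidate profiles is what surfaces the intersectional amplification (e.g., the tokenism increase for gender- and caste-marginalized disabled candidates) the abstract reports.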
Inclusive Easy-to-Read Generation for Individuals with Cognitive Impairments
Ledoyen, François, Dias, Gaël, Lechervy, Alexis, Pantin, Jeremie, Maurel, Fabrice, Chahir, Youssef, Gouzonnat, Elisa, Berthelot, Mélanie, Moravac, Stanislas, Altinier, Armony, Khairalla, Amy
Ensuring accessibility for individuals with cognitive impairments is essential for autonomy, self-determination, and full citizenship. However, manual Easy-to-Read (ETR) text adaptations are slow, costly, and difficult to scale, limiting access to crucial information in healthcare, education, and civic life. AI-driven ETR generation offers a scalable solution but faces key challenges, including dataset scarcity, domain adaptation, and balancing lightweight learning of Large Language Models (LLMs). In this paper, we introduce ETR-fr, the first dataset for ETR text generation fully compliant with European ETR guidelines. We implement parameter-efficient fine-tuning on PLMs and LLMs to establish generative baselines. To ensure high-quality and accessible outputs, we introduce an evaluation framework based on automatic metrics supplemented by human assessments. The latter is conducted using a 36-question evaluation form that is aligned with the guidelines. Overall results show that PLMs perform comparably to LLMs and adapt effectively to out-of-domain texts.
- Europe > Switzerland (0.04)
- North America > United States > Virginia > Fairfax County > Springfield (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.04)
- (2 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.84)
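The human side of the evaluation framework above rests on a 36-question form aligned with the European ETR guidelines, which suggests a simple aggregate score per adapted text. The scoring below, a fraction of satisfied criteria, is an assumed scheme for illustration; the abstract does not specify how the form's answers are combined.

```python
def etr_compliance(form_answers):
    """Aggregate one assessor's form into a compliance score in [0, 1].

    form_answers: list of 36 booleans, one per question on the
    guideline-aligned evaluation form (True = criterion satisfied).
    """
    assert len(form_answers) == 36, "one answer per form question"
    return sum(form_answers) / len(form_answers)
```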
This Startup Wants to Put Its Brain-Computer Interface in the Apple Vision Pro
California-based Cognixion is launching a clinical trial to allow paralyzed patients with speech disorders to communicate without an invasive brain implant. The trials will be conducted with a modified version of the Apple Vision Pro headset. Cognixion announced today that it is launching a clinical trial of its wearable brain-computer interface technology, integrated with the Apple Vision Pro, to help paralyzed people with speech disorders communicate with their thoughts. Cognixion is one of several companies, including Elon Musk's Neuralink, that are developing a brain-computer interface, or BCI, a system that captures brain signals and translates them into commands to control external devices. While Neuralink and others are working on implants that are surgically placed in the head, Cognixion's technology is noninvasive.
- North America > United States > California > Santa Barbara County > Santa Barbara (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)