Predictive Representativity: Uncovering Racial Bias in AI-based Skin Cancer Detection
Andrés Morales-Forero, Lili J. Rueda, Ronald Herrera, Samuel Bassetto, Eric Coatanea
Artificial intelligence (AI) systems increasingly inform medical decision-making, yet concerns about algorithmic bias and inequitable outcomes persist, particularly for historically marginalized populations. This paper introduces the concept of Predictive Representativity (PR), a fairness-auditing framework that shifts the focus from dataset composition to outcome-level equity. Through a case study in dermatology, we evaluated AI-based skin cancer classifiers trained on the widely used HAM10000 dataset and on an independent clinical dataset (the BOSQUE Test set) from Colombia. Our analysis reveals substantial performance disparities by skin phototype, with classifiers consistently underperforming for individuals with darker skin despite proportional sampling in the source data. We argue that representativity must be understood not as a static feature of datasets but as a dynamic, context-sensitive property of model predictions. PR operationalizes this shift by quantifying how reliably models generalize fairness across subpopulations and deployment contexts. We further propose an External Transportability Criterion that formalizes thresholds for fairness generalization. Our findings highlight the ethical imperative for post-hoc fairness auditing, transparency in dataset documentation, and inclusive model validation pipelines. This work offers a scalable tool for diagnosing structural inequities in AI systems, contributing to discussions of equity, interpretability, and data justice, and fostering a critical re-evaluation of fairness in data-driven healthcare.
- South America > Colombia (0.24)
- North America > Canada > Quebec > Montreal (0.04)
- Oceania > Australia (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Skin Cancer (0.71)
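The abstract above does not specify how Predictive Representativity is computed, but the kind of outcome-level audit it describes can be illustrated with a minimal sketch. Assuming binary cancer labels and subgroups keyed by Fitzpatrick-style phototype bands (the record layout, group labels, and disparity ratio here are hypothetical illustrations, not the authors' method), the audit compares per-subgroup sensitivity rather than dataset composition:

```python
from collections import defaultdict

def subgroup_sensitivity(records):
    """Per-subgroup true-positive rate (sensitivity).
    records: iterable of (group, y_true, y_pred) with binary labels."""
    tp = defaultdict(int)   # true positives per subgroup
    pos = defaultdict(int)  # actual positives per subgroup
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

def disparity_ratio(sensitivities):
    """Worst-to-best sensitivity ratio across subgroups (1.0 = parity)."""
    vals = list(sensitivities.values())
    return min(vals) / max(vals)

# Toy audit: a classifier that misses most melanomas on darker phototypes,
# even though both subgroups are equally represented in the data.
records = [
    ("I-II", 1, 1), ("I-II", 1, 1), ("I-II", 1, 1), ("I-II", 1, 0),
    ("V-VI", 1, 1), ("V-VI", 1, 0), ("V-VI", 1, 0), ("V-VI", 1, 0),
]
sens = subgroup_sensitivity(records)   # {"I-II": 0.75, "V-VI": 0.25}
ratio = disparity_ratio(sens)          # 1/3: far from parity
```

The toy data makes the paper's central point concrete: the two phototype groups are sampled in equal proportion, yet the prediction outcomes are sharply unequal, which is exactly the gap an outcome-level audit surfaces and a composition-level audit misses.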
'It doesn't work': Migrants struggle with US immigration app
Tijuana, Mexico – Standing in a common area of the Casa del Migrante shelter in the Mexican border city of Tijuana, Maria taps her phone screen but can't get the app she is using to work. Maria and her family fled their native Haiti to Venezuela years ago. But recent Venezuelan economic and political instability forced them to leave that country, too, and she said they are now hoping to apply for asylum in the United States. But she and her husband and daughter have tried every day for the last month to get a US immigration appointment through the country's new CBP One app -- to no avail. And without a CBP One appointment, the family faces steep consequences should they try to cross the border irregularly, including being deported back to Haiti and barred from entering the US for up to five years.
- North America > United States (1.00)
- North America > Haiti (0.76)
- North America > Mexico (0.60)
- (2 more...)
AI-produced images can't fix diversity issues in dermatology databases
Image databases of skin conditions are notoriously biased towards lighter skin. Rather than wait for the slow process of collecting more images of conditions like cancer or inflammation on darker skin, one group wants to fill in the gaps using artificial intelligence. It's working on an AI program to generate synthetic images of diseases on darker skin -- and using those images for a tool that could help diagnose skin cancer. "Having real images of darker skin is the ultimate solution," says Eman Rezk, a machine learning expert at McMaster University in Canada working on the project. "Until we have that data, we need to find a way to close the gap."
Technology for detecting skin cancer is forging ahead – but not for people of color, apparently
Artificial intelligence has drawn scrutiny for perpetuating the biases of the mostly white tech guys developing it. Much of the criticism has swirled around the facial recognition algorithms used in surveillance technology, shown to have higher error rates for women and BIPOC, per the ACLU, increasing their risk of wrongful arrest and police violence. Now, a new analysis reveals an insidious way that AI can widen racial health disparities, too. Researchers found that the datasets used to train AI programs to detect skin cancer include hardly any images of dark skin, according to a National Cancer Research Institute press release. Simply put, this technology is being optimized for light skin.
Medical photography is failing patients with darker skin
But Jenna Lester, a dermatologist at the University of California San Francisco, was growing frustrated with the poor quality images she'd receive of her dark-skinned patients. It wasn't just a cosmetic issue -- the bad photos meant darker-skinned people weren't getting the same quality of care. So in January, Lester co-authored a paper in the British Journal of Dermatology that gives a step-by-step guide to photographing skin of color accurately in clinical settings. Lester, who herself is Black, said, "I feel like these issues and my life is constantly me saying, 'Hey, what about us?' 'What about these patients?'" Medical photographs are vital to documenting disease in textbooks and journals and training medical students.
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Media > Photography (0.91)
Champion inclusivity: A look inside the insidious world of AI with Albert Myles - The Bossy Bees
Thank you for joining us at The Bossy Bees. I'm sitting down with Albert Myles today to talk about artificial intelligence, or AI. We're excited for all the amazing capabilities this technology will bring, but we're also talking about some of the insidious ways in which it can be applied. Don't forget to check out The Bossy Bees on Patreon for exclusive content on this podcast. Want me to go ahead? My name is Albert Myles, and I am what they call a knowledge program manager in customer content services for a large tech company located in RTP. That's a fancy way of saying that I am responsible for ensuring that the knowledge captured in support, in development, and inside of customer content is transferred to other areas effectively and efficiently. At the end of the day, I tell people I try to help our company learn what it already knows, organize what we already know, and then distribute everything that we know. It's a very, very new program, but I'm having fun getting it launched. And that's where we started together, and you've taken it miles and miles away from where it started. I really dislike you putting that title on yourself, because you do so much more than that; your knowledge goes far beyond it, and it all comes together nicely in your job. I think the reason your program has gone so far is that you bring so much experience, like what we're talking about today. You have such an affinity and inclination for technology that it brings a lot to the table, married to something that you and I are both pretty passionate about, which is diversity, inclusion, and justice.
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.46)
- Health & Medicine > Therapeutic Area > Immunology (0.46)
Artificial Intelligence: Can We Trust Machines to Make Fair Decisions?
But what happens when artificial intelligence is biased? What if it makes mistakes on important decisions -- from who gets a job interview or a mortgage to who gets arrested and how much time they ultimately serve for a crime? "These everyday decisions can greatly affect the trajectories of our lives and increasingly, they're being made not by people, but by machines," said UC Davis computer science professor Ian Davidson. A growing body of research, including Davidson's, indicates that bias in artificial intelligence can lead to biased outcomes, especially for minority populations and women. Facial recognition technologies, for example, have come under increasing scrutiny because they've been shown to better detect white faces than they do the faces of people with darker skin.
Why Representation Matters When Building AI
More and more tech companies have initiatives in place to support Diversity, Equity & Inclusion (DEI) work. But even as Chief Diversity Officers get hired and diversity statements make their way onto company websites, diverse representation in tech is still lagging. This representation deficit, particularly in product and engineering departments, has huge implications. With the current population of software engineers comprising 25% women, 7.3% Latinos and 4.7% Black people, the teams building technology are not adequately representing the people using it. Artificial Intelligence (AI) is an area of computer science that focuses on enabling computers to perform tasks that have traditionally required human intelligence.
Artificial intelligence is making the beauty industry work for everyone
Atima Lui was in primary school when she first learned that "nude" is not universal. Now 30, she still recalls playing with a white friend's makeup and struggling to find colours that complemented her rich skin tone. "I would try to put [her makeup] on and it would just make me look like a clown," says Lui, who is of Sudanese and African-American descent. "I think back to growing up and how my mother barely wore makeup. Now I know it's because makeup just wasn't made for her."
- Africa > Sudan (0.25)
- North America > United States > Kansas > Shawnee County > Topeka (0.05)
- North America > Canada > Ontario > Toronto (0.05)
- Europe > Sweden (0.05)
A Flawed Facial Recognition System Sent This Man to Jail
In January, Detroit police arrested and charged 42-year-old Robert Williams with stealing $4,000 in watches from a retail store 15 months earlier. Taken away in handcuffs in front of his two children, Williams was sent to an interrogation room where police presented him with their evidence: Facial recognition software matched his driver's license photo with surveillance footage from the night of the crime. Williams had an alibi, The New York Times reports, and immediately denied the charges. Police pointed to the image of the suspect from the night of the theft. "I just see a big black guy," he told NPR.
- North America > United States > Michigan (0.08)
- North America > United States > New York (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)