marginalised group
Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property
Bian, Yijun, You, Lei, Sasaki, Yuya, Maeda, Haruka, Igarashi, Akira
The rapid deployment of artificial intelligence (AI) and machine learning (ML) systems in socially consequential domains has raised growing concerns about their trustworthiness, including potential discriminatory behaviours. Research in algorithmic fairness has produced a proliferation of mathematical definitions and metrics, yet persistent misconceptions and limitations, both within and beyond the fairness community, restrict their effectiveness: there is no consensus on what fairness means, prevailing measures are tailored primarily to binary group settings, and intersectional contexts are handled only superficially. Here we critically examine these misconceptions and argue that fairness cannot be reduced to purely technical constraints on models. Through conceptual analysis and empirical illustrations, we also examine the limitations of existing fairness measures, showing their limited applicability to complex real-world scenarios, challenging prevailing views on the incompatibility between accuracy and fairness as well as among fairness measures themselves, and outlining three principles worth considering in the design of fairness measures. We believe these findings will help bridge the gap between technical formalisation and social realities and meet the challenges of real-world AI/ML deployment.
- Europe > Denmark > Capital Region > Copenhagen (0.14)
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Government (0.93)
- Law > Civil Rights & Constitutional Law (0.92)
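To make concrete what the abstract above means by measures "tailored primarily to binary group settings", the following is a minimal, illustrative sketch, not taken from the paper: a demographic-parity gap computed over an arbitrary number of groups. The toy data, group labels, and function name are assumptions for illustration only.

```python
# Minimal illustration (assumed, not from the paper): a demographic-parity
# gap over an arbitrary number of groups, showing one way a metric usually
# stated for two groups can be read in a multi-group setting.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest pairwise difference in positive-prediction rates across groups.

    y_pred : array-like of 0/1 predictions
    group  : array-like of group labels (any values, not just binary)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical toy predictions over three groups rather than the usual two.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
group = ["a", "a", "a", "a", "b", "b", "b", "b", "c", "c", "c", "c"]
print(demographic_parity_gap(y_pred, group))  # 0.5: groups "a"/"c" at 0.75, group "b" at 0.25
```

With more than two groups, a single number like this hides which groups drive the gap, which is one reason binary formulations transfer poorly to intersectional settings.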
Imputation Strategies Under Clinical Presence: Impact on Algorithmic Fairness
Jeanselme, Vincent, De-Arteaga, Maria, Zhang, Zhe, Barrett, Jessica, Tom, Brian
Machine learning risks reinforcing biases present in data, and, as we argue in this work, in what is absent from data. In healthcare, biases have marked medical history, leading to unequal care affecting marginalised groups. Patterns in missing data often reflect these group discrepancies, but the algorithmic fairness implications of group-specific missingness are not well understood. Despite its potential impact, imputation is often an overlooked preprocessing step, with attention placed on the reduction of reconstruction error and overall performance, ignoring how imputation can affect groups differently. Our work studies how imputation choices affect reconstruction errors across groups and algorithmic fairness properties of downstream predictions.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Oceania > New Zealand (0.04)
- North America > United States > Virginia (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (0.92)
- Information Technology > Data Science > Data Mining (0.93)
- Information Technology > Data Science > Data Quality (0.90)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.68)
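As a concrete illustration of the group-specific imputation effects described in the abstract above, the sketch below is not code from the study: it imputes a hypothetical lab value with a pooled mean under group-specific missingness and reports reconstruction error per group. The data, missingness rates, and variable names are assumptions.

```python
# Minimal illustration (assumed, not from the study): pooled mean imputation
# of a hypothetical lab value under group-specific missingness, with the
# reconstruction error (RMSE) on imputed entries reported per group.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Two synthetic groups with different true means; group "B" has more missingness.
df = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "lab": np.concatenate([rng.normal(5.0, 1.0, 100), rng.normal(8.0, 1.0, 100)]),
})
truth = df["lab"].copy()
missing = np.concatenate([rng.random(100) < 0.1, rng.random(100) < 0.4])
df.loc[missing, "lab"] = np.nan

# A single pooled imputation choice that ignores group structure.
imputed = df["lab"].fillna(df["lab"].mean())

# The same imputer reconstructs the two groups unequally well.
for g in ["A", "B"]:
    idx = missing & (df["group"] == g).to_numpy()
    rmse = np.sqrt(((imputed - truth)[idx] ** 2).mean())
    print(g, round(float(rmse), 2))
```

Because the pooled mean is pulled toward the better-observed group, the group with more missing values is reconstructed less accurately, which is the kind of disparity that can then propagate into downstream predictions.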
Grand Theft Auto VI will feature the video game franchise's first playable female protagonist
The highly anticipated next installment of the Grand Theft Auto video game franchise will feature a playable female protagonist for the first time, according to a report. Developer Rockstar Games first announced it was working on GTA VI earlier this year, writing in a February statement that 'active development of the next entry in the Grand Theft Auto series is well under way'. Now a report in Bloomberg reveals that GTA VI will be the first to let players take on the role of a female lead character in its story mode. The woman, who is said to be 'Latina', will reportedly be one of a pair of leading characters in a story influenced by the bank robbers Bonnie and Clyde. People familiar with the game told Bloomberg that developers are being cautious not to 'punch down' by making jokes about marginalised groups, in contrast to previous games.
Fears AI may create sexist bigots as test learns 'toxic stereotypes'
Fears have been raised about the future of artificial intelligence after a robot was found to have learned 'toxic stereotypes' from the internet. The machine showed significant gender and racial biases, including gravitating toward men over women and white people over people of colour during tests by scientists. It also jumped to conclusions about people's jobs after a glance at their faces. 'The robot has learned toxic stereotypes through these flawed neural network models,' said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins' Computational Interaction and Robotics Laboratory in Baltimore, Maryland. 'We're at risk of creating a generation of racist and sexist robots but people and organisations have decided it's OK to create these products without addressing the issues.'
Humanitarian aid guided by satellite data may harm marginalised groups
Satellite data can help policy-makers quickly identify areas of the world in need of aid and development, but research shows it can also contain bias against marginalised groups, potentially compromising policy goals. Machine-learning systems that scan satellite images for indicators of poverty or disaster damage are becoming a popular tool for assessing humanitarian and development needs. But Lukas Kondmann and Xiao Xiang Zhu at the German Aerospace Center in Cologne say little attention is being paid to potential biases built into this data.