Fairness of Automatic Speech Recognition: Looking Through a Philosophical Lens

Choi, Anna Seo Gyeong, Choi, Hoon

arXiv.org Artificial Intelligence

Automatic Speech Recognition (ASR) systems now mediate countless human-technology interactions, yet research on their fairness implications remains surprisingly limited. This paper examines ASR bias through a philosophical lens, arguing that systematic misrecognition of certain speech varieties constitutes more than a technical limitation -- it represents a form of disrespect that compounds historical injustices against marginalized linguistic communities. We distinguish between morally neutral classification (discriminate₁) and harmful discrimination (discriminate₂), demonstrating how ASR systems can inadvertently transform the former into the latter when they consistently misrecognize non-standard dialects. We identify three unique ethical dimensions of speech technologies that differentiate ASR bias from other algorithmic fairness concerns: the temporal burden placed on speakers of non-standard varieties ("temporal taxation"), the disruption of conversational flow when systems misrecognize speech, and the fundamental connection between speech patterns and personal/cultural identity. These factors create asymmetric power relationships that existing technical fairness metrics fail to capture. The paper analyzes the tension between linguistic standardization and pluralism in ASR development, arguing that current approaches often embed and reinforce problematic language ideologies. We conclude that addressing ASR bias requires more than technical interventions; it demands recognition of diverse speech varieties as legitimate forms of expression worthy of technological accommodation. This philosophical reframing offers new pathways for developing ASR systems that respect linguistic diversity and speaker autonomy.


A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure

Mollema, Warmhold Jan Thomas

arXiv.org Artificial Intelligence

Epistemic injustice related to AI is a growing concern. In relation to machine learning models, epistemic injustice can have a diverse range of sources, ranging from epistemic opacity, the discriminatory automation of testimonial prejudice, and the distortion of human beliefs via generative AI's hallucinations to the exclusion of the global South in global AI governance, the execution of bureaucratic violence via algorithmic systems, and interactions with conversational artificial agents. Based on a proposed general taxonomy of epistemic injustice, this paper first sketches a taxonomy of the types of epistemic injustice in the context of AI, relying on the work of scholars from the fields of philosophy of technology, political philosophy and social epistemology. Secondly, an additional conceptualization of epistemic injustice in the context of AI is provided: generative hermeneutical erasure. I argue that this injustice is the automation of 'epistemicide': the injustice done to epistemic agents in their capacity for collective sense-making, through the suppression of difference in epistemology and conceptualization by LLMs. AI systems' 'view from nowhere' epistemically inferiorizes non-Western epistemologies and thereby contributes to the erosion of their epistemic particulars, gradually contributing to hermeneutical erasure. This work's relevance lies in the proposal of a taxonomy that allows epistemic injustices to be mapped in the AI domain and in the proposal of a novel form of AI-related epistemic injustice.


Review for NeurIPS paper: Robust Optimization for Fairness with Noisy Protected Groups

Neural Information Processing Systems

The ethical reviewers (ERs) were asked to respond to a series of questions. I provide summaries of the questions and responses below. They were also asked to make an accept/reject recommendation.


Epistemic Injustice in Generative AI

Kay, Jackie, Kasirzadeh, Atoosa, Mohamed, Shakir

arXiv.org Artificial Intelligence

While algorithms have traditionally been leveraged to present and organize human-generated content, the advent of generative AI has started to fundamentally shift this paradigm. Generative AI models can now create content - spanning text, imagery, and beyond - that resembles that of authors, journalists, painters, or photographers. In this paper, we take generative AI to be the class of machine learning models trained on massive amounts of data, typically media such as text, images, audio or video, in order to produce representative instances of such media (García-Peñalvo and Vázquez-Ingelmo 2023). While traditional discussions of epistemic injustice have primarily centered on interpersonal human interactions (McKinnon 2017; Tsosie 2012), existing research on algorithmic epistemic injustice has largely been limited to epistemic injustices produced by decision-making and classification algorithms. However, we argue that the distinctive characteristics of generative AI give rise to novel forms of epistemic injustice that necessitate a dedicated analytical framework. To address this, we expand upon the established philosophical discourse on epistemic injustice and introduce an account of "generative algorithmic epistemic injustice."


Epistemological Bias As a Means for the Automated Detection of Injustices in Text

Andrews, Kenya, Chiazor, Lamogha

arXiv.org Artificial Intelligence

Injustice occurs when someone experiences unfair treatment or has their rights violated, often due to the presence of implicit biases and prejudices such as stereotypes. The automated identification of injustice in text has received little attention, due in part to the fact that underlying implicit biases or stereotypes are rarely explicitly stated and that instances often occur unconsciously due to the pervasive nature of prejudice in society. Here, we describe a novel framework that combines the use of a fine-tuned BERT-based bias detection model, two stereotype detection models, and a lexicon-based approach to show that epistemological biases (i.e., words that presuppose, entail, assert, hedge, or boost text to erode or affirm a person's capacity as a knower) can assist with the automatic detection of injustice in text. The news media contains many instances of injustice (i.e., discriminatory narratives), so it serves as our use case here. We conduct and discuss an empirical qualitative research study which shows how the framework can be applied to detect injustices, even at higher volumes of data.
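The ensemble described above can be sketched as a simple voting scheme: classifier scores combined with a lexicon check for epistemological-bias cue words. This is a minimal illustration, not the authors' implementation; the lexicon entries, function names, and score dictionary are all assumptions standing in for the paper's fine-tuned BERT model and stereotype detectors.

```python
# Hypothetical lexicon of epistemological-bias cue words (hedges and
# boosters, per the abstract's definition); a real system would use a
# curated lexicon and model-based detectors.
BIAS_LEXICON = {
    "hedges": {"allegedly", "supposedly", "claims", "so-called"},
    "boosters": {"obviously", "clearly", "undeniably"},
}

def lexicon_flags(sentence: str) -> list[str]:
    """Return the cue categories whose lexicon words appear in the sentence."""
    tokens = {t.strip(".,;:!?\"'()").lower() for t in sentence.split()}
    return [cat for cat, words in BIAS_LEXICON.items() if tokens & words]

def detect_injustice(sentence: str, model_scores: dict[str, float],
                     threshold: float = 0.5) -> bool:
    """Flag a sentence if any model score passes the threshold AND the
    lexicon finds at least one epistemological-bias cue.

    `model_scores` stands in for the outputs of the fine-tuned BERT bias
    model and the two stereotype detectors (the key names are invented).
    """
    model_hit = any(score >= threshold for score in model_scores.values())
    return model_hit and bool(lexicon_flags(sentence))

# Example with made-up scores:
scores = {"bert_bias": 0.81, "stereotype_a": 0.34, "stereotype_b": 0.12}
print(detect_injustice("He allegedly earned his position.", scores))  # True
```

The conjunction of a learned signal and a lexical cue is one plausible way to keep precision high when scanning large news corpora, which matches the paper's stated goal of detection "at higher volumes of data."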


Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic Activity

Balch, Andrew

arXiv.org Artificial Intelligence

Algorithms play an increasingly significant role in our social lives. Unfortunately, they often perpetuate social injustices while doing so. The popular means of addressing these algorithmic injustices has been through algorithmic reformism: fine-tuning the algorithm itself to be more fair, accountable, and transparent. While commendable, the emerging discipline of critical algorithm studies shows that reformist approaches have failed to curtail algorithmic injustice because they ignore the power structure surrounding algorithms. Heeding calls from critical algorithm studies to analyze this power structure, I employ a framework developed by Erik Olin Wright to examine the configuration of power surrounding Algorithmic Activity: the ways in which algorithms are researched, developed, trained, and deployed within society. I argue that the reason Algorithmic Activity is unequal, undemocratic, and unsustainable is that the power structure shaping it is one of economic empowerment rather than social empowerment. For Algorithmic Activity to be socially just, we need to transform this power configuration to empower the people at the other end of an algorithm. To this end, I explore Wright's symbiotic, interstitial, and ruptural transformations in the context of Algorithmic Activity, as well as how they may be applied in a hypothetical research project that uses algorithms to address a social issue. I conclude with my vision for socially just Algorithmic Activity, asking that future work strives to integrate the proposed transformations and develop new mechanisms for social empowerment.


Diversity and Language Technology: How Techno-Linguistic Bias Can Cause Epistemic Injustice

Helm, Paula, Bella, Gábor, Koch, Gertraud, Giunchiglia, Fausto

arXiv.org Artificial Intelligence

It is well known that AI-based language technology -- large language models, machine translation systems, multilingual dictionaries, and corpora -- is currently limited to 2 to 3 percent of the world's most widely spoken and/or financially and politically best supported languages. In response, recent research efforts have sought to extend the reach of AI technology to "underserved languages." In this paper, we show that many of these attempts produce flawed solutions that adhere to a hard-wired representational preference for certain languages, which we call techno-linguistic bias. Techno-linguistic bias is distinct from the well-established phenomenon of linguistic bias as it does not concern the languages represented but rather the design of the technologies. As we show throughout the paper, techno-linguistic bias can result in systems that can only express concepts that are part of the language and culture of dominant powers, unable to correctly represent concepts from other communities. We argue that at the root of this problem lies a systematic tendency of technology developer communities to apply a simplistic understanding of diversity which does not do justice to the more profound differences that languages, and ultimately the communities that speak them, embody. Drawing on the concept of epistemic injustice, we point to the broader sociopolitical consequences of the bias we identify and show how it can lead not only to a disregard for valuable aspects of diversity but also to an under-representation of the needs and diverse worldviews of marginalized language communities.


On With Kara Swisher: Reid Hoffman on Why AI Is Our Co-pilot

#artificialintelligence

Kara Swisher has gotten to know a lot of tech-industry people over the years, and as she explains to producer Nayeema Raza in this episode of On With Kara Swisher, she knows "the difference between jerks and people who really actually do care about something bigger than themselves." Kara wholeheartedly believes LinkedIn co-founder Reid Hoffman falls into the latter camp, even if the two of them don't always agree about the benefits and harms of new technologies such as artificial intelligence. Hoffman is an AI evangelist who is knee-deep in that world (including, until he recently stepped down, being on the board of OpenAI, the nonprofit behind ChatGPT and GPT-4), while Kara looks at the current AI frenzy and sees storm clouds ahead. During her conversation with Hoffman, Kara asks the longtime tech entrepreneur and investor for his thoughts on a range of topics, from the collapse of Silicon Valley Bank to his political advocacy and ongoing fears about Donald Trump. She also grills Hoffman about his seemingly unflinching tech optimism; in the condensed segment below, she asks him to make his best case for several new AI-based technologies as well as explain what does, in fact, worry him about how AI could go wrong. Journalist Kara Swisher brings the news and newsmakers to you twice a week, on Mondays and Thursdays.


Managing the risks of inevitably biased visual artificial intelligence systems

#artificialintelligence

Scientists have long been developing machines that attempt to imitate the human brain. Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that bias is not only reflected in the patterns of language, but also in the image datasets used to train computer vision models. As a result, widely used computer vision models such as iGPT and DALL-E 2 generate new explicit and implicit characterizations and stereotypes that perpetuate existing biases about social groups, which further shape human cognition. Such computer vision models are used in downstream applications for security, surveillance, job candidate assessment, border control, and information retrieval.


Toward Justice in Computer Science through Community, Criticality, and Citizenship

Communications of the ACM

Neither technologies nor societies are neutral, and failing to acknowledge this results, at best, in a narrow view of both. At worst, it leads to technology that reinforces oppressive societal norms. We agree with Alex Hanna, Timnit Gebru, and others who argue that individual harms reflect institutional problems, and thus require institutional and systemic solutions. We believe computer science (CS) as a discipline often promotes itself as objective and neutral. This tendency allows the field to ignore systems of oppression that exist within and because of CS. As scholars in educational psychology, computer science education, and social studies education, we suggest a way forward through institutional change, specifically in the way we teach CS.