Algorithmic Discrimination


Underrepresentation, Label Bias, and Proxies: Towards Data Bias Profiles for the EU AI Act and Beyond

Ceccon, Marina, Cornacchia, Giandomenico, Pezze, Davide Dalle, Fabris, Alessandro, Susto, Gian Antonio

arXiv.org Machine Learning

Undesirable biases encoded in the data are key drivers of algorithmic discrimination. Their importance is widely recognized in the algorithmic fairness literature, as well as legislation and standards on anti-discrimination in AI. Despite this recognition, data biases remain understudied, hindering the development of computational best practices for their detection and mitigation. In this work, we present three common data biases and study their individual and joint effect on algorithmic discrimination across a variety of datasets, models, and fairness measures. We find that underrepresentation of vulnerable populations in training sets is less conducive to discrimination than conventionally affirmed, while combinations of proxies and label bias can be far more critical. Consequently, we develop dedicated mechanisms to detect specific types of bias, and combine them into a preliminary construct we refer to as the Data Bias Profile (DBP). This initial formulation serves as a proof of concept for how different bias signals can be systematically documented. Through a case study with popular fairness datasets, we demonstrate the effectiveness of the DBP in predicting the risk of discriminatory outcomes and the utility of fairness-enhancing interventions. Overall, this article bridges algorithmic fairness research and anti-discrimination policy through a data-centric lens.
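The abstract above refers to fairness measures and to detecting signals such as underrepresentation and label bias. As a hedged illustration only (not the authors' DBP implementation; function names and toy data are invented here), two such signals can be computed from group labels and predictions:

```python
# Illustrative sketch of two simple data-bias signals: the demographic
# parity difference of a model's predictions, and the gap in observed
# positive-label rates between groups, a crude indicator of possible
# label bias. Toy data; not the paper's Data Bias Profile code.

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

def label_base_rate_gap(y_true, group):
    """Gap in observed positive-label rates between groups. A large gap may
    reflect label bias or genuine outcome differences; interpreting it
    requires domain context, as the paper argues."""
    return demographic_parity_diff(y_true, group)

if __name__ == "__main__":
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    y_true = [1, 1, 0, 0, 1, 0, 0, 0]  # base rates: 0.50 vs 0.25
    y_pred = [1, 1, 1, 0, 1, 0, 0, 0]  # predicted-positive rates: 0.75 vs 0.25
    print(demographic_parity_diff(y_pred, group))  # 0.5
    print(label_base_rate_gap(y_true, group))      # 0.25
```

A bias profile in the paper's sense would document several such signals jointly rather than relying on any one of them in isolation.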


Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness

Deck, Luca, Müller, Jan-Laurin, Braun, Conradin, Zipperling, Domenique, Kühl, Niklas

arXiv.org Artificial Intelligence

The topic of fairness in AI, as debated in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) communities, has sparked meaningful discussions in the past years. However, from a legal perspective, particularly from the perspective of European Union law, many open questions remain. Whereas algorithmic fairness aims to mitigate structural inequalities at the design level, European non-discrimination law is tailored to individual cases of discrimination after an AI model has been deployed. The AI Act might present a tremendous step towards bridging these two approaches by shifting non-discrimination responsibilities into the design stage of AI models. Based on an integrative reading of the AI Act, we comment on legal as well as technical enforcement problems and propose practical implications for bias detection and bias correction in order to specify and comply with its technical requirements.


Uncovering Algorithmic Discrimination: An Opportunity to Revisit the Comparator

Alvarez, Jose M., Ruggieri, Salvatore

arXiv.org Artificial Intelligence

Causal reasoning, and counterfactual reasoning in particular, plays a central role in testing for discrimination. Counterfactual reasoning materializes in what is known as the counterfactual model of discrimination: when testing for discrimination, we compare the discrimination complainant with the discrimination comparator, where the comparator is a similar (or similarly situated) profile to that of the complainant, used for testing the complainant's discrimination claim. In this paper, we revisit the comparator by presenting two kinds of comparators based on the sort of causal intervention we want to represent: the ceteris paribus comparator, which is the standard, and the mutatis mutandis comparator, which is a new kind of comparator. We argue for the use of the mutatis mutandis comparator, which is built on the notion of fairness given the difference, for testing future algorithmic discrimination cases.
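The distinction between the two comparators can be sketched in code. This is a hedged illustration under assumed features (the profile fields, attribute values, and adjustment are invented for this example; the paper's formalism is causal, not a dict operation):

```python
# Sketch of the two comparator constructions described in the abstract.
# A profile is a dict of features including a protected attribute.
# Not the authors' formalism; feature names and values are illustrative.

def ceteris_paribus_comparator(complainant, protected="gender"):
    """Flip only the protected attribute; hold everything else equal."""
    comparator = dict(complainant)
    comparator[protected] = "M" if complainant[protected] == "F" else "F"
    return comparator

def mutatis_mutandis_comparator(complainant, protected="gender",
                                adjustments=None):
    """Flip the protected attribute AND propagate the change to features
    that causally depend on it ('fairness given the difference')."""
    comparator = ceteris_paribus_comparator(complainant, protected)
    for feature, value in (adjustments or {}).items():
        comparator[feature] = value
    return comparator

complainant = {"gender": "F", "years_experience": 4, "salary_history": 45000}
cp = ceteris_paribus_comparator(complainant)
# Assumed causal link for illustration: salary history partly reflects a
# gender pay gap, so the counterfactual male profile carries a higher one.
mm = mutatis_mutandis_comparator(complainant,
                                 adjustments={"salary_history": 50000})
```

The ceteris paribus comparator leaves the correlated feature untouched, while the mutatis mutandis comparator adjusts it along with the protected attribute, which is the core of the paper's argument.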


Kamala Harris: Admin has duty to stop AI 'algorithmic discrimination,' ensure benefits 'shared equitably'

FOX News

AI expert Marva Bailer explains how, even though there are currently laws in place, the average person has more access than ever to create deepfakes of celebrities. Vice President Kamala Harris said Monday that it's the Biden administration's "duty" to prevent "algorithmic discrimination" when it comes to the field of artificial intelligence (AI), and to ensure its benefits are "shared equitably" among society. Her continuation of what some have called the administration's effort to make AI "woke" happened during her remarks alongside President Biden at the White House just before he signed an executive order establishing AI standards for private companies. "I believe we have a moral, ethical and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensure that everyone is able to enjoy its benefits. Since we took office, President Biden and I have worked to uphold that duty," Harris told a crowd gathered in the White House's East Room.


Biden administration pushing to make AI woke, adhere to far-left agenda: watchdog

FOX News

The president speaks after meeting with AI experts in an effort to manage its risks. The Biden administration is actively seeking to use artificial intelligence to promote a woke, progressive ideology with left-wing activists leading the effort, according to research from a conservative watchdog group. The American Accountability Foundation conducted research into the administration's plans for AI and is now warning in a memo that top U.S. officials under President Biden are seeking to inject "dangerous ideologies" into AI systems. "Under the guise of fighting 'algorithmic discrimination' and 'harmful bias,' the Biden administration is trying to rig AI to follow the woke left's rules," AAF president Tom Jones told Fox News Digital. "Biden is being advised on technology policy, not by scientists, but by racially obsessed social academics and activists. We've already seen the biggest tech firms in the world, like Google under Eric Schmidt, use their power to push the left's agenda. This would take the tech/woke alliance to a whole new, truly terrifying level."


Visual Analysis of Discrimination in Machine Learning

Wang, Qianwen, Xu, Zhenhua, Chen, Zhutian, Wang, Yong, Liu, Shixia, Qu, Huamin

arXiv.org Artificial Intelligence

The growing use of automated decision-making in critical applications, such as crime prediction and college admission, has raised questions about fairness in machine learning. How can we decide whether different treatments are reasonable or discriminatory? In this paper, we investigate discrimination in machine learning from a visual analytics perspective and propose an interactive visualization tool, DiscriLens, to support a more comprehensive analysis. To reveal detailed information on algorithmic discrimination, DiscriLens identifies a collection of potentially discriminatory itemsets based on causal modeling and classification rules mining. By combining an extended Euler diagram with a matrix-based visualization, we develop a novel set visualization to facilitate the exploration and interpretation of discriminatory itemsets. A user study shows that users can interpret the visually encoded information in DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens provides informative guidance in understanding and reducing algorithmic discrimination.


D.C. wants to lead the fight against AI bias

#artificialintelligence

The document describes five principles that should be incorporated into AI systems to ensure their safety and transparency, limit the impact of algorithmic discrimination, and give users control over data.


FTC Mulls New Artificial Intelligence Regulation

#artificialintelligence

The Federal Trade Commission (FTC) is considering a wide range of options, including new rules and guidelines, to tackle data privacy concerns and algorithmic discrimination. FTC's Chair Lina Khan, in a letter to Senator Richard Blumenthal (D-CT), outlined her goals to "protect Americans from unfair or deceptive practices online" and in particular, Khan said that the FTC is considering rulemaking to address "lax security practices, data privacy abuses and algorithmic decision-making that may result in unlawful discrimination." The FTC's letter comes in response to a letter from several lawmakers, including Senator Blumenthal, who urged the FTC to start a rulemaking process that would "protect consumer privacy, promote civil rights and set clear safeguards on the collection and use of data in the digital economy." "Rulemaking may prove a useful tool to address the breadth of challenges that can result from commercial surveillance and other data practices […] and could establish clear market-wide requirements," Khan wrote. The FTC can resort to its rulemaking authority to address unfair or deceptive practices that occur commonly, instead of relying on actions against individual companies.


Addressing Algorithmic Discrimination

Communications of the ACM

It should no longer be a surprise that algorithms can discriminate. A criminal risk-assessment algorithm is far more likely to erroneously predict a Black defendant will commit a crime in the future than a white defendant [2]. Ad-targeting algorithms promote job opportunities to race- and gender-skewed audiences, showing secretary and supermarket job ads to far more women than men [1]. A hospital's resource-allocation algorithm favored white over Black patients with the same level of medical need [5]. Algorithmic discrimination is particularly troubling when it affects consequential social decisions, such as who gets released from jail, or has access to a loan or health care. Employment is a prime example. Employers are increasingly relying on algorithmic tools to recruit, screen, and select job applicants by making predictions about which candidates will be good employees.
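The risk-assessment disparity described above, where one group is erroneously flagged far more often, is typically quantified as a gap in false-positive rates between groups. A minimal sketch on toy data (function names and numbers are illustrative, not drawn from the cited studies):

```python
# Quantifying error-rate disparity as a per-group false-positive rate.
# Toy data only; not the cited risk-assessment systems or their audits.

def false_positive_rate(y_true, y_pred):
    """Share of truly negative cases that the model predicted positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives

def fpr_by_group(y_true, y_pred, group):
    """False-positive rate computed separately for each group label."""
    rates = {}
    for g in set(group):
        yt = [t for t, grp in zip(y_true, group) if grp == g]
        yp = [p for p, grp in zip(y_pred, group) if grp == g]
        rates[g] = false_positive_rate(yt, yp)
    return rates

if __name__ == "__main__":
    # Eight defendants who did not reoffend (all y_true == 0); the model
    # flags group A at twice the rate of group B.
    y_true = [0] * 8
    y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
    group  = ["A"] * 4 + ["B"] * 4
    print(fpr_by_group(y_true, y_pred, group))  # {'A': 0.5, 'B': 0.25}
```

A large gap between the per-group rates is the kind of disparity the column's examples describe.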


EU: Artificial Intelligence Regulation Threatens Social Safety Net, Warns HRW

#artificialintelligence

The European Union's plan to regulate artificial intelligence is ill-equipped to protect people from flawed algorithms that deprive them of lifesaving benefits and discriminate against vulnerable populations, Human Rights Watch said in a report on the regulation. The European Parliament should amend the regulation to better protect people's rights to social security and an adequate standard of living. The 28-page report, in the form of a question-and-answer document, "How the EU's Flawed Artificial Intelligence Regulation Endangers the Social Safety Net," examines how governments are turning to algorithms to allocate social security support and prevent benefits fraud. Drawing on case studies in Ireland, France, the Netherlands, Austria, Poland, and the United Kingdom, Human Rights Watch found that this trend toward automation can discriminate against people who need social security support, compromise their privacy, and make it harder for them to qualify for government assistance. But the regulation will do little to prevent or rectify these harms.