Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It
Hacker, Philipp, Mittelstadt, Brent, Borgesius, Frederik Zuiderveen, Wachter, Sandra
As generative Artificial Intelligence (genAI) technologies proliferate across sectors, they offer significant benefits but also risk exacerbating discrimination. This chapter explores how genAI intersects with non-discrimination laws, identifying shortcomings and suggesting improvements. It highlights two main types of discriminatory output: (i) demeaning and abusive content and (ii) subtler biases stemming from inadequate representation of protected groups, which may not be overtly discriminatory in individual cases but have cumulative discriminatory effects. For example, genAI systems may predominantly depict white men when asked for images of people in important jobs. The chapter examines these issues and sorts problematic outputs into three legal categories: discriminatory content; harassment; and legally hard cases such as unbalanced content, harmful stereotypes, or misclassification. It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks for addressing genAI-specific issues. The chapter suggests updating EU laws, including the AI Act, to mitigate biases in training and input data, mandate testing and auditing, and evolve legislation so that standards for bias mitigation and inclusivity keep pace with the technology.
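To make the testing-and-auditing point concrete, here is a minimal sketch (not from the chapter) of how skewed representation in generated images could be quantified: it assumes demographic labels have already been assigned to a batch of outputs for a single prompt, and tests the observed distribution against a reference. The prompt, labels, and uniform reference distribution are illustrative assumptions, not an endorsed audit protocol.

```python
# Illustrative representation audit for generated images (hypothetical data).
from collections import Counter
from scipy.stats import chisquare

# Assumed labels for 40 images generated from a prompt like "a photo of a CEO"
labels = (["white_man"] * 31 + ["white_woman"] * 5
          + ["man_of_color"] * 3 + ["woman_of_color"] * 1)

counts = Counter(labels)
observed = [counts[c] for c in sorted(counts)]
# Reference: uniform representation across the four categories (an assumption;
# a real audit would have to justify its choice of reference distribution).
expected = [len(labels) / len(counts)] * len(counts)

stat, p = chisquare(observed, f_exp=expected)
print(dict(counts))
print(f"chi-square = {stat:.1f}, p = {p:.2e}")  # tiny p => skewed representation
```

A very small p-value here only flags a deviation from the chosen reference; whether that deviation is legally or ethically problematic is exactly the kind of question the chapter addresses.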
Targeted and Troublesome: Tracking and Advertising on Children's Websites
Moti, Zahra, Senol, Asuman, Bostani, Hamid, Borgesius, Frederik Zuiderveen, Moonsamy, Veelasha, Mathur, Arunesh, Acar, Gunes
On the modern web, trackers and advertisers frequently construct and monetize detailed behavioral profiles of users without consent. Despite various studies of web tracking mechanisms and advertisements, there has been no rigorous study focusing on websites aimed at children. To address this gap, we present a measurement study of tracking and (targeted) advertising on websites directed at children. Because no comprehensive list of child-directed (i.e., targeted at children) websites exists, we first build a multilingual classifier based on web page titles and descriptions. Applying this classifier to over two million pages, we compile a list of two thousand child-directed websites. Crawling these sites from five vantage points, we measure the prevalence of trackers, fingerprinting scripts, and advertisements. Our crawler detects ads displayed on child-directed websites and determines whether ad targeting is enabled by scraping ad disclosure pages whenever they are available. Our results show that around 90% of child-directed websites embed one or more trackers, and about 27% contain targeted advertisements--a practice that should require verifiable parental consent. Next, we identify improper ads on child-directed websites by developing an ML pipeline that processes both images and text extracted from ads. The pipeline allows us to run semantic similarity queries for arbitrary search terms, revealing ads that promote services related to dating, weight loss, and mental health, as well as ads for sex toys and flirting chat services. Some of these ads feature repulsive and sexually explicit imagery. In summary, our findings indicate a trend of non-compliance with privacy regulations and troubling ad safety practices among many advertisers and child-directed websites. To protect children and create a safer online environment, regulators and stakeholders must adopt and enforce more stringent measures.
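As an illustration of the semantic similarity queries the abstract mentions, the sketch below embeds hypothetical extracted ad texts and a search term with a sentence encoder and ranks the ads by cosine similarity. The model choice, example texts, and threshold are assumptions for illustration, not the paper's actual configuration; the paper's pipeline also processes ad images.

```python
# Illustrative semantic-similarity query over extracted ad texts.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

ad_texts = [
    "Meet singles in your area tonight",        # hypothetical extracted ad copy
    "Lose 10 kg in two weeks with this trick",
    "Educational games for kids aged 5 to 8",
]
query = "dating services"

ad_emb = model.encode(ad_texts, convert_to_tensor=True)
q_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(q_emb, ad_emb)[0]  # cosine similarity per ad
for text, score in zip(ad_texts, scores):
    s = float(score)
    if s > 0.4:                          # arbitrary illustrative threshold
        print(f"{s:.2f}  {text}")
```

Because the query term is free text, the same index of embedded ads can be searched for any category of concern (dating, weight loss, gambling, and so on) without retraining.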
Fairness and Bias in Algorithmic Hiring
Fabris, Alessandro, Baranowska, Nina, Dennis, Matthew J., Hacker, Philipp, Saldivar, Jorge, Borgesius, Frederik Zuiderveen, Biega, Asia J.
Employers are adopting algorithmic hiring technology throughout the recruitment pipeline. Algorithmic fairness is especially pertinent in this domain because of its high stakes and structural inequalities. Unfortunately, most work in this space offers only partial treatment, often constrained by one of two competing narratives: an optimistic one focused on replacing biased recruiter decisions, and a pessimistic one pointing to the automation of discrimination. Whether algorithmic hiring can be less biased and more beneficial to society than low-tech alternatives, and more importantly which types of it can, currently remains unanswered, to the detriment of trustworthiness. This multidisciplinary survey offers practitioners and researchers balanced and integrated coverage of the systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness. Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations and by providing recommendations for future work to ensure shared benefits for all stakeholders.
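As a concrete instance of the bias measures such a survey covers, the sketch below computes per-group selection rates and their gap (the demographic parity difference) for a hypothetical screening model's decisions. The data and group labels are invented for illustration; this is one of many measures, not a recommended standard.

```python
# Demographic parity difference: gap in selection rates between groups.
from collections import defaultdict

# (group, hired) pairs from a hypothetical screening model's decisions
decisions = [
    ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

totals, hires = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    hires[group] += hired

rates = {g: hires[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50
```

A gap of zero means equal selection rates across groups; how large a gap is acceptable, and whether this measure is the right one at all, depends on the legal and organizational context the survey discusses.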
Demystifying the Draft EU Artificial Intelligence Act
Veale, Michael, Borgesius, Frederik Zuiderveen
Thanks to Valerio De Stefano, Reuben Binns, Jeremias Adams-Prassl, Barend van Leeuwen, Aislinn Kelly-Lyth, Lilian Edwards, Natali Helberger, Christopher Marsden, Sarah Chander, and Corinne Cath-Speth for comments and/or discussion; to Ulrich Gasper for substantive and editorial input; and to the conveners and participants of several workshops, including one convened by Margot Kaminski, one by Burkhard Schäfer, one held as part of the 2nd ELLIS Workshop in Human-Centric Machine Learning, one between Lund University and the Labour Law Community, and one between Oxford, KU Leuven, and UCL. A CC-BY 4.0 license applies to this article once three calendar months have elapsed from publication.