Argumentative Debates for Transparent Bias Detection [Technical Report]

Ayoobi, Hamed, Potyka, Nico, Rapberger, Anna, Toni, Francesca

arXiv.org Artificial Intelligence

As the use of AI in society grows, addressing emerging biases is essential to prevent systematic discrimination. Several bias detection methods have been proposed, but, with few exceptions, they tend to ignore transparency. Yet interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. We present ABIDE (Argumentative BIas detection by DEbate), a novel framework that transparently structures bias detection as a debate, guided by an underlying argument graph as understood in (formal and computational) argumentation. The arguments concern the success chances of groups in local neighbourhoods and the significance of these neighbourhoods. We evaluate ABIDE experimentally and demonstrate its performance strengths against an argumentative baseline.


Supplemental Material for What Neural Networks Memorize and Why A Proof of Lemma 2.1

Neural Information Processing Systems

We now compute the expected squared error of each of the terms of this estimator. In both cases the squared error is at most 1/4. We implement our algorithms with TensorFlow [1]. Our implementation achieves 73% top-1 accuracy when trained on the full training set. For DenseNet, we halved the batch size and learning rate due to the higher memory load of the architecture.
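The 1/4 bound is consistent with the worst-case variance of an indicator variable. A short derivation, under the assumption (not stated in this excerpt) that each term of the estimator is an unbiased 0/1 indicator estimate of a probability p:

```latex
\mathbb{E}\big[(X - p)^2\big] \;=\; \operatorname{Var}(X) \;=\; p(1-p) \;\le\; \tfrac{1}{4},
\qquad X \sim \mathrm{Bernoulli}(p),
```

since p(1-p) attains its maximum of 1/4 at p = 1/2.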


Contestability in Quantitative Argumentation

Yin, Xiang, Potyka, Nico, Rago, Antonio, Kampik, Timotheus, Toni, Francesca

arXiv.org Artificial Intelligence

Contestable AI requires that AI-driven decisions align with human preferences. While various forms of argumentation have been shown to support contestability, Edge-Weighted Quantitative Bipolar Argumentation Frameworks (EW-QBAFs) have received little attention. In this work, we show how EW-QBAFs can be deployed for this purpose. Specifically, we introduce the contestability problem for EW-QBAFs, which asks how to modify edge weights (e.g., preferences) to achieve a desired strength for a specific argument of interest (i.e., a topic argument). To address this problem, we propose gradient-based relation attribution explanations (G-RAEs), which quantify the sensitivity of the topic argument's strength to changes in individual edge weights, thus providing interpretable guidance for weight adjustments towards contestability. Building on G-RAEs, we develop an iterative algorithm that progressively adjusts the edge weights to attain the desired strength. We evaluate our approach experimentally on synthetic EW-QBAFs that simulate the structural characteristics of personalised recommender systems and multi-layer perceptrons, and demonstrate that it can solve the problem effectively.
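The iterative procedure described in the abstract can be illustrated on a toy EW-QBAF. Everything below is a stand-in, not the paper's method: the semantics (a sigmoid over weighted supporter minus attacker strengths), the names (`topic_strength`, `g_rae`, `contest`), and the use of finite differences in place of analytic gradients are all illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def topic_strength(weights, supp=0.8, att=0.6):
    """Strength of the topic argument under a simple stand-in semantics:
    one supporter (strength 0.8) and one attacker (strength 0.6), whose
    weighted contributions are aggregated and squashed into (0, 1)."""
    w_s, w_a = weights
    return sigmoid(w_s * supp - w_a * att)

def g_rae(weights, eps=1e-5):
    """Finite-difference stand-in for gradient-based relation attribution
    explanations: sensitivity of the topic strength to each edge weight."""
    grads = []
    for i in range(len(weights)):
        hi = list(weights); hi[i] += eps
        lo = list(weights); lo[i] -= eps
        grads.append((topic_strength(hi) - topic_strength(lo)) / (2 * eps))
    return grads

def contest(weights, target, lr=5.0, steps=200, tol=1e-2):
    """Iteratively nudge edge weights along the attributions until the
    topic argument (approximately) reaches the desired strength."""
    w = list(weights)
    for _ in range(steps):
        err = target - topic_strength(w)
        if abs(err) < tol:
            break
        w = [wi + lr * err * gi for wi, gi in zip(w, g_rae(w))]
    return w
```

Starting from weights `[1.0, 1.0]` (topic strength about 0.55), `contest([1.0, 1.0], 0.8)` raises the supporter weight and lowers the attacker weight until the topic strength is within `tol` of 0.8.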


The Role of Social Support and Influencers in Social Media Communities

Su, Junwei, Marbach, Peter

arXiv.org Artificial Intelligence

How can individual agents coordinate their actions to achieve a shared objective in distributed systems? This challenge spans economic, technical, and sociological domains, each confronting scalability, heterogeneity, and conflicts between individual and collective goals. In economic markets, a common currency facilitates coordination, raising the question of whether such mechanisms can be applied in other contexts. This paper explores this idea within social media platforms, where social support (likes, shares, comments) acts as a currency that shapes content production and sharing. We investigate two key questions: (1) Can social support serve as an effective coordination tool, and (2) What role do influencers play in content creation and dissemination? Our formal analysis shows that social support can coordinate user actions similarly to money in economic markets. Influencers serve dual roles, aggregating content and acting as information proxies, guiding content producers in large markets. While imperfections in information lead to a "price of influence" and suboptimal outcomes, this price diminishes as markets grow, improving social welfare. These insights provide a framework for understanding coordination in distributed environments, with applications in both sociological systems and multi-agent AI systems.


Hunting for Discriminatory Proxies in Linear Regression Models

Yeom, Samuel, Datta, Anupam, Fredrikson, Matt

arXiv.org Machine Learning

A machine learning model may exhibit discrimination when used to make decisions involving people. One potential cause for such outcomes is that the model uses a statistical proxy for a protected demographic attribute. In this paper we formulate a definition of proxy use for the setting of linear regression and present algorithms for detecting proxies. Our definition follows recent work on proxies in classification models, and characterizes a model's constituent behavior that: 1) correlates closely with a protected random variable, and 2) is causally influential in the overall behavior of the model. We show that proxies in linear regression models can be efficiently identified by solving a second-order cone program, and further extend this result to account for situations where the use of a certain input variable is justified as a "business necessity". Finally, we present empirical results on two law enforcement datasets that exhibit varying degrees of racial disparity in prediction outcomes, demonstrating that proxies shed useful light on the causes of discriminatory behavior in models.
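The two-part characterisation of a proxy (association with the protected variable, plus causal influence on the output) can be sketched for the simplest case: testing each input term of a linear model separately. This is an illustrative simplification, not the paper's second-order cone program, which searches over linear combinations of inputs; the data, thresholds, and the name `proxy_report` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.integers(0, 2, n).astype(float)    # protected attribute
x1 = z + 0.3 * rng.standard_normal(n)      # feature strongly associated with z
x2 = rng.standard_normal(n)                # feature independent of z
X = np.column_stack([x1, x2])
w = np.array([1.5, 0.7])                   # linear model coefficients

def proxy_report(X, w, z, assoc_thresh=0.5, infl_thresh=0.1):
    """Per-feature stand-in for the proxy test: a term w_i * x_i is flagged
    when it (1) correlates strongly with z (association) and (2) carries a
    nontrivial share of the model output's variance (influence)."""
    terms = X * w                          # per-feature contributions w_i * x_i
    out_var = terms.sum(axis=1).var()
    report = []
    for i in range(X.shape[1]):
        assoc = abs(np.corrcoef(terms[:, i], z)[0, 1])
        infl = terms[:, i].var() / out_var
        report.append((i, assoc, infl,
                       bool(assoc > assoc_thresh and infl > infl_thresh)))
    return report
```

On this synthetic data, the first term is flagged (its correlation with z is roughly 0.86 and it dominates the output variance) while the second is not, despite being influential, because it is uncorrelated with z.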