
Collaborating Authors

 Weber


On a Geometry of Interbrain Networks

Hinrichs, Nicolás, Guzmán, Noah, Weber, Melanie

arXiv.org Artificial Intelligence

Traditional metrics of interbrain synchrony in social neuroscience typically depend on fixed, correlation-based approaches, restricting their explanatory capacity to descriptive observations. Inspired by the successful integration of geometric insights in network science, we propose leveraging discrete geometry to examine the dynamic reconfigurations in neural interactions during social exchanges. Unlike conventional synchrony approaches, our method interprets interbrain connectivity changes through the evolving geometric structures of neural networks. This geometric framework is realized through a pipeline that identifies critical transitions in network connectivity using entropy metrics derived from curvature distributions. By doing so, we significantly enhance the capacity of hyperscanning methodologies to uncover underlying neural mechanisms in interactive social behavior.
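
For illustration, here is a minimal sketch, not the authors' pipeline, of the computation the abstract describes: a discrete curvature per edge, summarized as an entropy, tracked across connectivity snapshots. The simple combinatorial Forman-Ricci curvature, the random graphs standing in for windowed interbrain connectivity, and the transition criterion are all assumptions.

```python
# A minimal sketch, not the authors' pipeline: per-edge Forman-Ricci
# curvature on a sequence of connectivity graphs, summarized by the
# Shannon entropy of each curvature distribution. Uses the simple
# combinatorial curvature F(u, v) = 4 - deg(u) - deg(v).
import numpy as np
import networkx as nx

def forman_curvatures(G: nx.Graph) -> np.ndarray:
    return np.array([4 - G.degree(u) - G.degree(v) for u, v in G.edges()])

def curvature_entropy(curvs: np.ndarray, bins: int = 20) -> float:
    hist, _ = np.histogram(curvs, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy sequence of connectivity snapshots with drifting density.
entropies = [
    curvature_entropy(forman_curvatures(nx.gnp_random_graph(30, 0.10 + 0.05 * t, seed=t)))
    for t in range(10)
]

# Large jumps between consecutive windows flag candidate critical transitions.
print(np.round(np.diff(entropies), 3))
```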


Leveraging Large Language Models for Tacit Knowledge Discovery in Organizational Contexts

Zuin, Gianlucca, Mastelini, Saulo, Loures, Túlio, Veloso, Adriano

arXiv.org Artificial Intelligence

Documenting tacit knowledge in organizations can be a challenging task due to incomplete initial information, difficulty in identifying knowledgeable individuals, the interplay of formal hierarchies and informal networks, and the need to ask the right questions. To address this, we propose an agent-based framework leveraging large language models (LLMs) to iteratively reconstruct dataset descriptions through interactions with employees. Modeling knowledge dissemination as a Susceptible-Infectious (SI) process with waning infectivity, we conduct 864 simulations across various synthetic company structures and different dissemination parameters. Our results show that the agent achieves 94.9% full-knowledge recall, with self-critical feedback scores strongly correlating with external literature critic scores. We analyze how each simulation parameter affects the knowledge retrieval process for the agent. In particular, we find that our approach is able to recover information without needing direct access to the only domain specialist. These findings highlight the agent's ability to navigate organizational complexity and capture fragmented knowledge that would otherwise remain inaccessible.
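
As a rough illustration of the dissemination model named in the abstract, the sketch below simulates a Susceptible-Infectious process with exponentially waning infectivity on a toy organizational graph. The parameter names (beta0, decay) and the balanced-tree "org chart" are assumptions, not details from the paper.

```python
# A rough illustration, not the paper's code: an SI process in which a
# node's infectivity decays exponentially after it acquires the knowledge.
import numpy as np
import networkx as nx

def simulate_si_waning(G, seed_node, beta0=0.4, decay=0.3, steps=50, rng=None):
    rng = rng or np.random.default_rng(0)
    infected_at = {seed_node: 0}  # node -> step at which it learned the knowledge
    for t in range(1, steps):
        newly = []
        for u, t_u in infected_at.items():
            beta = beta0 * np.exp(-decay * (t - t_u))  # infectivity wanes with age
            for v in G.neighbors(u):
                if v not in infected_at and rng.random() < beta:
                    newly.append(v)
        for v in newly:
            infected_at.setdefault(v, t)
    return infected_at

G = nx.balanced_tree(r=3, h=4)  # stand-in for a company hierarchy
reached = simulate_si_waning(G, seed_node=0)
print(f"{len(reached)}/{G.number_of_nodes()} employees reached")
```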


Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models

Idrissi, Marouane Il, Machado, Agathe Fernandes, Charpentier, Arthur

arXiv.org Machine Learning

Cooperative game theory has become a cornerstone of post-hoc interpretability in machine learning, largely through the use of Shapley values. Yet, despite their widespread adoption, Shapley-based methods often rest on axiomatic justifications whose relevance to feature attribution remains debatable. In this paper, we revisit cooperative game theory from an interpretability perspective and argue for a broader and more principled use of its tools. We highlight two general families of efficient allocations, the Weber and Harsanyi sets, that extend beyond Shapley values and offer richer interpretative flexibility. We present an accessible overview of these allocation schemes, clarify the distinction between value functions and aggregation rules, and introduce a three-step blueprint for constructing reliable and theoretically grounded feature attributions. Our goal is to move beyond fixed axioms and provide the XAI community with a coherent framework to design attribution methods that are both meaningful and robust to shifting methodological trends.
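
To make the distinction concrete, the toy sketch below enumerates marginal-contribution vectors for a 3-player game: each ordering of the players yields one such vector (these generate the Weber set), and averaging uniformly over all orderings recovers the Shapley value. The game's worths are invented for illustration.

```python
# Toy 3-player cooperative game with invented worths.
from itertools import permutations

v = {frozenset(): 0, frozenset({1}): 10, frozenset({2}): 20, frozenset({3}): 30,
     frozenset({1, 2}): 40, frozenset({1, 3}): 50, frozenset({2, 3}): 60,
     frozenset({1, 2, 3}): 90}
players = [1, 2, 3]

def marginal_vector(order):
    """Pay each player their marginal contribution when joining in `order`."""
    phi, coalition = {}, frozenset()
    for p in order:
        phi[p] = v[coalition | {p}] - v[coalition]
        coalition |= {p}
    return phi

vectors = [marginal_vector(o) for o in permutations(players)]
shapley = {p: sum(m[p] for m in vectors) / len(vectors) for p in players}

print("one Weber-set vertex:", vectors[0])      # a single ordering
print("Shapley value (uniform average):", shapley)
```

Non-uniform distributions over orderings yield the other Weber-set allocations; distributing Harsanyi dividends across coalition members instead characterizes the Harsanyi set.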


DISCO: Internal Evaluation of Density-Based Clustering

Beer, Anna, Krieger, Lena, Weber, Pascal, Ritzert, Martin, Assent, Ira, Plant, Claudia

arXiv.org Machine Learning

In density-based clustering, clusters are areas of high object density separated by lower object density areas. This notion supports arbitrarily shaped clusters and automatic detection of noise points that do not belong to any cluster. However, it is challenging to adequately evaluate the quality of density-based clustering results. Even though some existing cluster validity indices (CVIs) target arbitrarily shaped clusters, none of them captures the quality of the labeled noise. In this paper, we propose DISCO, a Density-based Internal Score for Clustering Outcomes, which is the first CVI that also evaluates the quality of noise labels. DISCO reliably evaluates density-based clusters of arbitrary shape by assessing compactness and separation. It also introduces a direct assessment of noise labels for any given clustering. Our experiments show that DISCO evaluates density-based clusterings more consistently than its competitors. It is additionally the first method to evaluate the complete labeling of density-based clustering methods, including noise labels.
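
The abstract does not give DISCO's formula, but the general idea of jointly scoring clusters and noise labels can be sketched: reward clusters that occupy high-density regions and noise labels that fall in low-density regions. The density proxy (inverse mean kNN distance), the choice of k, and the combination rule below are assumptions for illustration only.

```python
# A hedged illustration of the general idea, not the DISCO formula.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
from sklearn.neighbors import NearestNeighbors

def knn_density(X, k=5):
    d, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return 1.0 / (d[:, 1:].mean(axis=1) + 1e-12)  # drop self-distance column

def toy_noise_aware_score(X, labels, k=5):
    dens = knn_density(X, k)
    clusters = [c for c in np.unique(labels) if c != -1]
    compact = np.mean([dens[labels == c].mean() for c in clusters])
    noise = dens[labels == -1]
    noise_term = 1.0 if noise.size == 0 else compact / (compact + noise.mean())
    return compact, noise_term  # higher is better for both

X, _ = make_moons(n_samples=300, noise=0.08, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(toy_noise_aware_score(X, labels))
```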


SHADE: Deep Density-based Clustering

Beer, Anna, Weber, Pascal, Miklautz, Lukas, Leiber, Collin, Durani, Walid, Böhm, Christian, Plant, Claudia

arXiv.org Artificial Intelligence

Detecting arbitrarily shaped clusters in high-dimensional noisy data is challenging for current clustering methods. We introduce SHADE (Structure-preserving High-dimensional Analysis with Density-based Exploration), the first deep clustering algorithm that incorporates density-connectivity into its loss function. Similar to existing deep clustering algorithms, SHADE supports high-dimensional and large data sets with the expressive power of a deep autoencoder. In contrast to most existing deep clustering methods that rely on a centroid-based clustering objective, SHADE incorporates a novel loss function that captures density-connectivity. SHADE thereby learns a representation that enhances the separation of density-connected clusters. SHADE detects a stable clustering and noise points fully automatically without any user input. It outperforms existing methods in clustering quality, especially on data that contain non-Gaussian clusters, such as video data. Moreover, the embedded space of SHADE is suitable for visualization and interpretation of the clustering results as the individual shapes of the clusters are preserved.
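
A heavily simplified PyTorch sketch of the general recipe, not SHADE itself: train an autoencoder whose loss adds a term pulling together embeddings of density-connected pairs. Here mutual-kNN adjacency in input space stands in for SHADE's density-connectivity estimate, and the 0.1 weighting is arbitrary.

```python
# Autoencoder with a density-connectivity-style auxiliary loss (sketch).
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import kneighbors_graph

X = torch.randn(500, 32)                         # toy data
knn = kneighbors_graph(X.numpy(), 10, mode="connectivity")
mutual = knn.multiply(knn.T)                     # mutual kNN ~ density link
pairs = torch.tensor(np.array(mutual.nonzero()).T)

enc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
dec = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

for epoch in range(50):
    z = enc(X)
    recon = ((dec(z) - X) ** 2).mean()           # reconstruction term
    zi, zj = z[pairs[:, 0]], z[pairs[:, 1]]
    connect = ((zi - zj) ** 2).sum(dim=1).mean() # keep connected pairs close
    loss = recon + 0.1 * connect
    opt.zero_grad(); loss.backward(); opt.step()
```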


Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization

Mokander, Jakob, Schroeder, Ralph

arXiv.org Artificial Intelligence

The use of artificial intelligence (AI) in the public sector is best understood as a continuation and intensification of long-standing rationalization and bureaucratization processes. Drawing on Weber, we take the core of these processes to be the replacement of traditions with instrumental rationality, i.e., the most calculable and efficient way of achieving any given policy objective. In this article, we demonstrate how much of the criticism directed towards AI systems, both among the public and in scholarship, springs from well-known tensions at the heart of Weberian rationalization. To illustrate this point, we introduce a thought experiment whereby AI systems are used to optimize tax policy to advance a specific normative end: reducing economic inequality. Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible. However, it also highlights that AI-driven policy optimization (i) comes at the exclusion of other competing political values, (ii) overrides citizens' sense of their noninstrumental obligations to each other, and (iii) undermines the notion of humans as self-determining beings. Contemporary scholarship and advocacy directed towards ensuring that AI systems are legal, ethical, and safe build on and reinforce central assumptions that underpin the process of rationalization, including the modern idea that science can sweep away oppressive systems and replace them with a rule of reason that would rescue humans from moral injustices. That is overly optimistic. Science can only provide the means; it cannot dictate the ends. Nonetheless, the use of AI in the public sector can also benefit the institutions and processes of liberal democracies. Most importantly, AI-driven policy optimization demands that normative ends are made explicit and formalized, thereby subjecting them to public scrutiny and debate.
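
The thought experiment can be made concrete with a toy formalization: encode the normative end as an explicit loss (here, the Gini coefficient of post-tax income) and optimize a flat tax with equal lump-sum redistribution. All quantities below are synthetic.

```python
# A toy formalization of the thought experiment, not a policy tool.
import numpy as np

def gini(x):
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

income = np.random.default_rng(0).lognormal(mean=10, sigma=1.0, size=10_000)

best = min(
    (gini(income * (1 - r) + r * income.mean()), r)
    for r in np.linspace(0, 1, 101)
)
print(f"Gini minimized at tax rate {best[1]:.2f} (Gini = {best[0]:.3f})")
```

Unsurprisingly, the optimizer drives the rate to full redistribution: once a single end is formalized, competing values carry no weight in the objective, which is exactly tension (i) above.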


HistoHDR-Net: Histogram Equalization for Single LDR to HDR Image Translation

Barua, Hrishav Bakul, Krishnasamy, Ganesh, Wong, KokSheik, Dhall, Abhinav, Stefanov, Kalin

arXiv.org Artificial Intelligence

High Dynamic Range (HDR) imaging aims to replicate the high visual quality and clarity of real-world scenes. Due to the high costs associated with HDR imaging, the literature offers various data-driven methods for HDR image reconstruction from Low Dynamic Range (LDR) counterparts. A common limitation of these approaches is missing details in regions of the reconstructed HDR images, which are over- or under-exposed in the input LDR images. To this end, we propose a simple and effective method, HistoHDR-Net, to recover the fine details (e.g., color, contrast, saturation, and brightness) of HDR images via a fusion-based approach utilizing histogram-equalized LDR images along with self-attention guidance. Our experiments demonstrate the efficacy of the proposed approach over the state-of-the-art methods.
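
The histogram-equalization ingredient is standard and easy to sketch: the snippet below equalizes only the luminance channel of an LDR image with OpenCV, producing the second input the paper fuses with the original. The file names are illustrative, and the fusion network and attention guidance are not shown.

```python
# Histogram-equalization preprocessing only; not the full HistoHDR-Net.
import cv2

ldr = cv2.imread("input_ldr.png")                 # 8-bit LDR image (path illustrative)
ycrcb = cv2.cvtColor(ldr, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0]) # equalize luma channel only
equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite("ldr_histeq.png", equalized)          # second input to the fusion stage
```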


Compact Matrix Quantum Group Equivariant Neural Networks

Pearce-Crump, Edward

arXiv.org Machine Learning

We derive the existence of a new type of neural network, called a compact matrix quantum group equivariant neural network, that learns from data that has an underlying quantum symmetry. We apply the Woronowicz formulation of Tannaka-Krein duality to characterise the weight matrices that appear in these neural networks for any easy compact matrix quantum group. We show that compact matrix quantum group equivariant neural networks contain, as a subclass, all compact matrix group equivariant neural networks. Moreover, we obtain characterisations of the weight matrices for many compact matrix group equivariant neural networks that have not previously appeared in the machine learning literature.
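
The quantum-group construction itself is beyond a short snippet, but the classical subclass the abstract mentions is easy to demonstrate: for the symmetric group S_n acting on R^n, the equivariant weight matrices are exactly the span of the identity and the all-ones matrix (the Deep Sets characterization), which the sketch below verifies numerically.

```python
# Classical group-equivariant subclass: S_n-equivariant linear layer.
import numpy as np

n = 5
rng = np.random.default_rng(0)
a, b = rng.normal(size=2)                   # the only two free parameters
W = a * np.eye(n) + b * np.ones((n, n))     # equivariant weight matrix

P = np.eye(n)[rng.permutation(n)]           # random permutation matrix
x = rng.normal(size=n)
assert np.allclose(W @ (P @ x), P @ (W @ x))  # permuting input permutes output
print("equivariance verified")
```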


New Mexico Is a Great Place for Sci-Fi

WIRED

Melinda Snodgrass is the novelist and screenwriter best known for her classic Star Trek: The Next Generation script "The Measure of a Man." Her latest novel, Lucifer's War, pits an unlikely band of heroes against a horde of Lovecraftian monsters that have been spreading fear and ignorance throughout human history. "It's unbelievable now, the kind of nonsense people are accepting, that's being pushed on them by social media," Snodgrass says in Episode 529 of the Geek's Guide to the Galaxy podcast. "I really wanted to make a stand for science and rationality, as opposed to magic and superstition." The book is set in Snodgrass' home state of New Mexico, a place where science and superstition clash in a particularly striking way. "It's a very weird place, where you have Los Alamos laboratory, Sandia laboratories, high-tech, high-energy centers," Snodgrass says. "Some of the finest scientific minds in the world come here to lecture and study and commune with each other, and then on the other side you have people who will balance your aura and sell you a crystal to deal with your cancer."


Weber

AAAI Conferences

A major challenge in the field of AI is combining symbolic and statistical techniques. My dissertation work aims to bridge this gap in the domain of real-time strategy games.
