independence


SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning

Neural Information Processing Systems

To overcome overestimation bias, ensemble methods for Q-learning have been investigated to exploit the diversity of multiple Q-functions. Network initialization has been the predominant approach to promoting diversity in Q-functions, and heuristically designed diversity-injection methods have also been studied in the literature. However, previous studies have not attempted to guarantee independence over an ensemble from a theoretical perspective.
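The overestimation-control idea behind such Q-ensembles can be sketched in a few lines. Everything below (the toy table sizes, the min-over-ensemble target) is an illustrative assumption, not SPQR's method:

```python
import random

random.seed(0)

# Hypothetical toy ensemble: N independently initialized Q-tables over
# (state, action) pairs for a small discrete MDP.
N_ENSEMBLE, N_STATES, N_ACTIONS = 4, 5, 3
q_ensemble = [
    [[random.gauss(0, 1) for _ in range(N_ACTIONS)] for _ in range(N_STATES)]
    for _ in range(N_ENSEMBLE)
]

def pessimistic_target(qs, next_state, reward, gamma=0.99):
    """Bellman target using the minimum over ensemble members per action,
    which damps the overestimation any single Q-function introduces."""
    q_min = [min(q[next_state][a] for q in qs) for a in range(N_ACTIONS)]
    return reward + gamma * max(q_min)

def mean_target(qs, next_state, reward, gamma=0.99):
    """Naive target averaging the ensemble, for comparison."""
    q_mean = [sum(q[next_state][a] for q in qs) / len(qs)
              for a in range(N_ACTIONS)]
    return reward + gamma * max(q_mean)
```

Because the elementwise minimum never exceeds the elementwise mean, the pessimistic target is always at most the averaged one; the benefit of this pessimism grows with how independent (diverse) the ensemble members actually are, which is the quantity SPQR seeks to control.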






Rare copy of Declaration of Independence going to auction

Popular Science

Only around 175 broadside copies of the document are believed to exist. The Declaration of Independence is up for sale, but it will cost more than most of us can afford. The edition offered by Goldin Auctions was originally printed and distributed so that colonists could read the Second Continental Congress' argument for separating from Great Britain in July 1776. The document is part of a collection of over 400 historic items scheduled for auction in May, set to coincide with the 250th anniversary of American independence.


A tensor network formalism for neuro-symbolic AI

Goessmann, Alex, Schütte, Janina, Fröhlich, Maximilian, Eigel, Martin

arXiv.org Machine Learning

The unification of neural and symbolic approaches to artificial intelligence remains a central open challenge. In this work, we introduce a tensor network formalism, which captures sparsity principles originating in the different approaches in tensor decompositions. In particular, we describe a basis encoding scheme for functions and model neural decompositions as tensor decompositions. The proposed formalism can be applied to represent logical formulas and probability distributions as structured tensor decompositions. This unified treatment identifies tensor network contractions as a fundamental inference class and formulates efficiently scaling reasoning algorithms, originating from probability theory and propositional logic, as contraction message passing schemes. The framework enables the definition and training of hybrid logical and probabilistic models, which we call Hybrid Logic Networks. The theoretical concepts are accompanied by the Python library tnreason, which enables the implementation and practical use of the proposed architectures.
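As a rough illustration of "contraction as inference" (this is not the tnreason API; the factor values below are made up), a joint distribution can be stored as small factor tensors and marginalized by contracting their shared index:

```python
import numpy as np

# Hypothetical factored joint over two binary variables:
# P(x, y) = sum_h A[x, h] * B[h, y], with h a shared latent index.
A = np.array([[0.6, 0.1],
              [0.1, 0.2]])   # factor over (x, h)
B = np.array([[0.5, 0.5],
              [0.9, 0.1]])   # factor over (h, y)

# Inference as tensor contraction: summing out h yields the joint,
joint = np.einsum('xh,hy->xy', A, B)
joint /= joint.sum()          # normalize to a probability distribution

# and marginalizing y is just another contraction (sum over one axis).
p_x = joint.sum(axis=1)
```

The same contraction pattern scales to larger networks, where choosing a good contraction order is what makes the message-passing schemes efficient.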


Step-by-Step Causality: Transparent Causal Discovery with Multi-Agent Tree-Query and Adversarial Confidence Estimation

Ding, Ziyi, Ye-Hao, Chenfei, Wang, Zheyuan, Zhang, Xiao-Ping

arXiv.org Machine Learning

Causal discovery aims to recover "what causes what", but classical constraint-based methods (e.g., PC, FCI) suffer from error propagation, and recent LLM-based causal oracles often behave as opaque, confidence-free black boxes. This paper introduces Tree-Query, a tree-structured, multi-expert LLM framework that reduces pairwise causal discovery to a short sequence of queries about backdoor paths, (in)dependence, latent confounding, and causal direction, yielding interpretable judgments with robustness-aware confidence scores. Theoretical guarantees are provided for asymptotic identifiability of four pairwise relations. On data-free benchmarks derived from Mooij et al. and UCI causal graphs, Tree-Query improves structural metrics over direct LLM baselines, and a diet-weight case study illustrates confounder screening and stable, high-confidence causal conclusions. Tree-Query thus offers a principled way to obtain data-free causal priors from LLMs that can complement downstream data-driven causal discovery. Code is available at https://anonymous.4open.science/r/Repo-9B3E-4F96.
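A hedged sketch of the general query-tree idea (the function names, question wording, and confidence aggregation below are hypothetical stand-ins, not the authors' implementation):

```python
def classify_pair(oracle):
    """`oracle(question)` returns (bool answer, confidence in [0, 1]),
    standing in for the multi-expert LLM judgment. Walk a small query
    tree and multiply confidences along the path taken."""
    path_conf = 1.0

    ans, c = oracle("Are X and Y statistically dependent?")
    path_conf *= c
    if not ans:
        return "independent", path_conf

    ans, c = oracle("Is the dependence fully explained by a latent confounder?")
    path_conf *= c
    if ans:
        return "confounded", path_conf

    ans, c = oracle("Does X cause Y (rather than Y cause X)?")
    path_conf *= c
    return ("X->Y" if ans else "Y->X"), path_conf

# Toy oracle with fixed answers, purely for illustration.
answers = {
    "Are X and Y statistically dependent?": (True, 0.9),
    "Is the dependence fully explained by a latent confounder?": (False, 0.8),
    "Does X cause Y (rather than Y cause X)?": (True, 0.95),
}
relation, conf = classify_pair(lambda q: answers[q])
```

The appeal of the tree structure is that each intermediate answer is inspectable, so a low final confidence can be traced back to the specific query that weakened it.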


5 new quarters commemorate 250 years of American independence

Popular Science

The new designs honor the Constitution, Civil War, and more. While we've said goodbye to both the year 2025 and the penny, five new United States quarters will be finding their way into your pocket soon enough. The designs of each new quarter will honor the country's 250th anniversary (aka its semiquincentennial). According to a press release from the U.S. Mint, the coins "commemorate 250 years of American Liberty by reflecting our country's founding principles and honoring our Nation's history."


On Admissible Rank-based Input Normalization Operators

Kim, Taeyun

arXiv.org Machine Learning

Rank-based input normalization is a workhorse of modern machine learning, prized for its robustness to scale, monotone transformations, and batch-to-batch variation. In many real systems, the ordering of feature values matters far more than their raw magnitudes, yet the structural conditions that a rank-based normalization operator must satisfy to remain stable under these invariances have never been formally pinned down. We show that widely used differentiable sorting and ranking operators fundamentally fail these criteria. Because they rely on value gaps and batch-level pairwise interactions, they are intrinsically unstable under strictly monotone transformations, shifts in mini-batch composition, and even tiny input perturbations. Crucially, these failures stem from the operators' structural design, not from incidental implementation choices. To address this, we propose three axioms that formalize the minimal invariance and stability properties required of rank-based input normalization. We prove that any operator satisfying these axioms must factor into (i) a feature-wise rank representation and (ii) a scalarization map that is both monotone and Lipschitz-continuous. We then construct a minimal operator that meets these criteria and empirically show that the resulting constraints are non-trivial in realistic setups. Together, our results sharply delineate the design space of valid rank-based normalization operators and formally separate them from existing continuous-relaxation-based sorting methods.
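The factorization the abstract describes, a feature-wise rank representation followed by a monotone, Lipschitz scalarization, can be illustrated with a minimal operator (an assumed sketch, not the paper's construction). Note that its output depends only on the ordering of the inputs, so it is unchanged by any strictly monotone transformation:

```python
def ranks(values):
    """Feature-wise rank representation: average ranks, 0-indexed,
    with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def rank_normalize(values):
    """Scalarization: affine rescaling of ranks to [0, 1],
    which is monotone and Lipschitz-continuous in the ranks."""
    n = len(values)
    if n <= 1:
        return [0.5] * n
    return [rk / (n - 1) for rk in ranks(values)]

# Invariance under a strictly monotone transformation (cubing), as required:
x = [3.0, -1.0, 2.0, 7.0]
assert rank_normalize(x) == rank_normalize([v ** 3 for v in x])
```

By contrast, a differentiable soft-sort would feed the value gaps themselves into the output, which is exactly the structural dependence the paper's axioms rule out.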