RECLAIM: Cyclic Causal Discovery Amid Measurement Noise

Sethuraman, Muralikrishnna G., Fekri, Faramarz

arXiv.org Machine Learning

Uncovering causal relationships is a fundamental problem across science and engineering. However, most existing causal discovery methods assume acyclicity and direct access to the system variables -- assumptions that fail to hold in many real-world settings. For instance, in genomics, cyclic regulatory networks are common, and measurements are often corrupted by instrumental noise. To address these challenges, we propose RECLAIM, a causal discovery framework that natively handles both cycles and measurement noise. RECLAIM learns the causal graph structure by maximizing the likelihood of the observed measurements via expectation-maximization (EM), using residual normalizing flows for tractable likelihood computation. We consider two measurement models: (i) Gaussian additive noise, and (ii) a linear measurement system with additive Gaussian noise. We provide theoretical consistency guarantees for both settings. Experiments on synthetic data and real-world protein signaling datasets demonstrate the efficacy of the proposed method.
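As a minimal illustration of the EM idea in the first measurement model (Gaussian additive noise), the sketch below fits the variance of a latent Gaussian signal observed through additive Gaussian measurement noise. This is a toy linear-Gaussian analogue of our own construction, not RECLAIM itself, which learns a full causal graph and uses residual normalizing flows for the likelihood; the variable names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau_true, sigma = 5000, 2.0, 0.5
z = rng.normal(0.0, tau_true, n)        # latent signal, z ~ N(0, tau^2)
y = z + rng.normal(0.0, sigma, n)       # noisy measurements, y = z + eps

tau2 = 1.0                              # initial guess for Var(z)
for _ in range(200):
    # E-step: posterior moments of z given y (linear-Gaussian case)
    gain = tau2 / (tau2 + sigma**2)
    ez = gain * y                       # E[z | y]
    vz = gain * sigma**2                # Var(z | y)
    # M-step: re-estimate the latent variance from posterior moments
    tau2 = np.mean(ez**2 + vz)

print(tau2**0.5)                        # close to tau_true = 2.0
```

The E-step conditions on the measurements, the M-step maximizes the expected complete-data likelihood; the iteration converges quickly here because the posterior gain contracts toward the maximum-likelihood variance estimate.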



Safe Distributionally Robust Feature Selection under Covariate Shift

Hanada, Hiroyuki, Akahane, Satoshi, Hashimoto, Noriaki, Takeno, Shion, Takeuchi, Ichiro

arXiv.org Machine Learning

In practical machine learning, the environments encountered during the model development and deployment phases often differ, especially when a model is used by many users in diverse settings. Learning models that maintain reliable performance across plausible deployment environments is known as distributionally robust (DR) learning. In this work, we study the problem of distributionally robust feature selection (DRFS), with a particular focus on sparse sensing applications motivated by industrial needs. In practical multi-sensor systems, a shared subset of sensors is typically selected prior to deployment based on performance evaluations using many available sensors. At deployment, individual users may further adapt or fine-tune models to their specific environments. When deployment environments differ from those anticipated during development, this strategy can result in systems lacking sensors required for optimal performance. To address this issue, we propose safe-DRFS, a novel approach that extends safe screening from conventional sparse modeling settings to a DR setting under covariate shift. Our method identifies a feature subset that encompasses all subsets that may become optimal across a specified range of input distribution shifts, with finite-sample theoretical guarantees of no false feature elimination.
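Safe screening itself predates the distributionally robust setting; a minimal sketch of the classical SAFE rule for the Lasso (the non-robust ancestor of the screening extended here) is below. The rule as written, the constants, and the data are illustrative assumptions attributed to El Ghaoui et al., not the paper's safe-DRFS test.

```python
import numpy as np

def safe_screen_lasso(X, y, lam):
    """Classical SAFE rule: for the Lasso min 0.5||y - Xb||^2 + lam*||b||_1,
    feature j can be safely discarded (its coefficient is provably zero) if
    |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max."""
    scores = np.abs(X.T @ y)
    lam_max = scores.max()
    thresh = lam - (np.linalg.norm(X, axis=0) * np.linalg.norm(y)
                    * (lam_max - lam) / lam_max)
    return scores >= thresh              # True = keep the feature

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
X /= np.linalg.norm(X, axis=0)           # unit-norm columns
y = X[:, 0] + 0.1 * rng.standard_normal(100)  # only feature 0 is relevant

keep = safe_screen_lasso(X, y, lam=0.9 * np.abs(X.T @ y).max())
print(keep.sum(), keep[0])               # few features kept; feature 0 survives
```

The "safe" guarantee is the key property carried over to the DR setting: no feature that could be optimal is eliminated, only ones that provably cannot be.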



050f8591be3874b52fdac4e1060eeb29-Supplemental-Conference.pdf

Neural Information Processing Systems

We study a generalization of boosting to the multiclass setting. We introduce a weak learning condition for multiclass classification that captures the original notion of weak learnability as being "slightly better than random guessing".
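The "slightly better than random guessing" requirement can be made concrete with a toy empirical edge computation (a simplified reading for illustration; the paper's actual weak learning condition is stated more generally):

```python
import numpy as np

def multiclass_edge(preds, labels, num_classes):
    """Empirical edge of a predictor over uniform random guessing,
    which achieves accuracy 1/K on K classes."""
    return np.mean(preds == labels) - 1.0 / num_classes

labels = np.array([0, 1, 2, 0, 1, 2, 0, 1])
preds  = np.array([0, 1, 2, 0, 2, 2, 1, 1])   # 6 of 8 correct
print(multiclass_edge(preds, labels, num_classes=3))  # 0.75 - 1/3 = 0.4167
```

A weak learner is then one whose edge is bounded below by some small positive constant on every distribution the booster presents to it.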





An adaptive nearest neighbor rule for classification

Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran

Neural Information Processing Systems

Find the smallest k whose label bias exceeds Δ(n, k, δ), where Δ(n, k, δ) = c₁ √((log n + log(1/δ)) / k). Then, with probability at least 1 − δ, the resulting classifier g_n satisfies the following: for every point x ∈ supp(µ), if n ≥ (C / adv(x)) · max(log(1/adv(x)), log(1/δ)), then g_n(x) = g(x).
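The theorem above can be illustrated with a toy version of the adaptive rule (our own reading, with an assumed constant c₁ = 1 and binary labels): for each query point, grow k until the empirical label bias among the k nearest training points exceeds the confidence margin Δ(n, k, δ).

```python
import math
import numpy as np

def adaptive_knn_predict(X, y, query, delta=0.05, c1=1.0):
    """Toy adaptive-k NN: grow k until one class's empirical bias
    among the k nearest labels exceeds Delta(n, k, delta)."""
    n = len(X)
    # sort training points by distance to the query
    order = np.argsort(np.linalg.norm(X - query, axis=1))
    for k in range(1, n + 1):
        counts = np.bincount(y[order[:k]], minlength=2)
        bias = abs(int(counts[1]) - int(counts[0])) / k
        margin = c1 * math.sqrt((math.log(n) + math.log(1 / delta)) / k)
        if bias > margin:
            return int(np.argmax(counts))
    # no k was confident enough: fall back to the global majority label
    return int(np.argmax(np.bincount(y)))

# Two well-separated 1-D clusters, labels 0 and 1
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(2, 0.3, 50)])[:, None]
y = np.array([0] * 50 + [1] * 50)
print(adaptive_knn_predict(X, y, np.array([2.0])))  # prints 1
```

Note how k is chosen per query rather than globally: near a clean cluster the margin test passes at small k, while in ambiguous regions the rule keeps enlarging the neighborhood.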