
The Cost of Learning under Multiple Change Points

Gafni, Tomer, Iyengar, Garud, Zeevi, Assaf

arXiv.org Machine Learning

We consider an online learning problem in environments with multiple change points. In contrast to the single change point problem that is widely studied using classical "high confidence" detection schemes, the multiple change point environment presents new learning-theoretic and algorithmic challenges. Specifically, we show that classical methods may exhibit catastrophic failure (high regret) due to a phenomenon we refer to as endogenous confounding. To overcome this, we propose a new class of learning algorithms dubbed Anytime Tracking CUSUM (ATC). These are horizon-free online algorithms that implement a selective detection principle, balancing the need to ignore "small" (hard-to-detect) shifts, while reacting "quickly" to significant ones. We prove that the performance of a properly tuned ATC algorithm is nearly minimax-optimal; its regret is guaranteed to closely match a novel information-theoretic lower bound on the achievable performance of any learning algorithm in the multiple change point problem. Experiments on synthetic as well as real-world data validate the aforementioned theoretical findings.
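The abstract contrasts ATC with classical "high confidence" detection schemes such as CUSUM. The ATC algorithm itself is not specified in the abstract, but the classical one-sided CUSUM baseline it builds on can be sketched as follows; the drift allowance `k` and threshold `h` are illustrative tuning parameters, not values from the paper.

```python
def cusum(stream, mu0, k=0.5, h=5.0):
    """Classical one-sided CUSUM: accumulate evidence of an upward mean
    shift away from the pre-change mean mu0, minus a drift allowance k,
    and raise an alarm once the statistic exceeds the threshold h."""
    s = 0.0
    for t, x in enumerate(stream):
        # Reset to zero whenever evidence goes negative (no shift so far).
        s = max(0.0, s + (x - mu0 - k))
        if s > h:
            return t  # index of the first alarm
    return None  # no change detected
```

For example, on a noiseless stream that sits at 0 for 100 steps and then jumps to 2, the statistic grows by `2 - 0.5 = 1.5` per post-change step and crosses `h = 5` at index 103, i.e. a detection delay of 4 samples. In the multiple change point setting of the paper, naively restarting such a detector after each alarm is what can lead to the endogenous-confounding failures the authors describe.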




ShortListing Model: A Streamlined SimplexDiffusion for Discrete Variable Generation

Song, Yuxuan, Zhang, Zhe, Pei, Yu, Gong, Jingjing, Yu, Qiying, Zhang, Zheng, Wang, Mingxuan, Zhou, Hao, Liu, Jingjing, Ma, Wei-Ying

arXiv.org Artificial Intelligence

Generative modeling of discrete variables is challenging yet crucial for applications in natural language processing and biological sequence design. We introduce the Shortlisting Model (SLM), a novel simplex-based diffusion model inspired by progressive candidate pruning. SLM operates on simplex centroids, reducing generation complexity and enhancing scalability. Additionally, SLM incorporates a flexible implementation of classifier-free guidance, enhancing unconditional generation performance. Extensive experiments on DNA promoter and enhancer design, protein design, and character-level and large-vocabulary language modeling demonstrate the competitive performance and strong potential of SLM. Our code is available at https://github.com/GenSI-THUAIR/SLM





A Broader Impact and Limitation Discussion

Neural Information Processing Systems

We provide all missing proofs in this section. We prove the statement by contradiction. Next we show the proof for the second half. Now we show the last piece of the statement by construction. We prove the statement via three main steps.


Probabilistic Stability Guarantees for Feature Attributions

Jin, Helen, Xue, Anton, You, Weiqiu, Goel, Surbhi, Wong, Eric

arXiv.org Artificial Intelligence

Stability guarantees have emerged as a principled way to evaluate feature attributions, but existing certification methods rely on heavily smoothed classifiers and often produce conservative guarantees. To address these limitations, we introduce soft stability and propose a simple, model-agnostic, sample-efficient stability certification algorithm (SCA) that yields non-trivial and interpretable guarantees for any attribution method. Moreover, we show that mild smoothing achieves a more favorable trade-off between accuracy and stability, avoiding the aggressive compromises made in prior certification methods. To explain this behavior, we use Boolean function analysis to derive a novel characterization of stability under smoothing. We evaluate SCA on vision and language tasks and demonstrate the effectiveness of soft stability in measuring the robustness of explanation methods.
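The abstract describes SCA as a sample-efficient, model-agnostic certification algorithm. Its exact procedure is not given in the abstract, but a generic Monte Carlo certificate of this flavor can be sketched: estimate how often an attribution is unchanged under random perturbations, then lower-bound that probability with a Hoeffding confidence bound. The names `attr_fn` and `perturb` are hypothetical placeholders, not the paper's API.

```python
import math
import random

def certify_stability(attr_fn, x, perturb, n=1000, delta=0.05, seed=0):
    """Hedged sketch of a sampling-based stability certificate.

    attr_fn : maps an input to a (hashable) attribution, e.g. a top-k
              feature set -- hypothetical, not the paper's interface.
    perturb : draws a random perturbation of x -- also hypothetical.
    Returns the empirical stability rate and a Hoeffding lower
    confidence bound holding with probability at least 1 - delta.
    """
    rng = random.Random(seed)
    base = attr_fn(x)
    hits = sum(attr_fn(perturb(x, rng)) == base for _ in range(n))
    p_hat = hits / n
    # Hoeffding: true rate >= p_hat - sqrt(ln(1/delta) / (2n)) w.p. 1-delta.
    lower_bound = max(0.0, p_hat - math.sqrt(math.log(1.0 / delta) / (2 * n)))
    return p_hat, lower_bound
```

With `n = 1000` and `delta = 0.05`, the Hoeffding slack is about 0.039, so even a perfectly stable attribution is certified only at roughly the 0.96 level; the sample-efficiency and non-triviality claims in the abstract presumably concern tightening exactly this kind of gap.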