
A Appendix

Neural Information Processing Systems

The complete list may be seen in Table 8. Here are a few general notes about these strings: ... Based on their recommendations, we did the following: ... zh, zh_Latn: this resulted in the special filters described below. ... URLs) the corpora were in languages different from the LangID predictions. ... This is mainly mis-rendered PDFs and may have practical applications for denoising, or for decoding such garbled PDFs.





Thompson sampling: Precise arm-pull dynamics and adaptive inference

Han, Qiyang

arXiv.org Machine Learning

Adaptive sampling schemes are well known to create complex dependence that may invalidate conventional inference methods. A recent line of work shows that this need not be the case for UCB-type algorithms in multi-armed bandits. A central emerging theme is a "stability" property with asymptotically deterministic arm-pull counts in these algorithms, making inference as easy as in the i.i.d. setting. In this paper, we study the precise arm-pull dynamics in another canonical class of Thompson-sampling type algorithms. We show that the phenomenology is qualitatively different: the arm-pull count is asymptotically deterministic if and only if the arm is suboptimal or is the unique optimal arm; otherwise it converges in distribution to the unique invariant law of an SDE. This dichotomy uncovers a unifying principle behind many existing (in)stability results: an arm is stable if and only if its interaction with statistical noise is asymptotically negligible. As an application, we show that normalized arm means obey the same dichotomy, with Gaussian limits for stable arms and a semi-universal, non-Gaussian limit for unstable arms. This not only enables the construction of confidence intervals for the unknown mean rewards despite non-normality, but also reveals the potential of developing tractable inference procedures beyond the stable regime. The proofs rely on two new approaches. For suboptimal arms, we develop an "inverse process" approach that characterizes the inverse of the arm-pull count process via a Stieltjes integral. For optimal arms, we adopt a reparametrization of the arm-pull and noise processes that reduces the singularity in the natural SDE to proving the uniqueness of the invariant law of another SDE. We prove the latter by a set of analytic tools, including the parabolic Hörmander condition and the Stroock-Varadhan support theorem.
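To make the arm-pull counts discussed in the abstract concrete, here is a minimal simulation of Gaussian Thompson sampling on a two-armed bandit. This is an illustrative sketch only, not the paper's setup or analysis; the flat-prior posterior, horizon, and reward model are assumptions for the demo.

```python
import numpy as np

def thompson_sampling(true_means, horizon, sigma=1.0, seed=0):
    """Gaussian Thompson sampling; returns per-arm pull counts."""
    rng = np.random.default_rng(seed)
    K = len(true_means)
    sums = np.zeros(K)    # cumulative reward per arm
    counts = np.zeros(K)  # pulls per arm
    # Pull each arm once so every posterior is defined.
    for a in range(K):
        sums[a] += true_means[a] + sigma * rng.standard_normal()
        counts[a] += 1
    for _ in range(horizon - K):
        # Under a flat prior, the posterior of arm a is N(mean_hat_a, sigma^2 / n_a).
        samples = sums / counts + sigma * rng.standard_normal(K) / np.sqrt(counts)
        a = int(np.argmax(samples))
        sums[a] += true_means[a] + sigma * rng.standard_normal()
        counts[a] += 1
    return counts

# With a unique optimal arm (gap = 1.0), the optimal arm's pull count
# dominates, while the suboptimal arm is pulled far less often.
counts = thompson_sampling([1.0, 0.0], horizon=5000)
```

Tracking `counts` across independent runs is one way to see the stability dichotomy empirically: for a unique optimal arm the counts concentrate, whereas ties between optimal arms leave the counts random.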


"Rebuilding" Statistics in the Age of AI: A Town Hall Discussion on Culture, Infrastructure, and Training

Donoho, David L., Kang, Jian, Lin, Xihong, Mukherjee, Bhramar, Nettleton, Dan, Nugent, Rebecca, Rodriguez, Abel, Xing, Eric P., Zheng, Tian, Zhu, Hongtu

arXiv.org Machine Learning

This article presents the full, original record of the 2024 Joint Statistical Meetings (JSM) town hall, "Statistics in the Age of AI," which convened leading statisticians to discuss how the field is evolving in response to advances in artificial intelligence, foundation models, large-scale empirical modeling, and data-intensive infrastructures. The town hall was structured around open panel discussion and extensive audience Q&A, with the aim of eliciting candid, experience-driven perspectives rather than formal presentations or prepared statements. This document preserves the extended exchanges among panelists and audience members, with minimal editorial intervention, and organizes the conversation around five recurring questions concerning disciplinary culture and practices, data curation and "data work," engagement with modern empirical modeling, training for large-scale AI applications, and partnerships with key AI stakeholders. By providing an archival record of this discussion, the preprint aims to support transparency, community reflection, and ongoing dialogue about the evolving role of statistics in the data- and AI-centric future.


On The Hidden Biases of Flow Matching Samplers

Lim, Soon Hoe

arXiv.org Machine Learning

The main goal of generative modeling is to use finitely many samples from a distribution to construct a sampling scheme capable of generating new samples from the same distribution. Among the families of existing generative models, flow matching (FM) [23, 24] is notable for its flexibility and simplicity. Given a target probability distribution, FM utilizes a parametric model (e.g., neural network) to learn the velocity vector field that defines a deterministic, continuous transformation (a normalizing flow) and transports a source probability distribution (e.g., standard Gaussian) to the target distribution. While the population formulation of FM often exhibits appealing structure--sometimes even admitting gradient-field velocities--practical models are trained on finite datasets and therefore optimize empirical objectives. This empirical setting substantially alters the geometry of the learned velocity field and the energetic properties of the resulting sampler. These notes aim to clarify how empirical FM behaves, how it differs from its population counterpart, and what implicit biases arise in the learned sampling dynamics. From now on, we assume that all the probability distributions/measures (except the empirical distribution) of the random variables considered are absolutely continuous (i.e., they have densities with respect to the Lebesgue measure), in which case we shall abuse the notation and use the same symbol to denote both the distribution and the density. To maintain the flow of the main text, we defer discussion of related work and all proofs of the theoretical results to the appendix.
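The empirical training setting described above can be sketched in a few lines. The toy 1-D data, linear-interpolation path, affine velocity model, and Euler sampler below are illustrative assumptions for the demo, not the note's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite "target" dataset (the empirical setting discussed above):
# n samples from a shifted, narrow 1-D Gaussian.
n = 512
x1_data = 2.0 + 0.5 * rng.standard_normal(n)

def fm_triples(batch):
    """Sample (t, x_t, u_t) for the linear path x_t = (1-t)*x0 + t*x1,
    whose conditional target velocity is simply u_t = x1 - x0."""
    x0 = rng.standard_normal(batch)          # source: standard Gaussian
    x1 = rng.choice(x1_data, size=batch)     # empirical target samples
    t = rng.uniform(size=batch)
    xt = (1 - t) * x0 + t * x1
    return t, xt, x1 - x0

# Affine velocity model v(x, t) = a*x + b*t + c, fit by least squares
# on the empirical FM regression objective E|v(x_t, t) - u_t|^2.
t, xt, u = fm_triples(4096)
A = np.stack([xt, t, np.ones_like(t)], axis=1)
coef, *_ = np.linalg.lstsq(A, u, rcond=None)

# Sample by integrating dx/dt = v(x, t) from t=0 to t=1 (Euler steps).
x = rng.standard_normal(2000)
for k in range(100):
    tk = k / 100
    x = x + 0.01 * (coef[0] * x + coef[1] * tk + coef[2])
```

After integration, the pushed-forward samples land near the empirical target distribution; comparing them against fresh draws from the true target is a simple way to probe the finite-sample biases the notes analyze.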


2025 AAAI / ACM SIGAI Doctoral Consortium interviews compilation

AIHub

Authors pictured in order of their interview publication date (left to right, top to bottom). Each year, a small group of PhD students is chosen to participate in the AAAI/SIGAI Doctoral Consortium. This initiative gives the students an opportunity to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. During 2025, we met with some of the students to find out more about their research and the doctoral consortium experience. Kunpeng Xu completed his PhD at the Université de Sherbrooke and is now a postdoctoral fellow at McGill University.