convex body

Sampling from Log-Concave Distributions with Infinity-Distance Guarantees

Mangoubi, Oren (Worcester Polytechnic Institute), Vishnoi, Nisheeth K. (Yale University)

Neural Information Processing Systems

This approach also yields an improved dependence on the dimension d in the running time for the problem of sampling from a log-concave distribution on a polytope K to within infinity-distance ε, by plugging in TV-distance running-time bounds for the Dikin walk Markov chain.
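To make the Dikin walk concrete, here is a minimal sketch of one on the box polytope K = [0,1]^2: proposals are drawn uniformly from the Dikin ellipsoid defined by the log-barrier Hessian at the current point, with a simplified accept rule (move only when the step is reversible) rather than the full Metropolis filter. The radius R, step count, and box polytope are illustrative assumptions, not the tuned choices from the paper.

```python
# Minimal Dikin-walk sketch for (approximately) uniform sampling from
# the box polytope K = (0,1)^2. Illustrative parameters, not the
# paper's; the accept rule is a simplified reversibility check.
import math
import random

random.seed(0)

R = 0.7   # Dikin-ellipsoid radius; R <= 1 keeps the ellipsoid inside K
D = 2     # dimension

def hessian_diag(x):
    # Log-barrier Hessian of the box is diagonal:
    # H_jj = 1/x_j^2 + 1/(1 - x_j)^2
    return [1.0 / xj**2 + 1.0 / (1.0 - xj)**2 for xj in x]

def sample_dikin_ellipsoid(x):
    # Uniform point in {y : (y-x)^T H(x) (y-x) <= R^2}:
    # draw uniform in the unit ball, then scale axis j by R / sqrt(H_jj).
    h = hessian_diag(x)
    while True:
        u = [random.uniform(-1.0, 1.0) for _ in range(D)]
        if sum(ui * ui for ui in u) <= 1.0:
            break
    return [x[j] + R * u[j] / math.sqrt(h[j]) for j in range(D)]

def in_ellipsoid(center, y):
    h = hessian_diag(center)
    return sum(h[j] * (y[j] - center[j]) ** 2 for j in range(D)) <= R * R

def dikin_walk(x, steps):
    samples = []
    for _ in range(steps):
        if random.random() < 0.5:          # lazy step: stay put
            samples.append(list(x))
            continue
        y = sample_dikin_ellipsoid(x)
        # Accept only if y is strictly feasible and the move is
        # reversible (x lies in y's Dikin ellipsoid).
        if all(0.0 < yj < 1.0 for yj in y) and in_ellipsoid(y, x):
            x = y
        samples.append(list(x))
    return samples

pts = dikin_walk([0.5, 0.5], 2000)
```

Because the Dikin ellipsoid shrinks near the boundary of K, the walk automatically takes smaller steps there, which is what removes the need for an explicit boundary-rejection scheme.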


Self-Concordant Perturbations for Linear Bandits

Lévy, Lucas, Valeau, Jean-Lou, Akhavan, Arya, Rebeschini, Patrick

arXiv.org Machine Learning

We study the adversarial linear bandits problem and present a unified algorithmic framework that bridges Follow-the-Regularized-Leader (FTRL) and Follow-the-Perturbed-Leader (FTPL) methods, extending the known connection between them from the full-information setting. Within this framework, we introduce self-concordant perturbations, a family of probability distributions that mirror the role of self-concordant barriers previously employed in the FTRL-based SCRiBLe algorithm. Using this idea, we design a novel FTPL-based algorithm that combines self-concordant regularization with efficient stochastic exploration. Our approach achieves a regret of $O(d\sqrt{n \ln n})$ on both the $d$-dimensional hypercube and the Euclidean ball. On the Euclidean ball, this matches the rate attained by existing self-concordant FTRL methods. For the hypercube, this represents a $\sqrt{d}$ improvement over these methods and matches the optimal bound up to logarithmic factors.
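The FTPL template the abstract builds on can be sketched in a few lines. The toy below runs FTPL on the hypercube {-1,+1}^d with full information: play the argmax of cumulative reward plus a fresh random perturbation, which on the hypercube decomposes into coordinate-wise signs. The Gaussian perturbation and its scale eta are illustrative stand-ins, not the self-concordant perturbations proposed in the paper, and the bandit loss-estimation step is omitted.

```python
# Toy Follow-the-Perturbed-Leader on the hypercube {-1,+1}^d,
# full-information setting. Gaussian noise is a placeholder for the
# paper's self-concordant perturbations.
import random

random.seed(0)

def ftpl_hypercube(rewards, eta=1.0):
    """rewards: list of reward vectors g_t; the player maximizes
    sum_t <g_t, x_t> over actions x_t in {-1,+1}^d."""
    d = len(rewards[0])
    theta = [0.0] * d                      # cumulative observed rewards
    total = 0.0
    for g in rewards:
        z = [random.gauss(0.0, eta) for _ in range(d)]
        # argmax_{x in {-1,+1}^d} <theta + z, x> = coordinate-wise sign
        x = [1.0 if theta[j] + z[j] >= 0.0 else -1.0 for j in range(d)]
        total += sum(g[j] * x[j] for j in range(d))
        theta = [theta[j] + g[j] for j in range(d)]
    return total

n = 200
rewards = [[1.0, 0.5, -0.25] for _ in range(n)]   # a fixed toy adversary
best_fixed = n * (1.0 + 0.5 + 0.25)               # best action is (+1,+1,-1)
regret = best_fixed - ftpl_hypercube(rewards)
```

Against this constant reward sequence the perturbation only flips a sign in the first few rounds, so the regret stays small; the point of the sketch is the structure of the update, not the rate analysis.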