Stability of Sequential and Parallel Coordinate Ascent Variational Inference

Pati, Debdeep

arXiv.org Machine Learning

We highlight a striking difference in behavior between two widely used variants of coordinate ascent variational inference: the sequential and parallel algorithms. While such differences were known in the numerical analysis literature in simpler settings, they remain largely unexplored in the optimization-focused literature on variational inference in more complex models. Focusing on the moderately high-dimensional linear regression problem, we show that the sequential algorithm, although typically slower, enjoys convergence guarantees under more relaxed conditions than the parallel variant, which is often employed to facilitate block-wise updates and improve computational efficiency.
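To make the contrast concrete, here is a minimal sketch (not the paper's actual algorithm) of mean-field coordinate ascent for Bayesian linear regression, assuming a known noise variance sigma2 and prior variance tau2. The `parallel` flag switches between Gauss-Seidel-style sequential updates, where each coordinate sees the freshly updated means, and Jacobi-style parallel updates, where all coordinates use the means from the previous sweep.

```python
# Illustrative sketch: mean-field CAVI for Bayesian linear regression
# y ~ N(X beta, sigma2 * I), beta_j ~ N(0, tau2), comparing sequential
# (Gauss-Seidel-style) and parallel (Jacobi-style) updates of the
# variational means. sigma2 and tau2 are assumed known.
import numpy as np

def cavi_linear_regression(X, y, sigma2=1.0, tau2=1.0, n_iter=100, parallel=False):
    n, p = X.shape
    col_ss = (X ** 2).sum(axis=0)              # x_j^T x_j for each coordinate
    prec = col_ss / sigma2 + 1.0 / tau2        # variational precisions (fixed)
    m = np.zeros(p)                            # variational means
    for _ in range(n_iter):
        if parallel:
            # Jacobi-style: every coordinate uses means from the previous sweep.
            residual = y - X @ m
            m = (X.T @ residual / sigma2 + col_ss * m / sigma2) / prec
        else:
            # Gauss-Seidel-style: each coordinate sees the latest means.
            for j in range(p):
                r_j = y - X @ m + X[:, j] * m[j]   # residual excluding coordinate j
                m[j] = (X[:, j] @ r_j / sigma2) / prec[j]
    return m, 1.0 / prec                       # means and variances of q(beta_j)

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = rng.normal(size=10)
y = X @ beta + rng.normal(scale=0.5, size=200)
m_seq, _ = cavi_linear_regression(X, y, parallel=False)
m_par, _ = cavi_linear_regression(X, y, parallel=True)
```

On designs with strongly correlated columns, the Jacobi-style sweep can oscillate or diverge in settings where the sequential sweep still converges, mirroring the classical Gauss-Seidel versus Jacobi contrast the abstract alludes to.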








Discovering Preference Optimization Algorithms with and for Large Language Models

Chris Lu

Neural Information Processing Systems

Typically, preference optimization is approached as an offline supervised learning task using manually crafted convex loss functions. While these methods are based on theoretical insights, they are inherently constrained by human creativity, so the large search space of possible loss functions remains under-explored.
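As an illustration of one such manually crafted convex objective, the sketch below writes an offline preference loss as a convex function of the policy-versus-reference margin on a chosen/rejected pair, in the style of DPO. This is an assumed example of the kind of loss the abstract refers to, not the objective discovered in the paper.

```python
# Illustrative sketch: a DPO-style log-sigmoid preference loss, convex in the
# margin between chosen and rejected responses. Not the paper's discovered loss.
import torch
import torch.nn.functional as F

def log_sigmoid_preference_loss(logp_chosen, logp_rejected,
                                ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """logp_* are summed log-probabilities of the chosen/rejected responses
    under the policy; ref_logp_* under a frozen reference model."""
    # Implicit reward margin: difference of policy-vs-reference log-ratios.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin) is convex in the margin and penalizes pairs where
    # the policy does not prefer the chosen response over the rejected one.
    return -F.logsigmoid(margin).mean()

# Tiny usage example with dummy log-probabilities.
lp_c, lp_r = torch.tensor([-12.0]), torch.tensor([-15.0])
ref_c, ref_r = torch.tensor([-13.0]), torch.tensor([-14.0])
loss = log_sigmoid_preference_loss(lp_c, lp_r, ref_c, ref_r)
```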



We present conditional monotonicity results using alternative estimators of performance quality

Neural Information Processing Systems

The Appendix is structured as follows: We provide a proof of conditional guarantees in EENNs for (hard) PoE in Appendix A. We conduct an ablation study for our PA model in Appendix B.2. We report results of NLP experiments in Appendix B.4. We discuss anytime regression and deep ensembles in Appendix B.6. We propose a technique for controlling the violations of conditional monotonicity in PA in Appendix B.8.