Iterate to Accelerate: A Unified Framework for Iterative Reasoning and Feedback Convergence

Fein-Ashley, Jacob

arXiv.org Artificial Intelligence 

Iterative methods lie at the heart of numerous optimization and reasoning algorithms, ranging from classical mirror descent and dynamic programming to modern deep learning architectures that exhibit chain-of-thought reasoning. Traditional acceleration techniques, such as Nesterov's momentum, have shown that carefully designed iterative schemes can significantly improve convergence rates in convex settings. However, many practical applications operate in non-Euclidean spaces and are subject to state-dependent perturbations or even adversarial disturbances, motivating the need for a more general analysis. In this work, we develop a comprehensive framework that unifies a wide class of iterative reasoning processes using the language of Bregman divergences.
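As a purely illustrative sketch (not code from the paper), the mirror descent scheme mentioned above can be instantiated on the probability simplex with the negative-entropy mirror map, whose Bregman divergence is the KL divergence; the resulting update is the familiar multiplicative-weights rule. The function name and parameters below are assumptions chosen for the example.

```python
import math

def mirror_descent_simplex(grad, x0, eta=0.1, steps=200):
    """Entropic mirror descent on the probability simplex.

    With the negative-entropy mirror map (Bregman divergence = KL),
    each iteration is the multiplicative-weights update
        x_{t+1}(i) proportional to x_t(i) * exp(-eta * grad(x_t)(i)),
    followed by renormalization back onto the simplex.
    """
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        # Multiplicative update, then project (normalize) onto the simplex.
        w = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        z = sum(w)
        x = [wi / z for wi in w]
    return x

# Minimize a linear objective f(x) = c . x over the simplex; the
# iterates concentrate on the coordinate with the smallest cost.
c = [3.0, 1.0, 2.0]
x = mirror_descent_simplex(lambda x: c, [1 / 3, 1 / 3, 1 / 3], eta=0.5, steps=300)
```

For this linear objective the iterate converges to the vertex of the simplex with minimal cost (here the second coordinate), illustrating how the Bregman geometry replaces the Euclidean projection used in ordinary gradient descent.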
