Maximum Likelihood With a Time Varying Parameter

Lanconelli, Alberto, Lauria, Christopher S. A.

arXiv.org Machine Learning 

When estimating unknown parameters in a dynamic model, the optimal solution to the parameter estimation problem may not remain constant. Specifically, the optimal values of the model parameters may change over time as the underlying process evolves, and finding them is, in general, not straightforward. A survey of basic techniques for tracking the time-varying dynamics of a system is provided in [Ljung and Gunnarsson, 1990], where recursive algorithms in non-stationary stochastic optimization are analysed under different assumptions about the true system's variations; see also [Simonetto et al., 2020] for a review in a purely deterministic setting. [Delyon and Juditsky, 1995] tackles the problem of tracking the randomly drifting parameters of a linear regression system, and [Zhu and Spall, 2016] derives a computable bound on the error with which a constant-gain stochastic approximation tracks a non-stationary target. Subsequently, [Wilson et al., 2019] introduces a framework for sequentially solving convex stochastic minimization problems in which the distance between successive minimizers is bounded; the problems are then solved by sequentially applying an optimization algorithm such as stochastic gradient descent (SGD). In a similar setting, [Cao et al., 2019] establishes an upper bound on the regret of a projected SGD algorithm with respect to the drift of the dynamic optima, while [Cutler et al., 2021] provides novel non-asymptotic convergence guarantees for stochastic algorithms with iterate averaging.
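The tracking problem discussed above can be illustrated with a minimal sketch, not taken from the paper: a constant-step-size SGD iterate following the drifting minimizer of a simple quadratic loss observed through noisy gradients. All quantities here (the random-walk drift, the noise level, the step size) are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): at each time t the loss is
# f_t(x) = 0.5 * (x - theta_t)^2, whose minimizer theta_t drifts as a
# slow random walk. We only observe noisy gradients g_t = (x_t - theta_t) + noise.
T = 2000
step = 0.1      # constant gain, so the iterate can keep tracking the drift
theta = 0.0     # time-varying optimum
x = 5.0         # SGD iterate, deliberately started far from the optimum

errors = []
for t in range(T):
    theta += 0.01 * rng.standard_normal()              # slow drift of the target
    grad = (x - theta) + 0.5 * rng.standard_normal()   # noisy gradient of f_t
    x -= step * grad                                   # constant-gain SGD update
    errors.append(abs(x - theta))

# After an initial transient, the tracking error settles to a level set by
# the trade-off between gradient noise (favoring a small step) and target
# drift (favoring a large step).
print(f"mean |x_t - theta_t| over last 500 steps: {np.mean(errors[-500:]):.3f}")
```

With a decaying step size the iterate would eventually stop moving and lose the target, which is why constant-gain schemes are the natural choice in this non-stationary setting; the bound in [Zhu and Spall, 2016] quantifies exactly this steady-state tracking error.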
