Collaborating Authors

 McLaughlin, Dennis


Ensemble transport smoothing. Part II: Nonlinear updates

arXiv.org Machine Learning

Sequential Monte Carlo methods can characterize arbitrary distributions using sequential importance sampling and resampling, but typically require very large sample sizes to mitigate weight collapse [Snyder et al., 2008, 2015]. By contrast, ensemble Kalman-type methods avoid the use of weights, but are based on affine prior-to-posterior updates that are consistent only if all distributions involved are Gaussian. In the context of smoothing, such methods include the ensemble Kalman smoother (EnKS) [Evensen and Van Leeuwen, 2000], which has inspired numerous algorithmic variations such as the ensemble smoother with multiple data assimilation [Emerick and Reynolds, 2013] and the iterative ensemble Kalman smoother (iEnKS) [Bocquet and Sakov, 2014, Evensen et al., 2019], as well as backwards smoothers such as the ensemble Rauch-Tung-Striebel smoother (EnRTSS) [Raanes, 2016]. These two classes of methods occupy opposite ends of a spectrum that ranges from an emphasis on statistical generality at one end to an emphasis on computational efficiency at the other. This trade-off complicates design decisions for smoothing problems that are at once non-Gaussian and computationally expensive.
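The "affine prior-to-posterior updates" of ensemble Kalman-type methods can be illustrated with a minimal stochastic ensemble Kalman analysis step on a toy linear-Gaussian problem. Everything below (dimensions, covariances, the observed value, the ensemble size) is an illustrative assumption for the sketch, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian setup: state x ~ N(0, P), observation y = H x + eps, eps ~ N(0, R)
n, d, N = 2, 1, 500                                   # state dim, obs dim, ensemble size
P = np.array([[1.0, 0.5], [0.5, 1.0]])                # prior covariance
H = np.array([[1.0, 0.0]])                            # observe the first coordinate
R = np.array([[0.25]])                                # observation-noise covariance
y = np.array([1.2])                                   # the observed value

# Prior ensemble, shape (n, N)
X = rng.multivariate_normal(np.zeros(n), P, size=N).T

# Stochastic EnKF analysis: estimate the gain from the ensemble,
# perturb the observation, and apply the same affine map to every member.
A = X - X.mean(axis=1, keepdims=True)
Pe = A @ A.T / (N - 1)                                # ensemble covariance estimate
K = Pe @ H.T @ np.linalg.inv(H @ Pe @ H.T + R)        # Kalman gain
Y = H @ X + rng.multivariate_normal(np.zeros(d), R, size=N).T
Xa = X + K @ (y[:, None] - Y)                         # affine prior-to-posterior update
```

Because the update is affine in the ensemble members, it reproduces the exact Bayesian posterior only in this linear-Gaussian setting (here the exact posterior mean is `[0.96, 0.48]`); for non-Gaussian distributions the same map is applied regardless, which is the inconsistency the abstract refers to.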


Ensemble transport smoothing. Part I: Unified framework

arXiv.org Machine Learning

Smoothers are algorithms for Bayesian time series re-analysis. Most operational smoothers rely either on affine Kalman-type transformations or on sequential importance sampling. These strategies occupy opposite ends of a spectrum that trades computational efficiency and scalability for statistical generality and consistency: non-Gaussianity renders affine Kalman updates inconsistent with the true Bayesian solution, while the ensemble size required for successful importance sampling can be prohibitive. This paper revisits the smoothing problem from the perspective of measure transport, which offers the prospect of consistent prior-to-posterior transformations for Bayesian inference. We leverage this capacity by proposing a general ensemble framework for transport-based smoothing. Within this framework, we derive a comprehensive set of smoothing recursions based on nonlinear transport maps and detail how they exploit the structure of state-space models in fully non-Gaussian settings. We also describe how many standard Kalman-type smoothing algorithms emerge as special cases of our framework. A companion paper (Ramgraber et al., 2023) explores the implementation of nonlinear ensemble transport smoothers in greater depth.
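The weight degeneracy that makes "the ensemble size required for successful importance sampling prohibitive" can be demonstrated with a short self-normalized importance-sampling experiment: weighting prior samples by a Gaussian likelihood and watching the effective sample size collapse as the dimension grows. The target, proposal, and dimensions below are illustrative assumptions, not a setup from the papers:

```python
import numpy as np

rng = np.random.default_rng(1)

def ess(logw):
    """Effective sample size of self-normalized importance weights, from log-weights."""
    w = np.exp(logw - logw.max())   # subtract max for numerical stability
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

N = 1000
ess_by_dim = {}
for d in (1, 10, 100):
    X = rng.standard_normal((N, d))              # prior samples, x ~ N(0, I_d)
    y = np.ones(d)                               # observe every coordinate as 1.0
    logw = -0.5 * np.sum((y - X) ** 2, axis=1)   # Gaussian log-likelihood, unit obs noise
    ess_by_dim[d] = ess(logw)
```

With a fixed budget of `N = 1000` samples, the effective sample size is a healthy fraction of `N` in one dimension but drops to a handful of effectively contributing samples by `d = 100`: nearly all the probability mass concentrates on the single best-matching sample, which is the weight-collapse phenomenon cited from Snyder et al. in the companion abstract above.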