Ensemble transport smoothing. Part II: Nonlinear updates

Maximilian Ramgraber, Ricardo Baptista, Dennis McLaughlin, Youssef Marzouk

arXiv.org Machine Learning 

Sequential Monte Carlo methods can characterize arbitrary distributions using sequential importance sampling and resampling, but typically require very large sample sizes to mitigate weight collapse [Snyder et al., 2008, 2015]. By contrast, ensemble Kalman-type methods avoid the use of weights, but are based on affine prior-to-posterior updates that are consistent only if all distributions involved are Gaussian. In the context of smoothing, such methods include the ensemble Kalman smoother (EnKS) [Evensen and van Leeuwen, 2000], which has inspired numerous algorithmic variations such as the ensemble smoother with multiple data assimilation [Emerick and Reynolds, 2013] and the iterative ensemble Kalman smoother (iEnKS) [Bocquet and Sakov, 2014, Evensen et al., 2019], as well as backward smoothers such as the ensemble Rauch-Tung-Striebel smoother (EnRTSS) [Raanes, 2016]. These two classes of methods occupy opposite ends of a spectrum, ranging from statistical generality at one end to computational efficiency at the other. This trade-off complicates design decisions for smoothing problems that are at once non-Gaussian and computationally expensive.
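
To make the two ends of this spectrum concrete, the following is a minimal sketch (not the paper's method) contrasting a sequential importance resampling (SIR) update, whose weights can collapse, with the affine stochastic ensemble Kalman update, on a scalar linear-Gaussian toy problem. The toy model, the ensemble size, and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 200                       # ensemble size (assumed for illustration)
obs_std = 0.5                 # observation noise standard deviation
x_prior = rng.normal(0.0, 1.0, size=n)   # prior ensemble, x ~ N(0, 1)
y_obs = 1.5                   # observed value of y = x + noise

# --- Importance-sampling update: weight each prior sample by its likelihood.
log_w = -0.5 * ((y_obs - x_prior) / obs_std) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()
ess = 1.0 / np.sum(w**2)      # effective sample size; a small ESS signals weight collapse
x_sir = rng.choice(x_prior, size=n, p=w)  # multinomial resampling

# --- Stochastic EnKF update: affine prior-to-posterior map, exact only for Gaussians.
y_pert = x_prior + rng.normal(0.0, obs_std, size=n)  # perturbed predicted observations
cov = np.cov(x_prior, y_pert)             # 2x2 sample covariance of (x, y)
gain = cov[0, 1] / cov[1, 1]              # scalar Kalman gain estimate
x_enkf = x_prior + gain * (y_obs - y_pert)

print(f"ESS = {ess:.1f} of {n}")
print(f"SIR  posterior mean/std: {x_sir.mean():.3f} / {x_sir.std():.3f}")
print(f"EnKF posterior mean/std: {x_enkf.mean():.3f} / {x_enkf.std():.3f}")
# Analytic posterior for this toy: mean 1.2, std sqrt(0.2) ~ 0.447

In this linear-Gaussian setting both updates target the same posterior, so the printed statistics should roughly agree. The trade-off the abstract describes appears when the setting changes: a sharper likelihood or higher dimension drives the SIR effective sample size toward one (weight collapse), while the affine Kalman update stays numerically stable but is no longer consistent once the distributions involved are non-Gaussian.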
