Robust Hierarchical-Optimization RLS Against Sparse Outliers
Slavakis, Konstantinos, Banerjee, Sinjini
Abstract

This paper fortifies the recently introduced hierarchical-optimization recursive least squares (HO-RLS) against outliers which infrequently contaminate linear-regression models. Outliers are modeled as nuisance variables and are estimated together with the linear filter/system variables via a sparsity-inducing, (non-)convexly regularized least-squares task. The proposed outlier-robust HO-RLS builds on steepest-descent directions with a constant step size (learning rate), needs no matrix inversion (lemma), accommodates colored nominal noise of known correlation matrix, exhibits a small computational footprint, and offers theoretical guarantees, in a probabilistic sense, for the convergence of the system estimates to the solutions of a hierarchical-optimization problem: minimize a convex loss, which models a priori knowledge about the unknown system, over the set of minimizers of the classical ensemble LS loss. Extensive numerical tests on synthetically generated data, in both stationary and non-stationary scenarios, showcase notable improvements of the proposed scheme over state-of-the-art techniques.

1 Introduction

The recursive least squares (RLS) algorithm has been a pivotal method for solving LS problems in adaptive filtering and system identification [1], with a reach that also extends into contemporary learning tasks, such as solving large-scale LS problems in online learning, e.g., [2]. Nevertheless, the performance of RLS (and of LS estimators in general) deteriorates in the presence of outliers, i.e., data or noise not adhering to a nominal data-generation model [3].
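To make the setting concrete, the sketch below illustrates the general idea of jointly estimating a linear system and a sparse outlier vector: a constant-step-size steepest-descent step on the LS loss for the system, alternated with a soft-thresholding (l1-proximal) step for the outliers. This is a minimal illustration under assumed names and parameters (robust_ls_sparse_outliers, lam, n_iter), not the paper's HO-RLS algorithm, which additionally handles colored nominal noise, (non-)convex regularization, and the hierarchical (bi-level) objective.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of the l1 norm: shrinks each entry toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def robust_ls_sparse_outliers(H, y, lam=0.5, n_iter=200):
    """Illustrative sketch (not the paper's HO-RLS): estimate x and a sparse
    outlier vector o from y ~ H @ x + o + noise by alternating a constant-
    step-size steepest-descent step on x with an l1 soft-thresholding step on o."""
    n, p = H.shape
    x = np.zeros(p)                        # system/filter estimate
    o = np.zeros(n)                        # sparse outlier estimate
    mu = 1.0 / np.linalg.norm(H, 2) ** 2   # constant step size <= 1/L, L = ||H||_2^2
    for _ in range(n_iter):
        r = y - H @ x - o                  # residual under current estimates
        x = x + mu * (H.T @ r)             # steepest-descent step on the LS loss
        o = soft_threshold(y - H @ x, lam) # sparsity-promoting outlier update
    return x, o

# Example: a few large, infrequent outliers on top of small nominal noise.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
y = H @ x_true + 0.01 * rng.standard_normal(200)
y[::25] += 10.0                            # sparse, large-magnitude outliers
x_hat, o_hat = robust_ls_sparse_outliers(H, y)
```

The constant step size mirrors the paper's emphasis on avoiding the matrix-inversion lemma; the regularization weight lam trades sparsity of the estimated outliers against fidelity to the nominal LS loss.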
Oct-11-2019