Kernel Recursive Least Squares Dictionary Learning Algorithm

Ghasem Alipoor and Karl Skretting


Data factorization methods have met with considerable success in discovering latent features of signals encountered in a wide range of applications. In these methods, the representation bases, which form the columns of the basis matrix or dictionary, are learned from available samples of the target environment. An example is sparse representation (SR), in which the dictionary is intended to best represent the data using a small number of atoms, much smaller than the dimension of the signal space. It has been shown that, in addition to providing a more informative representation of signals, imposing sparsity constraints on the representation coefficients can improve both generalization performance and computational efficiency [1, 2, 3]. Furthermore, sparse representations are more robust to noise, redundancy, and missing data. These properties are mainly attributed to the fact that the intrinsic dimension of natural signals is usually much smaller than their apparent dimension, and hence SR in an appropriate dictionary can extract these intrinsic features more efficiently. SR has been a successful strategy, receiving considerable attention and achieving state-of-the-art results in many applications, e.g.
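To make the sparse representation setting concrete, the sketch below illustrates the standard model y ≈ Dx with at most k non-zero coefficients, solved here with a plain orthogonal matching pursuit routine written in NumPy. This is only a minimal illustration of sparse coding with a fixed random dictionary; it is not the kernel recursive least squares dictionary learning algorithm proposed in the paper, and the dimensions, sparsity level, and function names are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k atoms of D.

    D : (m, n) dictionary with unit-norm columns (atoms).
    y : (m,) signal to be represented.
    k : target number of non-zero coefficients (k much smaller than m).
    """
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the signal on the selected atoms (least squares on the support).
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

# Toy example: a 64-dimensional signal built from 3 atoms of a 256-atom dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(256)
x_true[rng.choice(256, 3, replace=False)] = rng.standard_normal(3)
y = D @ x_true

x_hat = omp(D, y, k=3)
print("reconstruction error:", np.linalg.norm(y - D @ x_hat))
print("non-zeros found:", np.count_nonzero(x_hat))
```

In dictionary learning, the dictionary D itself is additionally updated from the training samples rather than fixed in advance, which is the setting addressed by the algorithm developed in this paper.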