Error Bounds for Compressed Sensing Algorithms With Group Sparsity: A Unified Approach

arXiv.org Machine Learning

In compressed sensing, the most popular approach to recovering a sparse or nearly sparse vector from possibly noisy measurements is $\ell_1$-norm minimization. Upper bounds for the $\ell_2$-norm of the error between the true and estimated vectors are given in [1] and reviewed in [2], while bounds for the $\ell_1$-norm are given in [3]. When the unknown vector is not conventionally sparse but is "group sparse" instead, a variety of alternatives to the $\ell_1$-norm have been proposed in the literature, including the group LASSO, the sparse group LASSO, and the group LASSO with tree-structured overlapping groups. However, no error bounds are available for any of these modified objective functions. In the present paper, a unified approach is presented for deriving upper bounds on the error between the true vector and its approximation, based on the notion of decomposable and $\gamma$-decomposable norms. The bounds presented cover all of the norms mentioned above, and also provide a guideline for choosing norms in the future to accommodate alternate forms of sparsity.
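As a concrete illustration of the penalties covered by such bounds, here is a minimal NumPy sketch of the group LASSO and sparse group LASSO norms; the group index lists and the mixing parameter `alpha` are hypothetical choices for illustration, not quantities fixed by the paper.

```python
import numpy as np

def group_lasso_norm(x, groups):
    """Group LASSO penalty: sum of l2 norms over (non-overlapping) groups.

    `groups` is a hypothetical list of index arrays, one per group.
    """
    return sum(np.linalg.norm(x[g]) for g in groups)

def sparse_group_lasso_norm(x, groups, alpha=0.5):
    """Sparse group LASSO: convex combination of the l1 and group LASSO norms."""
    return alpha * np.abs(x).sum() + (1 - alpha) * group_lasso_norm(x, groups)

# Example: a 6-dimensional vector split into two groups of three coefficients.
x = np.array([1.0, -2.0, 0.0, 0.0, 0.0, 0.5])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(group_lasso_norm(x, groups), sparse_group_lasso_norm(x, groups))
```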


Compressed Sensing for Block-Sparse Smooth Signals

arXiv.org Machine Learning

We present reconstruction algorithms for smooth signals with block sparsity from their compressed measurements. We tackle the issue of varying group size via group-sparse least absolute shrinkage and selection operator (LASSO) as well as via latent group LASSO regularizations. We achieve smoothness in the signal via fusion. We develop low-complexity solvers for our proposed formulations through the alternating direction method of multipliers.
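As a rough sketch of the kind of cost such formulations combine, the following NumPy snippet evaluates a group-sparse LASSO objective with an added fusion (total-variation) term; the variable names and the direct evaluation (rather than the ADMM solvers developed in the paper) are assumptions made purely for illustration.

```python
import numpy as np

def fused_group_lasso_objective(x, A, y, groups, lam_group, lam_fuse):
    """Least-squares fit + group LASSO penalty + fusion (total-variation) penalty.

    `groups` is a hypothetical list of index arrays defining the blocks;
    the paper's formulations are solved with ADMM, not evaluated directly.
    """
    fit = 0.5 * np.sum((A @ x - y) ** 2)
    group_pen = lam_group * sum(np.linalg.norm(x[g]) for g in groups)
    fuse_pen = lam_fuse * np.sum(np.abs(np.diff(x)))  # penalizes jumps between neighbors
    return fit + group_pen + fuse_pen
```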


Support union recovery in high-dimensional multivariate regression

arXiv.org Machine Learning

In multivariate regression, a $K$-dimensional response vector is regressed upon a common set of $p$ covariates, with a matrix $B^*\in\mathbb{R}^{p\times K}$ of regression coefficients. We study the behavior of the multivariate group Lasso, in which block regularization based on the $\ell_1/\ell_2$ norm is used for support union recovery, or recovery of the set of $s$ rows for which $B^*$ is nonzero. Under high-dimensional scaling, we show that the multivariate group Lasso exhibits a threshold for the recovery of the exact row pattern with high probability over the random design and noise that is specified by the sample complexity parameter $\theta(n,p,s):=n/[2\psi(B^*)\log(p-s)]$. Here $n$ is the sample size, and $\psi(B^*)$ is a sparsity-overlap function measuring a combination of the sparsities and overlaps of the $K$-regression coefficient vectors that constitute the model. We prove that the multivariate group Lasso succeeds for problem sequences $(n,p,s)$ such that $\theta(n,p,s)$ exceeds a critical level $\theta_u$, and fails for sequences such that $\theta(n,p,s)$ lies below a critical level $\theta_{\ell}$. For the special case of the standard Gaussian ensemble, we show that $\theta_{\ell}=\theta_u$ so that the characterization is sharp. The sparsity-overlap function $\psi(B^*)$ reveals that, if the design is uncorrelated on the active rows, $\ell_1/\ell_2$ regularization for multivariate regression never harms performance relative to an ordinary Lasso approach and can yield substantial improvements in sample complexity (up to a factor of $K$) when the coefficient vectors are suitably orthogonal. For more general designs, it is possible for the ordinary Lasso to outperform the multivariate group Lasso. We complement our analysis with simulations that demonstrate the sharpness of our theoretical results, even for relatively small problems.
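For orientation, a minimal NumPy sketch of the two quantities named in this abstract: the $\ell_1/\ell_2$ block regularizer and the sample complexity parameter $\theta(n,p,s)$. The sparsity-overlap value $\psi(B^*)$ is defined in the paper itself, so here it is simply passed in as a number.

```python
import numpy as np

def l1_l2_block_norm(B):
    """l1/l2 block regularizer: sum of the l2 norms of the rows of B."""
    return np.sum(np.linalg.norm(B, axis=1))

def sample_complexity(n, p, s, psi):
    """theta(n, p, s) = n / (2 * psi(B*) * log(p - s)).

    `psi` stands in for the sparsity-overlap function value psi(B*),
    whose definition is given in the paper; requires p > s.
    """
    return n / (2.0 * psi * np.log(p - s))
```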


Convex Approaches to Model Wavelet Sparsity Patterns

arXiv.org Machine Learning

Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations that can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as the lasso.
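One natural instance of a convex group-sparsity penalty of this flavor couples each wavelet coefficient to its parent in the tree; the sketch below (NumPy, with a hypothetical binary-tree indexing) is offered only as an illustration of the idea, not as the paper's exact formulation.

```python
import numpy as np

def parent_child_group_penalty(w, parent_child_pairs):
    """Overlapping group penalty over (parent, child) wavelet coefficient pairs.

    Each pair forms one group; the l2 norm over the pair couples a coefficient
    to its parent, mimicking the persistence modeled by an HMT while keeping
    the penalty convex. The pairing below is a toy example.
    """
    return sum(np.linalg.norm(w[list(pair)]) for pair in parent_child_pairs)

# Toy binary tree over 7 coefficients: node i has children 2*i + 1 and 2*i + 2.
pairs = [(i, c) for i in range(3) for c in (2 * i + 1, 2 * i + 2)]
w = np.random.randn(7)
print(parent_child_group_penalty(w, pairs))
```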


Sufficient Conditions for Generating Group Level Sparsity in a Robust Minimax Framework

Neural Information Processing Systems

Regularization techniques have become a principled tool for statistics and machine learning research and practice. However, in most situations, these regularization terms are not well interpreted, especially with respect to how they are related to the loss function and the data. In this paper, we propose a robust minimax framework to interpret the relationship between data and regularization terms for a large class of loss functions. We show that various regularization terms essentially correspond to different distortions of the original data matrix. This minimax framework includes ridge regression, lasso, elastic net, fused lasso, group lasso, local coordinate coding, multiple kernel learning, etc., as special cases. Within this minimax framework, we further give a mathematically exact definition for a novel representation called the sparse grouping representation (SGR), and prove a set of sufficient conditions for generating such group-level sparsity. Under these sufficient conditions, a large set of consistent regularization terms can be designed. The SGR is essentially different from the group lasso in the way it uses class or group information, and it outperforms the group lasso when group label noise is present. We also provide some generalization bounds in a classification setting.
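For reference, the standard textbook forms of several of the regularizers named as special cases above are sketched below in NumPy; these definitions are given only to fix notation and are not the minimax derivation developed in the paper.

```python
import numpy as np

def ridge(w):
    """Ridge penalty: squared l2 norm."""
    return np.sum(w ** 2)

def lasso(w):
    """Lasso penalty: l1 norm."""
    return np.sum(np.abs(w))

def elastic_net(w, alpha=0.5):
    """Elastic net: convex combination of the lasso and ridge penalties."""
    return alpha * lasso(w) + (1 - alpha) * ridge(w)

def fused_lasso(w):
    """Fused lasso: l1 norm plus l1 norm of successive differences."""
    return lasso(w) + np.sum(np.abs(np.diff(w)))

def group_lasso(w, groups):
    """Group lasso: sum of l2 norms over the given index groups."""
    return sum(np.linalg.norm(w[g]) for g in groups)
```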