
 Caire, Giuseppe


Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging

arXiv.org Artificial Intelligence

Block-sparse regularization is well established in active thermal imaging and is used for multiple-measurement inverse problems. The main bottleneck of this method is the choice of regularization parameters, which differs from experiment to experiment. To avoid time-consuming manual selection of regularization parameters, we propose a learned block-sparse optimization approach based on an iterative algorithm unfolded into a deep neural network. More precisely, we show the benefits of a learned block iterative shrinkage thresholding algorithm that learns the choice of regularization parameters. In addition, this algorithm determines a suitable weight matrix for solving the underlying inverse problem. In this paper we therefore present the algorithm and compare it with state-of-the-art block iterative shrinkage thresholding on synthetically generated test data and on experimental data from active thermography for defect reconstruction. Our results show that the learned block-sparse optimization approach yields smaller normalized mean square errors for a small, fixed number of iterations than its non-learned counterpart. The new approach thus improves the convergence speed and needs only a few iterations to generate accurate defect reconstructions in photothermal super resolution imaging.
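
For concreteness, a minimal NumPy sketch of the non-learned baseline is given below: plain block iterative shrinkage thresholding for the multiple-measurement model Y = A X + noise with row-sparse X. The names block_soft_threshold and block_ista, the fixed step size, and the regularization weight lam are illustrative placeholders; in the learned (unfolded) variant each layer's threshold and weight matrix would instead be trainable.

import numpy as np

def block_soft_threshold(X, tau):
    # Row-wise (block) soft thresholding: shrink the l2 norm of each row of X by tau.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def block_ista(A, Y, lam, n_iter=20):
    # Minimize 0.5*||A X - Y||_F^2 + lam * sum_i ||X[i, :]||_2 by proximal gradient steps.
    # In the learned variant, the step size, the threshold and the A^T in the gradient
    # step are replaced by per-layer trainable parameters.
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # inverse Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = block_soft_threshold(X - step * (A.T @ (A @ X - Y)), step * lam)
    return X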


Plug-And-Play Learned Gaussian-mixture Approximate Message Passing

arXiv.org Machine Learning

Deep unfolding has proven to be a very successful approach for accelerating and tuning classical signal processing algorithms. In this paper, we propose learned Gaussian-mixture AMP (L-GM-AMP), a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior. Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm. The robust and flexible denoiser is a byproduct of modelling the source prior with a Gaussian mixture (GM), which can well approximate continuous, discrete, as well as mixture distributions. Its parameters are learned using the standard backpropagation algorithm. To demonstrate the robustness of the proposed algorithm, we conduct Monte-Carlo (MC) simulations for both mixture and discrete distributions. Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
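
The key ingredient described above is a denoiser derived from a Gaussian-mixture prior. A minimal sketch of such an MMSE denoiser is shown below, assuming an i.i.d. scalar channel r = x + N(0, sigma2); the mixture parameters w, mu and s2 are placeholders that L-GM-AMP would learn via backpropagation, and the surrounding AMP iterations are omitted.

import numpy as np

def gm_denoiser(r, sigma2, w, mu, s2):
    # MMSE estimate of x from r = x + N(0, sigma2) when x ~ sum_k w[k] N(mu[k], s2[k]).
    r = r[:, None]                                   # (N, 1) against (K,) mixture parameters
    var = s2 + sigma2
    log_resp = np.log(w) - 0.5 * np.log(2 * np.pi * var) - 0.5 * (r - mu) ** 2 / var
    resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)          # posterior component responsibilities
    post_mean = (s2 * r + sigma2 * mu) / var          # per-component posterior mean
    return (resp * post_mean).sum(axis=1)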


Real-time Localization Using Radio Maps

arXiv.org Machine Learning

Global Navigation Satellite Systems typically perform poorly in urban environments when there is no line-of-sight between the devices and the satellites, and thus alternative localization methods are often required. We present a simple yet effective method for localization based on pathloss. In our approach, the user to be localized reports the received signal strength from a set of base stations with known locations. For each base station we have a good approximation of the pathloss at every location on the map, provided by RadioUNet, an efficient deep-learning-based simulator of pathloss functions in urban environments, akin to ray tracing. Using these pathloss approximations for all base stations and the reported signal strengths, we extract a very accurate approximation of the user's location.
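
As a rough illustration of the matching step, the sketch below assumes the pathloss approximations are available as a stack of per-base-station maps and returns the map pixel whose predicted signal vector best fits the reported strengths in the least-squares sense; the function name and array layout are hypothetical, and this is not necessarily the exact estimator used in the paper.

import numpy as np

def localize_from_pathloss_maps(maps, reported_rss):
    # maps: (B, H, W) predicted pathloss of each of B base stations at every pixel.
    # reported_rss: length-B vector of signal strengths reported by the user.
    # Returns the (row, col) pixel minimizing the squared mismatch over base stations.
    B, H, W = maps.shape
    residual = maps - reported_rss[:, None, None]
    cost = (residual ** 2).sum(axis=0)               # per-pixel squared mismatch
    return np.unravel_index(np.argmin(cost), (H, W))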


Multiple Measurement Vectors Problem: A Decoupling Property and its Applications

arXiv.org Machine Learning

Efficient and reliable estimation in many signal processing problems encountered in applications requires adopting a sparsity prior on the signals in a suitable basis and using techniques from compressed sensing (CS). In this paper, we study a CS problem known as the Multiple Measurement Vectors (MMV) problem, which arises in the joint estimation of multiple signal realizations when the signal samples have a common (joint) support over a fixed known dictionary. Although there is a vast literature on the analysis of MMV, it is not yet fully known how the number of signal samples and their statistical correlations affect the performance of joint estimation in MMV. Moreover, in many instances of MMV the underlying sparsifying dictionary may not be precisely known, and it is still an open problem to quantify how dictionary mismatch affects the estimation performance. In this paper, we focus on $\ell_{2,1}$-norm regularized least squares ($\ell_{2,1}$-LS), a well-known and widely used MMV algorithm in the literature. We prove an interesting decoupling property for $\ell_{2,1}$-LS, showing that it can be decomposed into two phases: i) use all the signal samples to estimate the signal covariance matrix (coupled phase); ii) plug the resulting covariance estimate, as if it were the true covariance matrix, into the Minimum Mean Squared Error (MMSE) estimator to reconstruct each signal sample individually (decoupled phase). As a consequence of this decomposition, we are able to provide further insight into the performance of $\ell_{2,1}$-LS for MMV. In particular, we address how signal correlations and dictionary mismatch affect its estimation performance. We also provide numerical simulations to validate our theoretical results.
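
The decoupled phase described above amounts to a plug-in linear MMSE estimate applied column by column. A minimal sketch, assuming a measurement model Y = A X + noise with noise variance noise_var and a covariance estimate Sigma_hat produced by the coupled phase (the function name and argument layout are illustrative):

import numpy as np

def decoupled_mmse(A, Y, Sigma_hat, noise_var):
    # Plug the covariance estimate into the linear MMSE estimator and recover each
    # signal sample (column of Y) individually:
    # X_hat = Sigma A^T (A Sigma A^T + noise_var I)^{-1} Y.
    M = A.shape[0]
    G = Sigma_hat @ A.T @ np.linalg.inv(A @ Sigma_hat @ A.T + noise_var * np.eye(M))
    return G @ Y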


Multi-Band Covariance Interpolation with Applications in Massive MIMO

arXiv.org Machine Learning

In this paper, we study the problem of multi-band (frequency-variant) covariance interpolation with a particular emphasis on massive MIMO applications. In a massive MIMO system, the communication between each base station (BS) with $M \gg 1$ antennas and each single-antenna user occurs through a collection of scatterers in the environment: the channel vector of each user at the BS antennas consists of a weighted linear combination of the array responses of the scatterers, each with its own angle of arrival (AoA) and complex channel gain. The array response at a given AoA depends on the wavelength of the incoming planar wave and is therefore naturally frequency dependent. As a result, the second-order statistics of the channel vectors, i.e., their covariance matrix, vary with frequency. In this paper, we show that although this effect is generally negligible for a small number of antennas $M$, it results in a considerable distortion of the covariance matrix, and especially of its dominant signal subspace, in the massive MIMO regime where $M \to \infty$, and can incur a serious performance degradation, especially in frequency division duplexing (FDD) massive MIMO systems, where uplink (UL) and downlink (DL) communication occur over different frequency bands. We propose a novel UL-DL covariance interpolation technique that is able to recover the covariance matrix in the DL from an estimate of the covariance matrix in the UL under a mild reciprocity condition on the angular power spread function (PSF) of the users. We analyze the performance of our proposed scheme mathematically and prove its robustness under sufficiently large spatial oversampling of the array. We also propose several simple off-the-shelf algorithms for UL-DL covariance interpolation and evaluate their performance via numerical simulations.
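
To make the frequency dependence concrete, the sketch below computes the channel covariance of a uniform linear array from a sampled angular PSF at a given carrier wavelength; evaluating it at the UL and DL wavelengths yields the two covariances whose relation the proposed interpolation exploits. The uniform-linear-array geometry and the function signature are simplifying assumptions, not the paper's general model.

import numpy as np

def ula_covariance(psf, thetas, wavelength, M, spacing):
    # psf: angular power spread function sampled at angles 'thetas' (radians).
    # Returns C = sum_k psf_k a(theta_k; wavelength) a(theta_k; wavelength)^H, the
    # covariance of an M-antenna ULA with the given element spacing.
    m = np.arange(M)[:, None]
    A = np.exp(1j * 2 * np.pi * spacing / wavelength * m * np.sin(thetas)[None, :])
    weights = psf / psf.sum()                         # normalized angular PSF
    return (A * weights) @ A.conj().T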


Signal Recovery from Unlabeled Samples

arXiv.org Machine Learning

In this paper, we study the recovery of a signal from a set of noisy linear projections (measurements) when the projections are unlabeled, that is, when the correspondence between the measurements and the set of projection vectors (i.e., the rows of the measurement matrix) is not known a priori. We consider a special case of unlabeled sensing referred to as Unlabeled Ordered Sampling (UOS), in which the ordering of the measurements is preserved. We identify a natural duality between this problem and classical Compressed Sensing (CS): the unknown support (location of the nonzero elements) of a sparse signal in CS corresponds to the unknown indices of the measurements in UOS. While in CS it is possible to recover a sparse signal from an under-determined set of linear equations (fewer equations than the signal dimension), successful recovery in UOS requires taking more samples than the dimension of the signal. Motivated by this duality, we develop a Restricted Isometry Property (RIP) similar to that in CS. We also design a low-complexity Alternating Minimization algorithm that achieves stable signal recovery under the established RIP. We analyze the proposed algorithm theoretically for different signal dimensions and numbers of measurements, and investigate its performance empirically via simulations. The results exhibit a phase transition similar to that occurring in CS.
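
A minimal sketch of an alternating-minimization scheme of the flavor described above is given below (not necessarily the authors' exact algorithm): given a signal estimate, an order-preserving assignment of measurements to rows of the measurement matrix B is found by dynamic programming, and the signal is then re-estimated by least squares on the selected rows.

import numpy as np

def assign_rows_ordered(B, y, x):
    # Given a signal estimate x, choose row indices s_1 < ... < s_m of B that best
    # explain y, by dynamic programming over order-preserving assignments.
    n, m = B.shape[0], len(y)
    c = (y[:, None] - (B @ x)[None, :]) ** 2        # c[i, j]: cost of using row j for sample i
    cost = np.full((m, n), np.inf)
    arg = np.zeros((m, n), dtype=int)
    cost[0] = c[0]
    for i in range(1, m):
        best, best_j = np.inf, -1
        for j in range(i, n):
            if cost[i - 1, j - 1] < best:           # best predecessor among rows < j
                best, best_j = cost[i - 1, j - 1], j - 1
            cost[i, j] = c[i, j] + best
            arg[i, j] = best_j
    s = np.zeros(m, dtype=int)
    s[-1] = int(np.argmin(cost[-1]))
    for i in range(m - 1, 0, -1):
        s[i - 1] = arg[i, s[i]]
    return s

def uos_alt_min(B, y, n_iter=10):
    # Alternate between assigning measurement indices and re-solving for the signal.
    x = np.linalg.lstsq(B[: len(y)], y, rcond=None)[0]   # crude initialization
    for _ in range(n_iter):
        s = assign_rows_ordered(B, y, x)
        x = np.linalg.lstsq(B[s], y, rcond=None)[0]
    return x, s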


Compressive Estimation of a Stochastic Process with Unknown Autocorrelation Function

arXiv.org Machine Learning

In this paper, we study the prediction of a circularly symmetric zero-mean stationary Gaussian process from a window of observations consisting of finitely many samples. This is a prevalent problem in a wide range of applications in communication theory and signal processing. Due to stationarity, when the autocorrelation function or, equivalently, the power spectral density (PSD) of the process is available, the Minimum Mean Squared Error (MMSE) predictor is readily obtained. In particular, it is given by a linear operator that depends on the autocorrelation of the process as well as on the noise power in the observed samples. The prediction becomes, however, quite challenging when the PSD of the process is unknown. In this paper, we propose a blind predictor that does not require a priori knowledge of the PSD of the process, and compare its performance with that of an MMSE predictor that has full knowledge of the PSD. To design such a blind predictor, we use the random spectral representation of a stationary Gaussian process. We apply the well-known atomic-norm minimization technique to the observed samples to obtain a discrete quantization of the underlying random spectrum, which we then use to predict the process. Our simulation results show that the blind predictor achieves a performance comparable with that of the MMSE predictor.
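
For reference, the sketch below computes the weights of the linear MMSE predictor used as the known-PSD baseline; it is a real-valued simplification in which acf is a user-supplied autocorrelation function accepting (arrays of) integer lags. The blind, atomic-norm-based predictor proposed in the paper is not reproduced here.

import numpy as np

def mmse_predictor_weights(acf, noise_var, n_obs, horizon):
    # Weights w such that x_hat = w @ y predicts x[n_obs - 1 + horizon] from the
    # noisy window y[k] = x[k] + noise, k = 0..n_obs-1, for a zero-mean stationary
    # process with autocorrelation acf(lag):  w = (R + noise_var I)^{-1} r.
    k = np.arange(n_obs)
    R = acf(np.abs(k[:, None] - k[None, :])) + noise_var * np.eye(n_obs)
    r = acf(n_obs - 1 + horizon - k)                  # cross-covariance with the target
    return np.linalg.solve(R, r)

# Example usage with a hypothetical exponentially decaying autocorrelation:
# w = mmse_predictor_weights(lambda lag: np.exp(-0.1 * np.abs(lag)), 0.01, 64, 1)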


Channel Vector Subspace Estimation from Low-Dimensional Projections

arXiv.org Machine Learning

Massive MIMO is a variant of multiuser MIMO in which the number of base-station antennas $M$ is very large (typically 100) and generally much larger than the number of spatially multiplexed data streams (typically 10). Unfortunately, the front-end A/D conversion necessary to drive hundreds of antennas, with a signal bandwidth of the order of 10 to 100 MHz, requires a very large sampling bit-rate and power consumption. In order to reduce such implementation requirements, Hybrid Digital-Analog architectures have been proposed. In particular, our work in this paper is motivated by one such scheme, named Joint Spatial Division and Multiplexing (JSDM), where the downlink precoder (resp., uplink linear receiver) is split into the product of a baseband linear projection (digital) and an RF reconfigurable beamforming network (analog), such that only a reduced number $m \ll M$ of A/D converters and RF modulation/demodulation chains is needed. In JSDM, users are grouped according to the similarity of the dominant subspaces of their channels; the groups are separated by the analog beamforming stage, while the multiplexing gain within each group is achieved by the digital precoder. It is therefore apparent that extracting the channel subspace information of the $M$-dim channel vectors from snapshots of $m$-dim projections, with $m \ll M$, plays a fundamental role in JSDM implementation. In this paper, we develop novel efficient algorithms that require sampling only $m = O(2\sqrt{M})$ specific array elements according to a coprime sampling scheme and, for a given $p \ll M$, return a $p$-dim beamformer whose performance is comparable with that of the best $p$-dim beamformer designed from full knowledge of the exact channel covariance matrix. We assess the performance of our proposed estimators both analytically and empirically via numerical simulations.
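
A hedged sketch of the sub-sampling idea is given below, assuming a uniform linear array with a spatially stationary (hence Toeplitz) channel covariance: antennas are selected on a coprime grid, the covariance lags available in the difference co-array are averaged from the sub-array snapshots, and a p-dimensional beamformer is read off the dominant eigenvectors of the completed Toeplitz matrix. The helper names are hypothetical and this is not the authors' exact estimator.

import numpy as np

def coprime_subarray(p, q, M):
    # Antenna indices of a coprime sub-array: all multiples of p and of q below M.
    return np.union1d(np.arange(0, M, p), np.arange(0, M, q))

def coarray_covariance(snapshots, idx, M):
    # snapshots: (len(idx), T) channel observations on the sub-array 'idx'.
    # Average the lag products available in the difference co-array, then fill the
    # Hermitian Toeplitz M x M covariance matrix from the estimated lags.
    r = np.zeros(M, dtype=complex)
    count = np.zeros(M)
    for a, ia in enumerate(idx):
        for b, ib in enumerate(idx):
            lag = int(ia - ib)
            if 0 <= lag < M:
                r[lag] += np.mean(snapshots[a] * np.conj(snapshots[b]))
                count[lag] += 1
    r /= np.maximum(count, 1)
    lagmat = np.arange(M)[:, None] - np.arange(M)[None, :]
    return np.where(lagmat >= 0, r[np.abs(lagmat)], np.conj(r[np.abs(lagmat)]))

# A p-dimensional beamformer can then be read off the top-p eigenvectors:
# U_p = np.linalg.eigh(C)[1][:, -p:]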