Debbah, Mérouane
Deep Learning for UL/DL Channel Calibration in Generic Massive MIMO Systems
Huang, Chongwen, Alexandropoulos, George C., Zappone, Alessio, Yuen, Chau, Debbah, Mérouane
One of the fundamental challenges in realizing massive Multiple-Input Multiple-Output (MIMO) communications is the accurate acquisition of channel state information for a plurality of users at the base station. This is usually accomplished in the UpLink (UL) direction by taking advantage of the channel reciprocity offered by the time division duplexing mode. In practical base station transceivers, however, nonlinear hardware components, such as signal amplifiers and various analog filters, are inevitably present, which complicates the calibration task. To deal with this challenge, we design a deep neural network for channel calibration between the UL and DownLink (DL) directions. During the initial training phase, the deep neural network is trained on both UL and DL channel measurements. We then feed the trained deep neural network with the instantaneously estimated UL channel to calibrate the DL one, which is not observable during the UL transmission phase. Our numerical results confirm the merits of the proposed approach and show that it achieves performance comparable to conventional approaches, such as the Argos method and methods based on least squares, which however assume linear hardware behavior models. More importantly, considering generic nonlinear relationships between the UL and DL channels, we demonstrate that our deep neural network approach exhibits robust performance even when the number of training sequences is limited.
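To make the two-phase procedure concrete, the following minimal sketch (written in PyTorch, which the abstract does not specify) shows an offline training step on paired UL/DL channel measurements and an online step that maps a fresh UL estimate to a predicted DL channel. The network width, antenna count, and training settings are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch (not the authors' code): a small fully connected network that
# learns a mapping from estimated UL channel coefficients to DL channel
# coefficients. Layer sizes, the antenna count M, and the training settings are
# illustrative assumptions; complex channels are handled by stacking real and
# imaginary parts.
import torch
import torch.nn as nn

M = 64           # assumed number of base station antennas
IN_DIM = 2 * M   # real + imaginary parts of the UL channel
OUT_DIM = 2 * M  # real + imaginary parts of the DL channel

calibration_net = nn.Sequential(
    nn.Linear(IN_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, OUT_DIM),
)

def train(ul_meas, dl_meas, epochs=200, lr=1e-3):
    """Offline phase: fit the network on paired UL/DL channel measurements."""
    opt = torch.optim.Adam(calibration_net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(calibration_net(ul_meas), dl_meas)
        loss.backward()
        opt.step()

def calibrate(ul_est):
    """Online phase: predict the unobserved DL channel from a fresh UL estimate."""
    with torch.no_grad():
        return calibration_net(ul_est)
```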
Distributed Power Allocation with SINR Constraints Using Trial and Error Learning
Rose, Luca, Perlaza, Samir M., Debbah, Mérouane, Martret, Christophe J. Le
In this paper, we address the problem of global transmit power minimization in a self-configuring network where radio devices must operate at a minimum signal-to-interference-plus-noise ratio (SINR) level. We model the network as a parallel Gaussian interference channel and introduce a fully decentralized algorithm, based on trial and error learning, that statistically achieves a configuration in which the performance demands are met. Contrary to existing solutions, our algorithm requires only local information and can learn stable and efficient operating points using only one-bit feedback. We model the network under two different game-theoretic frameworks: normal form and satisfaction form. We show that the convergence points correspond to equilibrium points, namely Nash and satisfaction equilibria. Furthermore, we provide sufficient conditions for the algorithm to converge in both formulations. Moreover, we provide analytical results to estimate the algorithm's performance as a function of the network parameters. Finally, numerical results are provided to validate our theoretical conclusions. Keywords: Learning, power control, trial and error, Nash equilibrium, spectrum sharing.
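As a rough illustration of decentralized learning from one-bit feedback, the sketch below lets each link keep its power level when its SINR target is met and experiment otherwise. The channel gains, discrete power grid, and experimentation rule are illustrative assumptions and do not reproduce the paper's exact trial and error dynamics or its parallel-channel model.

```python
# Hedged sketch (not the paper's algorithm): each transmitter-receiver pair
# observes only a one-bit "SINR target met / not met" feedback, keeps its power
# level when satisfied (occasionally trying a cheaper level to reduce total
# power), and experiments with a random level when unsatisfied. All numerical
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
K = 4                               # number of transmitter-receiver pairs
P_SET = np.linspace(0.1, 1.0, 10)   # discrete power levels (W)
GAMMA_MIN = 2.0                     # target SINR per link
NOISE = 0.05                        # noise power
EPS = 0.1                           # experimentation probability when satisfied

G = rng.uniform(0.1, 1.0, (K, K))                # cross-channel gains
np.fill_diagonal(G, rng.uniform(0.8, 1.2, K))    # direct-channel gains

def sinr(p):
    """SINR of every link for the power profile p (single-band interference model)."""
    signal = np.diag(G) * p
    interference = G @ p - signal
    return signal / (interference + NOISE)

p_idx = rng.integers(len(P_SET), size=K)  # random initial power levels
for t in range(5000):
    satisfied = sinr(P_SET[p_idx]) >= GAMMA_MIN   # one-bit feedback per link
    for k in range(K):
        if satisfied[k]:
            # Satisfied links occasionally try the next cheaper level.
            if rng.random() < EPS and p_idx[k] > 0:
                p_idx[k] -= 1
        else:
            # Unsatisfied links experiment with a new random level.
            p_idx[k] = rng.integers(len(P_SET))

print("final powers:", P_SET[p_idx])
print("final SINRs:", sinr(P_SET[p_idx]))
```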