hardware impairment
Model-based learning for joint channel estimation and hybrid MIMO precoding
Klaimi, Nay, Bedoui, Amira, Elvira, Clément, Mary, Philippe, Magoarou, Luc Le
Hybrid precoding is a key ingredient of cost-effective massive multiple-input multiple-output transceivers. However, jointly setting digital and analog precoders to optimally serve multiple users is a difficult optimization problem. Moreover, it relies heavily on precise knowledge of the channels, which is difficult to obtain, especially in realistic systems comprising hardware impairments. In this paper, a joint channel estimation and hybrid precoding method is proposed, which consists of an end-to-end architecture taking received pilots as inputs and outputting precoders. The resulting neural network is fully model-based, making it lightweight and interpretable with very few learnable parameters. The channel estimation step is performed using the unfolded matching pursuit algorithm, accounting for imperfect knowledge of the antenna system, while the precoding step is done via unfolded projected gradient ascent. The great potential of the proposed method is empirically demonstrated on realistic synthetic channels.
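As a rough illustration of the channel-estimation building block named above (the unfolding and learnable parameters are not reproduced here), a minimal matching pursuit over a hypothetical grid of half-wavelength ULA steering vectors might look like this sketch:

```python
import numpy as np

def matching_pursuit(y, A, n_iter=3):
    """Greedy sparse recovery: at each step pick the dictionary atom
    most correlated with the residual and update the estimate."""
    residual = y.copy()
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        corr = A.conj().T @ residual
        k = np.argmax(np.abs(corr))
        gain = corr[k] / np.linalg.norm(A[:, k]) ** 2
        x[k] += gain
        residual = residual - gain * A[:, k]
    return x

# Toy dictionary of ULA steering vectors over a grid of angles
n_ant, angles = 16, np.linspace(-np.pi / 2, np.pi / 2, 61)
A = np.exp(1j * np.pi * np.outer(np.arange(n_ant), np.sin(angles)))
y = 0.8 * A[:, 20]  # noiseless channel with a single on-grid path
x_hat = matching_pursuit(y, A, n_iter=1)
```

In the unfolded version described in the abstract, the dictionary itself (i.e., the assumed antenna geometry) becomes a learnable quantity, which is what allows the method to absorb hardware impairments.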
Model-driven deep neural network for enhanced direction finding with commodity 5G gNodeB
Liu, Shengheng, Mao, Zihuan, Li, Xingkang, Pan, Mengguan, Liu, Peng, Huang, Yongming, You, Xiaohu
Pervasive and high-accuracy positioning has become increasingly important as a fundamental enabler for intelligent connected devices in mobile networks. Nevertheless, current wireless networks heavily rely on pure model-driven techniques to achieve positioning functionality, often succumbing to performance deterioration due to hardware impairments in practical scenarios. Here we reformulate the direction finding or angle-of-arrival (AoA) estimation problem as an image recovery task of the spatial spectrum and propose a new model-driven deep neural network (MoD-DNN) framework. The proposed MoD-DNN scheme comprises three modules: a multi-task autoencoder-based beamformer, a coarray spectrum generation module, and a model-driven deep learning-based spatial spectrum reconstruction module. Our technique enables automatic calibration of angular-dependent phase error, thereby enhancing the resilience of direction-finding precision against realistic system non-idealities. We validate the proposed scheme using both numerical simulations and field tests. The results show that the proposed MoD-DNN framework enables effective spectrum calibration and accurate AoA estimation. To the best of our knowledge, this study marks the first successful demonstration of hybrid data-and-model-driven direction finding utilizing readily available commodity 5G gNodeB.
Model-Driven Deep Neural Network for Enhanced AoA Estimation Using 5G gNB
Liu, Shengheng, Li, Xingkang, Mao, Zihuan, Liu, Peng, Huang, Yongming
High-accuracy positioning has become a fundamental enabler for intelligent connected devices. Nevertheless, the present wireless networks still rely on model-driven approaches to achieve positioning functionality, which are susceptible to performance degradation in practical scenarios, primarily due to hardware impairments. Integrating artificial intelligence into the positioning framework presents a promising solution to revolutionize the accuracy and robustness of location-based services. In this study, we address this challenge by reformulating the problem of angle-of-arrival (AoA) estimation into image reconstruction of spatial spectrum. To this end, we design a model-driven deep neural network (MoD-DNN), which can automatically calibrate the angular-dependent phase error. The proposed MoD-DNN approach employs an iterative optimization scheme between a convolutional neural network and a sparse conjugate gradient algorithm. Simulation and experimental results are presented to demonstrate the effectiveness of the proposed method in enhancing spectrum calibration and AoA estimation.
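The spatial spectrum that this line of work treats as an image can be illustrated, under simplified assumptions, with the conventional (Bartlett) beamformer. This is only a toy baseline with an assumed half-wavelength ULA, not the MoD-DNN pipeline itself:

```python
import numpy as np

def bartlett_spectrum(snapshots, grid):
    """Conventional (Bartlett) beamformer: scan steering vectors over an
    angular grid and measure output power; peaks indicate AoAs."""
    n_ant = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    A = np.exp(1j * np.pi * np.outer(np.arange(n_ant), np.sin(grid)))
    return np.real(np.sum(A.conj() * (R @ A), axis=0))

rng = np.random.default_rng(3)
n_ant, theta = 8, np.deg2rad(-30.0)
a = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))
s = rng.standard_normal(100) + 1j * rng.standard_normal(100)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((n_ant, 100))
                            + 1j * rng.standard_normal((n_ant, 100)))
grid = np.deg2rad(np.linspace(-90, 90, 181))
spectrum = bartlett_spectrum(X, grid)
doa_deg = np.rad2deg(grid[np.argmax(spectrum)])
```

With angular-dependent phase errors of the kind the paper targets, the sampled spectrum becomes a distorted "image", which motivates treating calibration as image reconstruction.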
Physically Parameterized Differentiable MUSIC for DoA Estimation with Uncalibrated Arrays
Chatelier, Baptiste, Mateos-Ramos, José Miguel, Corlay, Vincent, Häger, Christian, Crussière, Matthieu, Wymeersch, Henk, Magoarou, Luc Le
Direction of arrival (DoA) estimation is a common sensing problem in radar, sonar, audio, and wireless communication systems. It has gained renewed importance with the advent of the integrated sensing and communication paradigm. To fully exploit the potential of such sensing systems, it is crucial to take into account potential hardware impairments that can negatively impact the obtained performance. This study introduces a joint DoA estimation and hardware impairment learning scheme following a model-based approach. Specifically, a differentiable version of the multiple signal classification (MUSIC) algorithm is derived, allowing efficient learning of the considered impairments. The proposed approach supports both supervised and unsupervised learning strategies, showcasing its practical potential. Simulation results indicate that the proposed method successfully learns significant inaccuracies in both antenna locations and complex gains. Additionally, the proposed method outperforms the classical MUSIC algorithm in the DoA estimation task.
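The classical (non-differentiable) MUSIC baseline that the paper builds on can be sketched as follows, assuming an ideal half-wavelength ULA; the differentiable, impairment-learning version is not reproduced here:

```python
import numpy as np

def music_spectrum(snapshots, n_src, grid):
    """Classical MUSIC: project candidate steering vectors onto the
    noise subspace of the sample covariance and invert the norm."""
    n_ant = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = vecs[:, : n_ant - n_src]        # noise subspace
    A = np.exp(1j * np.pi * np.outer(np.arange(n_ant), np.sin(grid)))
    proj = np.linalg.norm(En.conj().T @ A, axis=0) ** 2
    return 1.0 / proj

rng = np.random.default_rng(0)
n_ant, theta = 8, np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
X = np.outer(a, s) + 0.01 * (rng.standard_normal((n_ant, 200))
                             + 1j * rng.standard_normal((n_ant, 200)))
grid = np.deg2rad(np.linspace(-90, 90, 361))
doa_deg = np.rad2deg(grid[np.argmax(music_spectrum(X, 1, grid))])
```

In the paper's setting, the steering matrix built inside `music_spectrum` would instead be parameterized by learnable antenna positions and complex gains, so that the whole spectrum is differentiable with respect to the impairments.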
Deep Reinforcement Learning Based Joint Downlink Beamforming and RIS Configuration in RIS-aided MU-MISO Systems Under Hardware Impairments and Imperfect CSI
Saglam, Baturay, Gurgunoglu, Doga, Kozat, Suleyman S.
We introduce a novel deep reinforcement learning (DRL) approach to jointly optimize transmit beamforming and reconfigurable intelligent surface (RIS) phase shifts in a multiuser multiple-input single-output (MU-MISO) system to maximize the sum downlink rate under the phase-dependent reflection amplitude model. Our approach addresses the challenge of imperfect channel state information (CSI) and hardware impairments by considering a practical RIS amplitude model. We compare the performance of our approach against a vanilla DRL agent in two scenarios: perfect CSI and phase-dependent RIS amplitudes, and mismatched CSI and ideal RIS reflections. The results demonstrate that the proposed framework significantly outperforms the vanilla DRL agent under mismatch and approaches the gold standard. Our contributions include modifications to the DRL approach to address the joint design of transmit beamforming and phase shifts and the phase-dependent amplitude model. To the best of our knowledge, our method is the first DRL-based approach for the phase-dependent reflection amplitude model in RIS-aided MU-MISO systems. Our findings in this study highlight the potential of our approach as a promising solution to overcome hardware impairments in RIS-aided wireless communication systems.
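A phase-dependent reflection amplitude model of the kind referenced above is commonly parameterized as β(θ) = (1 − β_min)((sin(θ − φ) + 1)/2)^α + β_min, where the configured phase shift θ also determines the element's amplitude. The parameter values below are illustrative, not necessarily those used by the authors:

```python
import numpy as np

def ris_amplitude(theta, beta_min=0.2, phi=0.43 * np.pi, alpha=1.6):
    """Phase-dependent reflection amplitude: the amplitude of each RIS
    element depends on its configured phase shift theta."""
    return (1 - beta_min) * ((np.sin(theta - phi) + 1) / 2) ** alpha + beta_min

thetas = np.linspace(-np.pi, np.pi, 181)
amps = ris_amplitude(thetas)  # amplitudes lie in [beta_min, 1]
```

The coupling between phase and amplitude is what makes the joint beamforming/RIS design harder than with ideal unit-modulus reflections, and it is the non-ideality the DRL agent must learn to work around.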
Antenna Array Calibration Via Gaussian Process Models
Tambovskiy, Sergey S., Fodor, Gábor, Tullberg, Hugo M.
Antenna array calibration is necessary to maintain the high fidelity of beam patterns across a wide range of advanced antenna systems and to ensure channel reciprocity in time division duplexing schemes. Despite the continuous development in this area, most existing solutions are optimised for specific radio architectures, require standardised over-the-air data transmission, or serve as extensions of conventional methods. The diversity of communication protocols and hardware is problematic, since it requires designing or updating the calibration procedures for each new advanced antenna system. In this study, we formulate antenna calibration in an alternative way, namely as a task of functional approximation, and address it via Bayesian machine learning. Our contributions are three-fold. Firstly, we define a parameter space, based on near-field measurements, that captures the underlying hardware impairments corresponding to each radiating element, their positional offsets, as well as the mutual coupling effects between antenna elements. Secondly, Gaussian process regression is used to form models from a sparse set of the aforementioned near-field data. Once deployed, the learned non-parametric models effectively serve to continuously transform the beamforming weights of the system, resulting in corrected beam patterns. Lastly, we demonstrate the viability of the described methodology for both digital and analog beamforming antenna arrays of different scales and discuss its further extension to support real-time operation with dynamic hardware impairments.
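The Gaussian-process-regression step can be illustrated with a one-dimensional toy: fitting a hypothetical smooth per-element phase error from a sparse set of calibration samples. This is a sketch with an assumed RBF kernel, not the paper's near-field measurement pipeline:

```python
import numpy as np

def rbf(x1, x2, length=0.2):
    """Squared-exponential (RBF) kernel between two sets of 1-D inputs."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """GP regression posterior mean of the calibration function."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# Hypothetical smooth phase error versus normalized element position,
# sampled at a few sparse calibration points
pos = np.linspace(0.0, 1.0, 6)
phase_err = 0.3 * np.sin(2 * np.pi * pos)
pos_dense = np.linspace(0.0, 1.0, 33)
phase_hat = gp_predict(pos, phase_err, pos_dense)
```

The appeal of the non-parametric model is visible even in this toy: once fitted, the posterior mean can be queried at any position, which is what allows continuous correction of the beamforming weights.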
Uncovering the Portability Limitation of Deep Learning-Based Wireless Device Fingerprints
Hamdaoui, Bechir, Elmaghbub, Abdurrahman
Abstract--Recent device fingerprinting approaches rely on deep learning to extract device-specific features solely from raw RF signals to identify, classify and authenticate wireless devices. One widely known issue lies in the inability of these approaches to maintain good performance when the training data and testing data are collected under varying deployment domains. The same also happens when considering other varying domains, like channel condition and protocol configuration. We will next demonstrate how the limited portability of fingerprints can impact device fingerprinting.

I. Recently, there has been considerable interest in adopting deep learning-enabled device fingerprinting in automated network authentication mechanisms for emerging large-scale wireless devices (e.g., 6G, IoT, vehicular, etc.) [1], [2]. In essence, device fingerprinting relies on deep learning techniques to extract device-specific features and signatures, solely from raw […]

A. Testbed and Data Collection Setup: To explain these challenges, we used our IoT fingerprinting testbed [7] to run several experiments under varied domains, by training and testing the deep learning models on data collected on different days, using different receivers, and/or under different protocol configurations.
Deep-Learning-Based Device Fingerprinting for Increased LoRa-IoT Security: Sensitivity to Network Deployment Changes
Hamdaoui, Bechir, Elmaghbub, Abdurrahman
Deep-learning-based device fingerprinting has recently been recognized as a key enabler for automated network access authentication. Its robustness to impersonation attacks due to the inherent difficulty of replicating physical features is what distinguishes it from conventional cryptographic solutions. Although device fingerprinting has shown promising performance, its sensitivity to changes in the network operating environment still poses a major limitation. This paper presents an experimental framework that aims to study and overcome the sensitivity of LoRa-enabled device fingerprinting to such changes. We first begin by describing RF datasets we collected using our LoRa-enabled wireless device testbed. We then propose a new fingerprinting technique that exploits out-of-band distortion information caused by hardware impairments to increase the fingerprinting accuracy. Finally, we experimentally study and analyze the sensitivity of LoRa RF fingerprinting to various network setting changes. Our results show that fingerprinting does relatively well when the learning models are trained and tested under the same settings. However, when trained and tested under different settings, these models exhibit moderate sensitivity to channel condition changes and severe sensitivity to protocol configuration and receiver hardware changes when IQ data is used as input. In contrast, when FFT data is used as input, they perform poorly under any change.
Channel Estimation under Hardware Impairments: Bayesian Methods versus Deep Learning
Demir, Özlem Tugfe, Björnson, Emil
This paper considers the impact of general hardware impairments in a multiple-antenna base station and user equipments on the uplink performance. First, the effective channels are analytically derived for distortion-aware receivers when using finite-sized signal constellations. Next, a deep feedforward neural network is designed and trained to estimate the effective channels. Its performance is compared with state-of-the-art distortion-aware and unaware Bayesian linear minimum mean-squared error (LMMSE) estimators. The proposed deep learning approach improves the estimation quality by exploiting impairment characteristics, while LMMSE methods treat distortion as noise.
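A minimal distortion-unaware LMMSE baseline of the kind the paper compares against can be sketched as follows, assuming the simplified observation y = h + n with known channel covariance (pilot structure and distortion modeling omitted):

```python
import numpy as np

def lmmse_estimate(Y, R_h, noise_var):
    """Bayesian LMMSE channel estimate for y = h + n, given the channel
    covariance R_h and white-noise variance; Y holds one channel per row."""
    n = R_h.shape[0]
    W = R_h @ np.linalg.inv(R_h + noise_var * np.eye(n))
    return Y @ W.T  # W is symmetric here, so W.T applies W to each row

rng = np.random.default_rng(1)
n_ant, n_trials, noise_var = 8, 2000, 0.5
# Correlated Rayleigh channel with exponential spatial correlation
R_h = 0.9 ** np.abs(np.subtract.outer(np.arange(n_ant), np.arange(n_ant)))
L = np.linalg.cholesky(R_h)
H = (rng.standard_normal((n_trials, n_ant))
     + 1j * rng.standard_normal((n_trials, n_ant))) / np.sqrt(2) @ L.T
Y = H + np.sqrt(noise_var / 2) * (rng.standard_normal((n_trials, n_ant))
                                  + 1j * rng.standard_normal((n_trials, n_ant)))
mse_ls = np.mean(np.abs(Y - H) ** 2)  # least-squares estimate is Y itself
mse_lmmse = np.mean(np.abs(lmmse_estimate(Y, R_h, noise_var) - H) ** 2)
```

The paper's point is that even this Bayesian baseline treats hardware distortion as extra noise, whereas a trained network can exploit the structure of the distortion itself.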
Making Intelligent Reflecting Surfaces More Intelligent: A Roadmap Through Reservoir Computing
Zhou, Zhou, Bai, Kangjun, Mohammadi, Nima, Yi, Yang, Liu, Lingjia
This article introduces a neural network-based signal processing framework for intelligent reflecting surface (IRS) aided wireless communications systems. By modeling radio-frequency (RF) impairments inside the "meta-atoms" of IRS (including nonlinearity and memory effects), we present an approach that generalizes the entire IRS-aided system as a reservoir computing (RC) system, an efficient recurrent neural network (RNN) operating in a state near the "edge of chaos". This framework enables us to take advantage of the nonlinearity of this "fabricated" wireless environment to overcome link degradation due to model mismatch. Accordingly, the randomness of the wireless channel and RF imperfections are naturally embedded into the RC framework, enabling the internal RC dynamics to lie on the edge of chaos. Furthermore, several practical issues, such as channel state information acquisition, passive beamforming design, and physical layer reference signal design, are discussed.
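A minimal echo state network, the standard instance of reservoir computing, can be sketched as follows. This is a generic illustration of the RC principle (fixed random recurrent dynamics, trained linear readout), not the paper's IRS-embedded reservoir:

```python
import numpy as np

def esn_states(inputs, n_res=50, rho=0.9, seed=2):
    """Echo state network: a fixed random recurrent reservoir is driven
    by the input; only a linear readout on the states is trained."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

# Train a ridge-regularized linear readout to predict the next sample
# of a sine wave from the reservoir states
u = np.sin(0.2 * np.arange(400))
S = esn_states(u[:-1])
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(S.shape[1]), S.T @ u[1:])
mse = np.mean((S @ w_out - u[1:]) ** 2)
```

The spectral radius `rho` close to one keeps the reservoir near the edge of stability; in the article's framing, the IRS meta-atoms and the wireless channel themselves play the role of the fixed nonlinear reservoir.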