Collaborating Authors

 Dahrouj, Hayssam


Personalized Federated Learning for Cellular VR: Online Learning and Dynamic Caching

arXiv.org Artificial Intelligence

Delivering an immersive experience to virtual reality (VR) users through wireless connectivity offers the freedom to engage from anywhere at any time. Nevertheless, it is challenging to ensure seamless wireless connectivity that delivers real-time, high-quality video to VR users. This paper proposes a field of view (FoV)-aware caching scheme for mobile edge computing (MEC)-enabled wireless VR networks. In particular, the FoV of each VR user is cached/prefetched at the base stations (BSs) based on caching strategies tailored to each BS. Specifically, decentralized and personalized federated learning (DP-FL)-based caching strategies with guarantees are presented. Considering VR systems composed of multiple VR devices and BSs, a DP-FL caching algorithm is implemented at each BS to personalize content delivery for VR users. The utilized DP-FL algorithm guarantees a probably approximately correct (PAC) bound on the conditional average cache hit. Further, to reduce the cost of communicating gradients, a one-bit quantization of stochastic gradient descent (OBSGD) is proposed, and a convergence guarantee of $\mathcal{O}(1/\sqrt{T})$ is obtained for the proposed algorithm, where $T$ is the number of iterations. Additionally, to better account for wireless channel dynamics, the FoVs are grouped into multicast or unicast groups based on the number of requesting VR users. The performance of the proposed DP-FL algorithm is validated on a realistic VR head-tracking dataset, and the proposed algorithm is shown to outperform baseline algorithms in terms of average delay and cache hit rate.
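
As a rough illustration of the gradient-compression idea, the sketch below implements sign-based one-bit quantization with error feedback, a common construction for such schemes. It is a minimal sketch only: the paper's exact OBSGD formulation is not reproduced, and the toy least-squares objective and all names are illustrative.

```python
import numpy as np

def one_bit_compress(grad, residual):
    """Quantize a gradient to signs, scaled to preserve magnitude,
    carrying the quantization error forward (error feedback)."""
    corrected = grad + residual             # add leftover error from last step
    scale = np.mean(np.abs(corrected))      # one scalar per vector
    quantized = scale * np.sign(corrected)  # 1 bit per coordinate + one float
    residual = corrected - quantized        # error fed back next iteration
    return quantized, residual

# Toy usage: SGD on a least-squares objective with compressed gradients.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w, residual = np.zeros(5), np.zeros(5)
for t in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    q, residual = one_bit_compress(grad, residual)
    w -= 0.05 * q
```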


UAV-assisted Unbiased Hierarchical Federated Learning: Performance and Convergence Analysis

arXiv.org Artificial Intelligence

The development of the sixth generation (6G) of wireless networks is bound to streamline the transition of computation and learning towards the edge of the network. Hierarchical federated learning (HFL) therefore becomes a key paradigm for distributing learning across edge devices to reach global intelligence. In HFL, each edge device trains a local model using its respective data and transmits the updated model parameters to an edge server for local aggregation. The edge server then transmits the locally aggregated parameters to a central server for global model aggregation. The unreliability of communication channels at the edge and backhaul links, however, remains a bottleneck in assessing the true benefit of HFL-empowered systems. To this end, this paper proposes an unbiased HFL algorithm for unmanned aerial vehicle (UAV)-assisted wireless networks that counteracts the impact of unreliable channels by adjusting the update weights during local and global aggregations at the UAVs and terrestrial base stations (BSs), respectively. To best characterize the unreliability of the channels involved in HFL, we adopt tools from stochastic geometry to determine the success probabilities of the local and global model parameter transmissions. Accounting for such metrics in the proposed HFL algorithm removes the bias towards devices with better channel conditions in the considered UAV-assisted network. The paper further examines the theoretical convergence guarantee of the proposed unbiased UAV-assisted HFL algorithm under adverse channel conditions. An additional benefit of the developed approach is that it allows for optimizing and designing system parameters, e.g., the number of UAVs and their corresponding heights. The results particularly highlight the effectiveness of the proposed unbiased HFL scheme as compared to conventional FL and HFL algorithms.
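
The sketch below shows how inverse-probability weighting can keep a two-level (device-to-UAV, UAV-to-BS) aggregation unbiased in expectation when links fail randomly. It is a minimal sketch under stated assumptions: the success probabilities would come from the paper's stochastic-geometry analysis, which is not reproduced here, and the fixed values and dimensions below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 10

def unbiased_aggregate(updates, success_probs):
    """Average only the updates that survive the channel, scaling each
    by 1/p so the expectation matches the all-links-reliable average."""
    total = np.zeros(dim)
    for u, p in zip(updates, success_probs):
        if rng.random() < p:                 # Bernoulli link success
            total += u / p                   # inverse-probability weighting
    return total / len(updates)

# Two-level (hierarchical) aggregation: devices -> UAVs -> base station.
device_updates = [[rng.normal(size=dim) for _ in range(5)] for _ in range(3)]
p_edge = [[0.9] * 5, [0.7] * 5, [0.8] * 5]   # device-to-UAV success probs
uav_models = [unbiased_aggregate(u, p) for u, p in zip(device_updates, p_edge)]
p_backhaul = [0.95, 0.85, 0.9]               # UAV-to-BS success probs
global_model = unbiased_aggregate(uav_models, p_backhaul)
```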


Robust Communication and Computation using Deep Learning via Joint Uncertainty Injection

arXiv.org Artificial Intelligence

The convergence of communication and computation, along with the integration of machine learning and artificial intelligence, stand as key empowering pillars for the sixth generation of communication systems (6G). This paper considers a network of one base station serving a number of devices simultaneously using spatial multiplexing. The paper then presents an innovative deep learning-based approach to simultaneously manage the transmit and computing powers, alongside computation allocation, amidst uncertainties in both channel and computing state information. More specifically, the paper proposes a robust solution that minimizes the worst-case delay across the served devices subject to computation and power constraints. The paper uses a deep neural network (DNN)-based solution that maps estimated channels and computation requirements to optimized resource allocations. During training, uncertainty samples are injected after the DNN output to jointly account for both communication and computation estimation errors. The DNN is then trained via backpropagation using the robust utility, thus implicitly learning the uncertainty distributions. Our results validate the enhanced robust delay performance of joint uncertainty injection versus the classical DNN approach, especially in high channel and computational uncertainty regimes.
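
A minimal sketch of the training pattern described above: the DNN outputs an allocation, random uncertainty samples perturb the estimated state after the output, and the worst case over those samples is backpropagated. The network sizes, the inverse-rate delay model, and the noise scale are toy assumptions, not the paper's system model.

```python
import torch

num_devices, num_samples = 4, 16
net = torch.nn.Sequential(
    torch.nn.Linear(num_devices, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, num_devices), torch.nn.Softplus())  # positive powers
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    h_est = torch.rand(num_devices) + 0.5          # estimated channel gains
    power = net(h_est)
    # Inject uncertainty AFTER the DNN output: perturb the estimated state
    # with random error samples so the loss reflects estimation errors.
    h_true = h_est + 0.1 * torch.randn(num_samples, num_devices)
    rate = torch.log2(1 + power * h_true.clamp(min=1e-3))
    delay = 1.0 / rate.clamp(min=1e-3)             # toy delay: inverse rate
    loss = delay.max()                             # worst case over samples/devices
    opt.zero_grad(); loss.backward(); opt.step()
```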


Manipulating Predictions over Discrete Inputs in Machine Teaching

arXiv.org Artificial Intelligence

Machine teaching often involves the creation of an optimal (typically minimal) dataset to help a model (referred to as the `student') achieve specific goals given by a teacher. While such studies are abundant in the continuous domain, the effectiveness of machine teaching in the discrete domain remains relatively unexplored. This paper focuses on machine teaching in the discrete domain, specifically on manipulating student models' predictions according to the teacher's goals by efficiently changing the training data. We formulate this task as a combinatorial optimization problem and solve it by proposing an iterative searching algorithm. Our algorithm demonstrates significant numerical merit in scenarios where a teacher attempts to correct erroneous predictions to improve the student model, or to maliciously manipulate the model into misclassifying specific samples as a target class aligned with the teacher's own interests. Experimental results show that our proposed algorithm effectively and efficiently manipulates the predictions of the model, surpassing conventional baselines.
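
To make the combinatorial framing concrete, here is a generic greedy instance of the idea: iteratively pick the single discrete training-data change (a label flip, here) that most increases the student's probability of the teacher's target prediction. This is a hedged sketch, not the paper's algorithm; the logistic-regression student, the budget, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
y = (X[:, 0] > 0).astype(int)
x_target, y_goal = np.array([[0.1, 0, 0, 0]]), 0  # teacher wants class 0

def train(labels):
    return LogisticRegression().fit(X, labels)

y_mod, budget = y.copy(), 5
for _ in range(budget):                    # iterative greedy search
    if train(y_mod).predict(x_target)[0] == y_goal:
        break                              # goal reached, stop early
    best_i, best_score = None, -np.inf
    for i in range(len(y_mod)):            # try flipping each label once
        trial = y_mod.copy(); trial[i] = 1 - trial[i]
        score = train(trial).predict_proba(x_target)[0, y_goal]
        if score > best_score:
            best_i, best_score = i, score
    y_mod[best_i] = 1 - y_mod[best_i]      # commit the best single flip
```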


Characterization of the Global Bias Problem in Aerial Federated Learning

arXiv.org Artificial Intelligence

The mobility of unmanned aerial vehicles (UAVs) enables flexible and customized federated learning (FL) at the network edge. However, the underlying uncertainties in the aerial-terrestrial wireless channel may lead to a biased FL model. In particular, the distribution of the global model and the aggregation of the local updates within the FL rounds at the UAVs are governed by the reliability of the wireless channel. This creates an undesirable bias towards the training data of ground devices with better channel conditions, and vice versa. This paper characterizes the global bias problem of aerial FL in large-scale UAV networks. To this end, the paper proposes a channel-aware distribution and aggregation scheme that enforces equal contribution from all devices in the FL training as a means to resolve the global bias problem. We demonstrate the convergence of the proposed method by experimenting with the MNIST dataset and show its superiority compared to existing methods. The obtained results enable system parameter tuning to mitigate the impact of aerial channel deficiencies on the FL convergence rate.
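
The sketch below illustrates one way a channel-aware scheme can equalize expected contributions: a device's update counts only if both the downlink (global-model delivery) and the uplink (local-update return) succeed, and is scaled by the inverse of that joint success probability. This is an assumption-laden toy, not the paper's scheme; the local-update rule and all probability values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, num_devices = 8, 6
p_down = rng.uniform(0.6, 0.95, num_devices)  # global-model delivery success
p_up = rng.uniform(0.6, 0.95, num_devices)    # local-update return success

global_model = np.zeros(dim)
for rnd in range(50):
    agg = np.zeros(dim)
    for k in range(num_devices):
        # A device contributes only if it received the global model AND
        # its update survived the uplink.
        if rng.random() < p_down[k] and rng.random() < p_up[k]:
            local_update = -0.1 * (global_model - rng.normal(size=dim))
            # Channel-aware scaling: dividing by the joint success
            # probability equalizes every device's expected contribution.
            agg += local_update / (p_down[k] * p_up[k])
    global_model += agg / num_devices
```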


Machine Learning-Based User Scheduling in Integrated Satellite-HAPS-Ground Networks

arXiv.org Artificial Intelligence

Integrated space-air-ground networks promise to offer a valuable solution space for empowering the sixth generation of communication networks (6G), particularly in the context of connecting the unconnected and ultra-connecting the connected. Such a drive toward digital inclusion makes resource management problems, especially those accounting for load-balancing considerations, of particular interest. Conventional model-based optimization methods, however, often fail to meet real-time processing and quality-of-service needs, due to the high heterogeneity of space-air-ground networks and the typical complexity of classical algorithms. Given the promise of artificial intelligence in automating wireless network design and the large-scale heterogeneity of non-terrestrial networks, this paper focuses on showcasing the prospects of machine learning in the context of user scheduling in integrated space-air-ground communications. The paper first overviews the most relevant state of the art on machine learning applications to resource allocation problems, with dedicated attention to space-air-ground networks. The paper then proposes, and shows the benefit of, one specific use case that uses ensembles of deep neural networks for optimizing user scheduling policies in integrated space-high altitude platform station (HAPS)-ground networks. Finally, the paper sheds light on the challenges and open issues that promise to spur the integration of machine learning in space-air-ground networks, namely, online HAPS power adaptation, learning-based channel sensing, data-driven multi-HAPS resource management, and intelligent flying-taxi-empowered systems.
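
A minimal sketch of the ensembling idea in the proposed use case: several independently initialized DNNs each score the candidate users, their outputs are averaged, and the top-scoring user is scheduled. Training on labeled scheduling decisions is omitted, and the feature dimensions and state construction are illustrative assumptions.

```python
import torch

num_users, feat_dim, num_models = 8, 16, 5

def make_model():
    return torch.nn.Sequential(
        torch.nn.Linear(feat_dim, 32), torch.nn.ReLU(),
        torch.nn.Linear(32, num_users))   # one score per candidate user

# An ensemble of independently initialized (and, in practice, independently
# trained) DNNs.
ensemble = [make_model() for _ in range(num_models)]

def schedule(state):
    """Average the ensemble's softmax scores and schedule the top user."""
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(m(state), dim=-1) for m in ensemble]).mean(dim=0)
    return int(probs.argmax())

state = torch.randn(feat_dim)    # e.g., channel gains and queue states
chosen_user = schedule(state)
```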


Weight Vector Tuning and Asymptotic Analysis of Binary Linear Classifiers

arXiv.org Machine Learning

Unlike its intercept, a linear classifier's weight vector cannot be tuned by a simple grid search. Hence, this paper proposes weight vector tuning of a generic binary linear classifier through the parameterization of a decomposition of the discriminant by a scalar which controls the trade-off between conflicting informative and noisy terms. By varying this parameter, the original weight vector is modified in a meaningful way. Applying this method to a number of linear classifiers under a variety of data dimensionality and sample size settings reveals that the classification performance loss due to non-optimal native hyperparameters can be compensated for by weight vector tuning. This yields computational savings, as the proposed tuning method reduces to tuning a scalar, whereas tuning the native hyperparameter may involve repeated weight vector generation along with its burden of optimization, dimensionality reduction, etc., depending on the classifier. It is also found that weight vector tuning significantly improves the performance of Linear Discriminant Analysis (LDA) under high estimation noise. Proceeding from this second finding, an asymptotic study of the misclassification probability of the parameterized LDA classifier is conducted in the growth regime where the data dimensionality and sample size are comparable. Using random matrix theory, the misclassification probability is shown to converge to a quantity that is a function of the true statistics of the data. Additionally, an estimator of the misclassification probability is derived. Finally, computationally efficient tuning of the parameter using this estimator is demonstrated on real data.
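
To give a feel for scalar-parameterized weight vector tuning, the sketch below builds one plausible family of LDA-style weight vectors indexed by a single scalar that trades the noisy pooled-covariance estimate against a well-conditioned identity term, then grid-searches that scalar on held-out data. The paper's exact decomposition into informative and noisy terms is not reproduced; this is an assumed stand-in for the general pattern.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 50, 80                                  # dimensionality comparable to sample size
mu0, mu1 = np.zeros(d), 0.4 * np.ones(d)
X0, X1 = rng.normal(mu0, 1.0, (n, d)), rng.normal(mu1, 1.0, (n, d))

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = np.cov(np.vstack([X0 - m0, X1 - m1]).T)    # noisy pooled covariance estimate

def weight_vector(alpha):
    """One scalar-parameterized family of LDA-style weight vectors."""
    return np.linalg.solve(alpha * S + (1 - alpha) * np.eye(d), m1 - m0)

# Held-out data for estimating the misclassification probability.
T0, T1 = rng.normal(mu0, 1.0, (500, d)), rng.normal(mu1, 1.0, (500, d))

def error(w):
    b = -w @ (m0 + m1) / 2                     # intercept at the class midpoint
    return ((T0 @ w + b > 0).mean() + (T1 @ w + b < 0).mean()) / 2

# Tuning reduces to a 1-D grid search over the scalar alpha.
best_alpha = min(np.linspace(0.01, 1.0, 20),
                 key=lambda a: error(weight_vector(a)))
```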


Signal Processing and Machine Learning Techniques for Terahertz Sensing: An Overview

arXiv.org Artificial Intelligence

Following the recent progress in Terahertz (THz) signal generation and radiation methods, joint THz communications and sensing applications are shaping the future of wireless systems. Towards this end, THz spectroscopy is expected to be carried out on user equipment devices to identify material and gaseous components of interest. THz-specific signal processing techniques should complement this resurgent interest in THz sensing for efficient utilization of the THz band. In this paper, we present an overview of these techniques, with an emphasis on signal pre-processing (standard normal variate normalization, min-max normalization, and Savitzky-Golay filtering), feature extraction (principal component analysis, partial least squares, t-distributed stochastic neighbor embedding, and nonnegative matrix factorization), and classification techniques (support vector machines, k-nearest neighbor, discriminant analysis, and naive Bayes). We also address the effectiveness of deep learning techniques by exploring their promising sensing capabilities at the THz band. Lastly, we investigate the performance and complexity trade-offs of the studied methods in the context of joint communications and sensing, motivate the corresponding use cases, and present a few future research directions in the field.
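
The three processing stages named above chain together naturally; here is a minimal end-to-end sketch using one technique from each stage (Savitzky-Golay filtering, standard normal variate normalization, PCA, and an SVM). The synthetic two-material spectra are a toy stand-in for real THz measurements, and all parameter choices are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

rng = np.random.default_rng(5)
freqs = np.linspace(0.0, 1.0, 200)             # toy stand-in for THz frequency bins

def spectrum(center):
    """Synthetic absorption spectrum: one peak plus measurement noise."""
    return np.exp(-((freqs - center) ** 2) / 0.005) + 0.1 * rng.normal(size=200)

X = np.array([spectrum(c) for c in [0.3] * 40 + [0.6] * 40])  # two "materials"
y = np.array([0] * 40 + [1] * 40)

pipe = make_pipeline(
    # Pre-processing: Savitzky-Golay smoothing, then standard normal variate
    # normalization (each spectrum centered and scaled individually).
    FunctionTransformer(lambda S: savgol_filter(S, window_length=11,
                                                polyorder=3, axis=1)),
    FunctionTransformer(lambda S: (S - S.mean(axis=1, keepdims=True))
                        / S.std(axis=1, keepdims=True)),
    PCA(n_components=5),                       # feature extraction
    SVC(kernel="rbf"))                         # classification
pipe.fit(X[::2], y[::2])                       # train on half, test on the rest
accuracy = pipe.score(X[1::2], y[1::2])
```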