Imran, Muhammad Ali
Use of Parallel Explanatory Models to Enhance Transparency of Neural Network Configurations for Cell Degradation Detection
Mulvey, David, Foh, Chuan Heng, Imran, Muhammad Ali, Tafazolli, Rahim
Abstract--In a previous paper, we have shown that a recurrent neural network (RNN) can be used to detect cellular network radio signal degradations accurately. We unexpectedly found, though, that accuracy gains diminished as we added layers to the RNN. To investigate this, in this paper, we build a parallel model to illuminate and understand the internal operation of neural networks, such as the RNN, which store their internal state in order to process sequential inputs. This model is widely applicable in that it can be used with any input domain where the inputs can be represented by a Gaussian mixture. By looking at the RNN processing from a probability density function perspective, we are able to show how each layer of the RNN transforms the input distributions to increase detection accuracy. At the same time we also discover a side effect acting to limit the improvement in accuracy. To demonstrate the fidelity of the model we validate it against each stage of RNN processing as well as the output predictions. As a result, we have been able to explain the reasons for the RNN performance limits with useful insights for future designs for RNNs and similar types of neural network.

In the latest generation of cellular networks, 5G, the emergence of sophisticated new techniques such as large-scale MIMO and multicarrier operation has resulted in rapid growth in the total number of radio access network (RAN) configuration parameters. This carries with it a considerable risk in terms of potential misconfiguration and is likely to significantly add to the workload for network management teams. Fortunately, the recent emergence of powerful machine learning techniques has the potential to counter this by alerting operators to issues which might not otherwise be apparent and providing assistance to resolve them in a timely manner.
In our earlier work [1], we showed that it is possible to apply a recurrent neural network (RNN) to address an issue of particular concern to mobile network operators, namely how to detect cell performance degradations which are not being reported to the network control centre but are impairing the quality of service perceived by the users.
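The parallel explanatory model rests on representing inputs as a Gaussian mixture and tracking how each layer reshapes that distribution. The sketch below is an illustration under assumed parameters, not the paper's model: it evaluates a two-component mixture density and uses Monte Carlo sampling to see how a tanh activation, the kind of nonlinearity found inside an RNN layer, transforms the input distribution.

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of a single Gaussian component."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, components):
    """components: list of (weight, mu, sigma); weights sum to 1."""
    return sum(w * gaussian_pdf(x, mu, s) for w, mu, s in components)

# Hypothetical two-class input: 'healthy' vs 'degraded' signal levels.
mixture = [(0.7, 0.0, 1.0), (0.3, 3.0, 0.5)]

def propagate_through_tanh(components, n_samples=10000, seed=0):
    """Monte Carlo view of how one tanh nonlinearity reshapes the mixture."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        r, acc = rng.random(), 0.0
        for w, mu, s in components:
            acc += w
            if r <= acc:
                out.append(math.tanh(rng.gauss(mu, s)))
                break
    mean = sum(out) / len(out)
    var = sum((v - mean) ** 2 for v in out) / len(out)
    return mean, var
```

Because tanh saturates, the 'degraded' component is squashed toward 1 while the 'healthy' component stays near 0, which is the distribution-level separation effect the paper analyses.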
Intelligent Mode-switching Framework for Teleoperation
Kizilkaya, Burak, She, Changyang, Zhao, Guodong, Imran, Muhammad Ali
Teleoperation can be very difficult due to limited perception, high communication latency, and limited degrees of freedom (DoFs) at the operator side. Autonomous teleoperation is proposed to overcome this difficulty by predicting user intentions and performing some parts of the task autonomously to decrease the demand on the operator and increase the task completion rate. However, decision-making for mode-switching is generally assumed to be done by the operator, which brings an extra DoF to be controlled by the operator and introduces extra mental demand. On the other hand, the communication perspective is not investigated in the current literature, although communication imperfections and resource limitations are the main bottlenecks for teleoperation. In this study, we propose an intelligent mode-switching framework by jointly considering mode-switching and communication systems. User intention recognition is done at the operator side. Based on user intention recognition, a deep reinforcement learning (DRL) agent is trained and deployed at the operator side to seamlessly switch between autonomous and teleoperation modes. A real-world data set is collected from our teleoperation testbed to train both user intention recognition and DRL algorithms. Our results show that the proposed framework can achieve up to 50% communication load reduction with improved task completion probability.
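The mode-switching decision can be illustrated with a toy tabular learner, a deliberately simplified stand-in for the paper's DRL agent; the states, reward values, and hyperparameters below are assumptions. States discretize intention-recognition confidence, and the agent learns from experience when switching to autonomy is worth the risk.

```python
import random

# States: discretized intention-recognition confidence, 0 (low) to 9 (high).
# Actions: 0 = teleoperation mode, 1 = autonomous mode.
# Toy reward (an assumption, not the paper's): autonomy pays off only
# when intention confidence is high; otherwise manual control is safer.
def reward(state, action):
    if action == 1:  # autonomous
        return 1.0 if state >= 7 else -1.0
    return 0.2  # teleoperation: modest but safe

def train_q(episodes=5000, alpha=0.1, eps=0.1, seed=1):
    """Epsilon-greedy one-step (bandit-style) Q updates per confidence state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(10)]
    for _ in range(episodes):
        s = rng.randrange(10)
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
        q[s][a] += alpha * (reward(s, a) - q[s][a])
    return q

q = train_q()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(10)]
```

The learned policy switches to autonomy only at high-confidence states, which is the behaviour the deployed agent is trained to exhibit seamlessly.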
Blockchain-enabled Clustered and Scalable Federated Learning (BCS-FL) Framework in UAV Networks
Hafeez, Sana, Mohjazi, Lina, Imran, Muhammad Ali, Sun, Yao
Privacy, scalability, and reliability are significant challenges in unmanned aerial vehicle (UAV) networks as distributed systems, especially when employing machine learning (ML) technologies with substantial data exchange. Recently, the application of federated learning (FL) to UAV networks has improved collaboration, privacy, resilience, and adaptability, making it a promising framework for UAV applications. However, implementing FL for UAV networks introduces drawbacks such as communication overhead, synchronization issues, scalability limitations, and resource constraints. To address these challenges, this paper presents the Blockchain-enabled Clustered and Scalable Federated Learning (BCS-FL) framework for UAV networks. This improves the decentralization, coordination, scalability, and efficiency of FL in large-scale UAV networks. The framework partitions UAV networks into separate clusters, coordinated by cluster head UAVs (CHs), to establish a connected graph. Clustering enables efficient coordination of updates to the ML model. Additionally, hybrid inter-cluster and intra-cluster model aggregation schemes generate the global model after each training round, improving collaboration and knowledge sharing among clusters. The numerical findings illustrate the achievement of convergence while also emphasizing the trade-offs between the effectiveness of training and communication efficiency.
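The hybrid aggregation idea can be sketched in a few lines; this is an unweighted simplification, and the cluster names and parameter vectors are hypothetical. Each cluster head averages its members' updates (intra-cluster), then the cluster heads average among themselves (inter-cluster) to form the global model.

```python
def average(models):
    """Element-wise mean of a list of parameter vectors."""
    n = len(models)
    return [sum(vals) / n for vals in zip(*models)]

# Hypothetical setup: two clusters of UAV client updates (flat parameter vectors).
clusters = {
    "CH_A": [[1.0, 2.0], [3.0, 4.0]],
    "CH_B": [[5.0, 6.0]],
}

# Intra-cluster: each cluster head averages its members' updates.
ch_models = {ch: average(members) for ch, members in clusters.items()}

# Inter-cluster: cluster heads exchange and average to form the global model.
global_model = average(list(ch_models.values()))
```

Weighting each cluster's contribution by its data volume would be the natural refinement of this unweighted sketch.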
A Wireless AI-Generated Content (AIGC) Provisioning Framework Empowered by Semantic Communication
Cheng, Runze, Sun, Yao, Niyato, Dusit, Zhang, Lan, Zhang, Lei, Imran, Muhammad Ali
Generative AI applications are recently catering to a vast user base by creating diverse and high-quality AI-generated content (AIGC). With the proliferation of mobile devices and rapid growth of mobile traffic, providing ubiquitous access to high-quality AIGC services via wireless communication networks is becoming the future direction for AIGC products. However, it is challenging to provide optimal AIGC services in wireless networks with unstable channels, limited bandwidth resources, and unevenly distributed computational resources. To tackle these challenges, we propose a semantic communication (SemCom)-empowered AIGC (SemAIGC) generation and transmission framework, where only semantic information of the content rather than all the binary bits should be extracted and transmitted by using SemCom. Specifically, SemAIGC integrates diffusion-based models within the semantic encoder and decoder for efficient content generation and flexible adjustment of the computing workload of both transmitter and receiver. Meanwhile, we integrate a resource-aware workload trade-off (ROOT) scheme into the SemAIGC framework to intelligently decide transmitter/receiver workload, thus adjusting the utilization of computational resources according to service requirements. Simulations verify the superiority of our proposed SemAIGC framework in terms of latency and content quality compared to conventional approaches.
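One way to picture the ROOT-style workload decision is as splitting diffusion denoising steps between transmitter and receiver according to available compute. This is a heavy simplification; the proportional-split rule below is an assumption for illustration, not the paper's scheme.

```python
def split_denoising_steps(total_steps, tx_capacity, rx_capacity):
    """Split diffusion denoising steps between transmitter and receiver
    in proportion to available compute (an illustrative rule only)."""
    tx_share = tx_capacity / (tx_capacity + rx_capacity)
    tx_steps = round(total_steps * tx_share)
    return tx_steps, total_steps - tx_steps
```

A transmitter with three times the receiver's compute would then run three quarters of the denoising steps, shifting work away from the constrained side.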
Task-Oriented Cross-System Design for Timely and Accurate Modeling in the Metaverse
Meng, Zhen, Chen, Kan, Diao, Yufeng, She, Changyang, Zhao, Guodong, Imran, Muhammad Ali, Vucetic, Branka
In this paper, we establish a task-oriented cross-system design framework to minimize the required packet rate for timely and accurate modeling of a real-world robotic arm in the Metaverse, where sensing, communication, prediction, control, and rendering are considered. To optimize a scheduling policy and prediction horizons, we design a Constraint Proximal Policy Optimization (C-PPO) algorithm by integrating domain knowledge from relevant systems into the advanced reinforcement learning algorithm, Proximal Policy Optimization (PPO). Specifically, the Jacobian matrix for analyzing the motion of the robotic arm is included in the state of the C-PPO algorithm, and the Conditional Value-at-Risk (CVaR) of the state-value function characterizing the long-term modeling error is adopted in the constraint. Besides, the policy is represented by a two-branch neural network determining the scheduling policy and the prediction horizons, respectively. To evaluate our algorithm, we build a prototype including a real-world robotic arm and its digital model in the Metaverse. The experimental results indicate that domain knowledge helps to reduce the convergence time and the required packet rate by up to 50%, and the cross-system design framework outperforms a baseline framework in terms of the required packet rate and the tail distribution of the modeling error.
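The CVaR constraint can be made concrete with its standard empirical estimator (a textbook definition, not code from the paper): the mean of the worst (1 - alpha) fraction of modeling-error samples, i.e. the expected error given that the error falls in the tail.

```python
def cvar(samples, alpha=0.95):
    """Empirical Conditional Value-at-Risk: mean of the worst (1 - alpha) tail."""
    s = sorted(samples)
    k = max(1, int(round(len(s) * (1 - alpha))))  # size of the tail
    tail = s[-k:]
    return sum(tail) / len(tail)
```

Constraining CVaR rather than the mean error is what lets the framework control the tail distribution of the modeling error mentioned in the results.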
Enhancing Reliability in Federated mmWave Networks: A Practical and Scalable Solution using Radar-Aided Dynamic Blockage Recognition
Al-Quraan, Mohammad, Zoha, Ahmed, Centeno, Anthony, Salameh, Haythem Bany, Muhaidat, Sami, Imran, Muhammad Ali, Mohjazi, Lina
This article introduces a new method to improve the dependability of millimeter-wave (mmWave) and terahertz (THz) network services in dynamic outdoor environments. In these settings, line-of-sight (LoS) connections are easily interrupted by moving obstacles like humans and vehicles. The proposed approach, coined as Radar-aided Dynamic blockage Recognition (RaDaR), leverages radar measurements and federated learning (FL) to train a dual-output neural network (NN) model capable of simultaneously predicting blockage status and time. This enables determining the optimal point for proactive handover (PHO) or beam switching, thereby reducing the latency introduced by 5G new radio procedures and ensuring high quality of experience (QoE). The framework employs radar sensors to monitor and track objects' movement, generating range-angle and range-velocity maps that are useful for scene analysis and predictions. Moreover, FL provides additional benefits such as privacy protection, scalability, and knowledge sharing. The framework is assessed using an extensive real-world dataset comprising mmWave channel information and radar data. The evaluation results show that RaDaR substantially enhances network reliability, achieving an average success rate of 94% for PHO compared to existing reactive HO procedures that lack proactive blockage prediction. Additionally, RaDaR maintains a superior QoE by ensuring sustained high throughput levels and minimising PHO latency.
WiserVR: Semantic Communication Enabled Wireless Virtual Reality Delivery
Xia, Le, Sun, Yao, Liang, Chengsi, Feng, Daquan, Cheng, Runze, Yang, Yang, Imran, Muhammad Ali
Virtual reality (VR) over wireless is expected to be one of the killer applications in next-generation communication networks. Nevertheless, the huge data volume along with stringent requirements on latency and reliability under limited bandwidth resources makes untethered wireless VR delivery increasingly challenging. Such bottlenecks, therefore, motivate this work to seek the potential of using semantic communication, a new paradigm that promises to significantly ease the resource pressure, for efficient VR delivery. To this end, we propose a novel framework, namely WIreless SEmantic deliveRy for VR (WiserVR), for delivering consecutive 360° video frames to VR users. Specifically, deep learning-based multiple modules are well-devised for the transceiver in WiserVR to realize high-performance feature extraction and semantic recovery. Among them, we dedicatedly develop a concept of semantic location graph and leverage the joint-semantic-channel-coding method with knowledge sharing to not only substantially reduce communication latency, but also to guarantee adequate transmission reliability and resilience under various channel states. Moreover, implementation of WiserVR is presented, followed by corresponding initial simulations for performance evaluation compared with benchmarks. Finally, we discuss several open issues and offer feasible solutions to unlock the full potential of WiserVR.
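The abstract does not detail the semantic location graph, but one plausible reading, given purely as an illustration with invented regions and probabilities, is a transition graph over viewport regions of the 360° frame that lets the transmitter prioritise the regions a user is semantically likely to look at next.

```python
# Hypothetical "semantic location graph": nodes are viewport regions of a
# 360-degree frame; edge weights are transition likelihoods (invented here),
# so the transmitter can prioritise the most likely next region.
graph = {
    "front": {"front": 0.6, "left": 0.2, "right": 0.2},
    "left":  {"front": 0.5, "left": 0.4, "right": 0.1},
    "right": {"front": 0.5, "left": 0.1, "right": 0.4},
}

def most_likely_next(region):
    """Return the region with the highest transition likelihood."""
    return max(graph[region], key=graph[region].get)
```

Prioritising likely regions means scarce bandwidth is spent where the user's attention is expected, which is one way such a graph could cut latency.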
Task-Oriented Prediction and Communication Co-Design for Haptic Communications
Kizilkaya, Burak, She, Changyang, Zhao, Guodong, Imran, Muhammad Ali
Prediction has recently been considered as a promising approach to meet low-latency and high-reliability requirements in long-distance haptic communications. However, most of the existing methods did not take features of tasks and the relationship between prediction and communication into account. In this paper, we propose a task-oriented prediction and communication co-design framework, where the reliability of the system depends on prediction errors and packet losses in communications. The goal is to minimize the required radio resources subject to the low-latency and high-reliability requirements of various tasks. Specifically, we consider the just noticeable difference (JND) as a performance metric for the haptic communication system. We collect experiment data from a real-world teleoperation testbed and use time-series generative adversarial networks (TimeGAN) to generate a large amount of synthetic data. This allows us to obtain the relationship between the JND threshold, prediction horizon, and the overall reliability including communication reliability and prediction reliability. We take 5G New Radio as an example to demonstrate the proposed framework and optimize bandwidth allocation and data rates of devices. Our numerical and experimental results show that the proposed framework can reduce wireless resource consumption by up to 77.80% compared with a task-agnostic benchmark.
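The coupling between communication reliability and prediction reliability can be written down directly. This is a textbook-style decomposition under an independence assumption, not the paper's exact formulation: a sample is useful only if it is both delivered and its prediction error stays within the JND threshold.

```python
def overall_reliability(p_comm_loss, p_pred_exceed_jnd):
    """Overall reliability when a sample is useful only if it is delivered
    AND its prediction error stays within the JND threshold.
    Assumes independent communication and prediction failures."""
    return (1 - p_comm_loss) * (1 - p_pred_exceed_jnd)
```

The co-design trade-off follows from this product: a longer prediction horizon relaxes the latency demand on the radio link (allowing lower p_comm_loss with fewer resources) but raises p_pred_exceed_jnd, so the two must be balanced jointly.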
Edge-Native Intelligence for 6G Communications Driven by Federated Learning: A Survey of Trends and Challenges
Al-Quraan, Mohammad, Mohjazi, Lina, Bariah, Lina, Centeno, Anthony, Zoha, Ahmed, Muhaidat, Sami, Debbah, Mérouane, Imran, Muhammad Ali
The unprecedented surge of data volume in wireless networks empowered with artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by collecting datasets and training models centrally. However, this conventional training technique encompasses two challenges: (i) high communication and energy cost due to increased data communication, (ii) threats to data privacy from allowing untrusted parties to utilise this information. Recently, in light of these limitations, a new emerging technique, coined as federated learning (FL), arose to bring ML to the edge of wireless networks. FL can extract the benefits of data silos by training a global model in a distributed manner, orchestrated by the FL server. FL exploits both decentralised datasets and computing resources of participating clients to develop a generalised ML model without compromising data privacy. In this article, we introduce a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored in prospective beyond-fifth-generation (B5G) and sixth-generation (6G) communication systems. The purpose of this survey is to provide an overview of the state of the art of FL applications in key wireless technologies that will serve as a foundation to establish a firm understanding of the topic. Lastly, we offer a road forward for future research directions.
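The server-orchestrated distributed training described here is typically realised with FedAvg-style aggregation: each client trains on its local data and the server combines the resulting parameters, weighted by local dataset size. A minimal sketch, with flat parameter vectors standing in for real models:

```python
def fedavg(client_models, client_sizes):
    """FedAvg-style aggregation: weight each client's parameters by its
    local dataset size, so larger datasets influence the global model more."""
    total = sum(client_sizes)
    dim = len(client_models[0])
    agg = [0.0] * dim
    for model, n in zip(client_models, client_sizes):
        for i, p in enumerate(model):
            agg[i] += p * n / total
    return agg
```

Only these parameter vectors, never the raw datasets, leave the clients, which is the privacy property the survey emphasises.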
Smart and Secure CAV Networks Empowered by AI-Enabled Blockchain: Next Frontier for Intelligent Safe-Driving Assessment
Xia, Le, Sun, Yao, Swash, Rafiq, Mohjazi, Lina, Zhang, Lei, Imran, Muhammad Ali
Securing a safe-driving environment for connected and autonomous vehicles (CAVs) continues to be a widespread concern despite various sophisticated functions delivered by artificial intelligence for in-vehicle devices. Besides, diverse malicious network attacks become ubiquitous along with the worldwide implementation of the Internet of Vehicles, which exposes a range of reliability and privacy threats for managing data in CAV networks. Combined with the fact that CAVs remain limited in handling intensive computation tasks, this creates a pressing demand for an efficient assessment system to guarantee autonomous driving safety without compromising data security. To this end, we propose in this article a novel framework of Blockchain-enabled intElligent Safe-driving assessmenT (BEST) to offer a smart and reliable approach for conducting safe driving supervision while protecting vehicular information. Specifically, a promising solution of exploiting a long short-term memory algorithm is first introduced in detail for an intElligent Safe-driving assessmenT (EST) scheme. To further facilitate the EST, we demonstrate how a distributed blockchain achieves adequate efficiency, trustworthiness and resilience with an adopted Byzantine fault tolerance-based delegated proof-of-stake consensus mechanism. Moreover, several challenges and discussions regarding the future research of this BEST architecture are presented.
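The consensus side can be illustrated with the standard Byzantine fault tolerance quorum arithmetic (the general bounds, not the paper's specific protocol): with n = 3f + 1 delegates, the system tolerates f faulty nodes, and a block needs 2f + 1 confirmations to be finalised.

```python
def bft_threshold(n_delegates):
    """Minimum confirmations for Byzantine fault tolerance: with
    n = 3f + 1 delegates, tolerate f faulty nodes; need 2f + 1 votes."""
    f = (n_delegates - 1) // 3
    return 2 * f + 1

def block_confirmed(votes, n_delegates):
    """A block is finalised once it gathers a 2f + 1 quorum of votes."""
    return votes >= bft_threshold(n_delegates)
```

Restricting consensus to a small elected delegate set is what keeps this quorum round cheap enough for computation-limited CAVs, which is the motivation for combining BFT with delegated proof-of-stake.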