Traditional vs Deep Learning Algorithms in the Telecom Industry -- Cloud Architecture and Algorithm Categorization

#artificialintelligence

The unprecedented growth of mobile devices, applications, and services has placed enormous demands on mobile and wireless networking infrastructure. Rapid research and development of 5G systems has found ways to support mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources so as to maximize user experience. However, inference over heterogeneous mobile data gathered from distributed devices is challenging due to computational and battery power limitations. ML models deployed on edge servers are therefore constrained to be lightweight, trading off model complexity against accuracy, and techniques such as model compression, pruning, and quantization are widely used.
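For illustration, here is a minimal sketch of the pruning and quantization steps the passage mentions, assuming PyTorch and a toy fully connected model that stands in for a real edge inference model; it is not any specific vendor's pipeline:

```python
# Minimal sketch (assumptions: PyTorch; a toy model standing in for a real
# edge inference model). Illustrates pruning + dynamic quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Pruning: zero out the 50% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Dynamic quantization: store Linear weights as int8 to shrink the model
# and speed up CPU inference on resource-constrained edge servers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print(quantized(x).shape)  # torch.Size([1, 10])
```

In practice the two steps are complementary: pruning reduces the effective parameter count, while quantization shrinks storage and speeds up integer arithmetic on edge hardware.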


Network Monitoring Meets Deep Learning

#artificialintelligence

People are creatures of habit. Your morning routine, what you order at Starbucks, the route you take to work, and your late-night snacking are all driven by habits. These habits also appear in the digital world: how you browse social media, what sites you visit, and, increasingly, how you conduct your work. When habits change in the physical world, it can be a sign of something good.


Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence

arXiv.org Artificial Intelligence

Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth-generation (5G) networks and beyond. However, its uncertainty, arising from the dynamics and randomness of mobile devices, wireless channels, and edge networks, results in high-dimensional, nonconvex, nonlinear, and NP-hard optimization problems. Thanks to evolved reinforcement learning (RL), a trained agent that iteratively interacts with this dynamic and random environment can intelligently obtain the optimal policy in MEC. Furthermore, evolved variants such as deep RL (DRL) can achieve higher convergence speed and learning accuracy by using parametric approximation for the large-scale state-action space. This paper provides a comprehensive review of RL-enabled MEC and offers insight for development in this area. More importantly, in connection with free mobility, dynamic channels, and distributed services, the MEC challenges that can be addressed by different kinds of RL algorithms are identified, along with how RL solutions resolve them in diverse mobile applications. Finally, open challenges are discussed to provide helpful guidance for future research on RL training and learning for MEC.
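To make the RL interaction loop concrete, here is a minimal sketch of tabular Q-learning for a toy offloading decision, assuming a hypothetical two-action environment (local execution vs. offload) with a discretized channel-quality state and reward equal to negative task latency; it illustrates the general setup the survey describes, not any specific surveyed algorithm:

```python
# Minimal sketch (assumptions: a toy offloading environment; all dynamics
# below are illustrative, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2        # channel quality levels; 0 = local, 1 = offload
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy latency model: offloading pays off only when the channel is good."""
    if action == 1:  # offload: transmission delay shrinks as channel improves
        latency = 10.0 / (state + 1) + rng.normal(0, 0.1)
    else:            # local execution: fixed compute delay
        latency = 4.0 + rng.normal(0, 0.1)
    next_state = rng.integers(n_states)   # channel evolves randomly
    return next_state, -latency           # reward = negative latency

state = rng.integers(n_states)
for _ in range(20000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

# Learned policy: offload (1) on good channels, compute locally (0) otherwise.
print(Q.argmax(axis=1))
```

DRL variants replace the Q-table with a neural network so the same loop scales to the high-dimensional state-action spaces the abstract mentions.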


VGAER: graph neural network reconstruction based community detection

arXiv.org Artificial Intelligence

Community detection is a fundamental and important problem in network science, but there are only a few community detection algorithms based on graph neural networks, and unsupervised approaches among them are almost entirely absent. By fusing high-order modularity information with network features, this paper proposes, for the first time, a Variational Graph AutoEncoder Reconstruction based community detection method, VGAER, and also gives a non-probabilistic version. Neither requires any prior information. We have carefully designed the input features, decoder, and downstream tasks for the community detection setting; these designs are concise, natural, and perform well (NMI values under our design improve by 59.1%–565.9%). In a series of experiments across a wide range of datasets and against advanced methods, VGAER achieves superior performance and shows strong competitiveness and potential with a simpler design. Finally, we report convergence analysis and t-SNE visualization results, which clearly depict VGAER's stable performance and powerful ability to capture network modularity. Our code is available at https://github.com/qcydm/VGAER.
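To illustrate the core idea of reconstructing modularity, here is a minimal sketch that trains a single linear encoder (a hypothetical stand-in for the paper's GNN) to reconstruct the modularity matrix B = A - kk^T/2m via an inner-product decoder; it follows the abstract's description, not the released code at the URL above:

```python
# Minimal sketch (assumptions: a tiny toy graph of two cliques; a linear
# encoder stands in for a GNN; identity features, as is common when
# node attributes are absent).
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Toy graph: two 4-node cliques joined by one edge.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

k = A.sum(axis=1)                      # degree vector
m = A.sum() / 2                        # number of edges
B = torch.tensor(A - np.outer(k, k) / (2 * m), dtype=torch.float32)

encoder = nn.Linear(8, 2, bias=False)  # node embeddings Z
opt = torch.optim.Adam(encoder.parameters(), lr=0.05)
for _ in range(500):
    Z = encoder(torch.eye(8))
    loss = ((Z @ Z.T) - B).pow(2).mean()  # inner-product decoder reconstructs B
    opt.zero_grad(); loss.backward(); opt.step()

# Clustering the embeddings should separate the two cliques (nodes 0-3 vs 4-7).
Z = encoder(torch.eye(8)).detach().numpy()
print(KMeans(n_clusters=2, n_init=10).fit_predict(Z))
```

The variational version of the paper would replace the deterministic encoder with mean and variance heads and add a KL regularizer, but the reconstruction target stays the same.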


Machine Learning and Artificial Intelligence in Next-Generation Wireless Network

arXiv.org Artificial Intelligence

Due to advances in technology, the next-generation wireless network will be diverse, complicated, and shaped by the changing demands of consumers. Current network operator methodologies and approaches are traditional and cannot help next-generation networks utilize their resources appropriately. The limited capability of traditional tools will not allow network providers to fulfill the demands of future subscribers. This paper therefore focuses on machine learning, automation, artificial intelligence, and big data analytics for improving the capacity and effectiveness of next-generation wireless networks. It discusses the role of these technologies in improving the service and performance of future network providers, and finds that machine learning, big data analytics, and artificial intelligence will help make the next-generation wireless network self-adaptive, self-aware, prescriptive, and proactive. The paper concludes that future wireless network operators will not be able to operate without shifting their operational frameworks to AI and machine learning technologies.


PhD position Deep learning models for water network monitoring (1.0 FTE)

#artificialintelligence

The DiTEC (Digital Twin for Evolutionary Changes in water networks) project proposes an evolutionary approach to real-time monitoring of sensor-rich critical infrastructure that detects inconsistencies between measured sensor data and the expected situation, and performs real-time model updates without needing additional calibration. Deep learning will be applied to create a data-driven simulation of the system. The approach is applied to water networks, where, in case of leaks, valve degradation, or sensor faults, the model will be adapted to the degraded network until maintenance takes place, which can take a long time. The project will analyse the effect of different malfunctions on data readings and construct a mitigating mechanism that allows the data to remain usable, albeit in a limited capacity. As part of the DiTEC project, the PhD student's role will be to analyse historical and real-time sensor data, covering parameters such as water speed, pressure, quality, and network topology, and to construct deep learning models (such as CNNs and LSTMs) that explain and predict the short- and long-term behavior of the network.
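As a rough illustration of the kind of LSTM forecaster the advertisement mentions, here is a minimal sketch assuming PyTorch and a synthetic daily pressure pattern standing in for real sensor data; a forecast-versus-measurement gap is the anomaly signal the project description suggests:

```python
# Minimal sketch (assumptions: PyTorch; synthetic single-sensor pressure data;
# next-step forecasting from a sliding window).
import torch
import torch.nn as nn

# Synthetic pressure signal with a daily cycle plus noise.
t = torch.arange(0, 2000, dtype=torch.float32)
pressure = torch.sin(2 * torch.pi * t / 96) + 0.05 * torch.randn_like(t)

window = 48
X = torch.stack([pressure[i:i + window] for i in range(len(pressure) - window)])
y = pressure[window:]

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window)
        out, _ = self.lstm(x.unsqueeze(-1))
        return self.head(out[:, -1]).squeeze(-1)

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):
    loss = loss_fn(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"epoch {epoch}: mse={loss.item():.4f}")

# A large gap between predicted and measured pressure would flag a possible
# leak, valve degradation, or sensor fault.
```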


Modeling Live Video Streaming: Real-Time Classification, QoE Inference, and Field Evaluation

arXiv.org Artificial Intelligence

Social media, professional sports, and video games are driving rapid growth in live video streaming, on platforms such as Twitch and YouTube Live. Live streaming experience is very susceptible to short-time-scale network congestion since client playback buffers are often no more than a few seconds. Unfortunately, identifying such streams and measuring their QoE for network management is challenging, since content providers largely use the same delivery infrastructure for live and video-on-demand (VoD) streaming, and packet inspection techniques (including SNI/DNS query monitoring) cannot always distinguish between the two. In this paper, we design, build, and deploy ReCLive: a machine learning method for live video detection and QoE measurement based on network-level behavioral characteristics. Our contributions are four-fold: (1) We analyze about 23,000 video streams from Twitch and YouTube, and identify key features in their traffic profile that differentiate live and on-demand streaming. We release our traffic traces as open data to the public; (2) We develop an LSTM-based binary classifier model that distinguishes live from on-demand streams in real-time with over 95% accuracy across providers; (3) We develop a method that estimates QoE metrics of live streaming flows in terms of resolution and buffer stall events with overall accuracies of 93% and 90%, respectively; and (4) Finally, we prototype our solution, train it in the lab, and deploy it in a live ISP network serving more than 7,000 subscribers. Our method provides ISPs with fine-grained visibility into live video streams, enabling them to measure and improve user experience.
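To show the shape of an LSTM-based live/VoD classifier like the one the paper describes, here is a minimal sketch assuming PyTorch and hypothetical per-second traffic features (e.g., download bytes and packet counts); the random tensors stand in for real flow traces, and the feature set is illustrative rather than the paper's:

```python
# Minimal sketch (assumptions: PyTorch; hypothetical per-second traffic
# features; placeholder data and labels).
import torch
import torch.nn as nn

class LiveVoDClassifier(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, seconds, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)   # logit: live vs on-demand

model = LiveVoDClassifier()
flows = torch.randn(8, 60, 4)                # 8 flows, 60 s of per-second features
labels = torch.randint(0, 2, (8,)).float()   # 1 = live, 0 = VoD (placeholder)

loss = nn.BCEWithLogitsLoss()(model(flows), labels)
loss.backward()                              # one illustrative training step
print(torch.sigmoid(model(flows)).detach())  # probabilities of "live"
```

Reading only the final hidden state keeps the classifier usable in real time: the model can emit an updated decision after every second of observed traffic.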


Predicting Bandwidth Utilization on Network Links Using Machine Learning

arXiv.org Artificial Intelligence

Predicting bandwidth utilization on network links can be extremely useful for detecting congestion so that it can be corrected before it occurs. In this paper, we present a solution to predict bandwidth utilization between different network links with very high accuracy. A simulated network is created to collect data on the performance of the network links at every interface. These data are processed and expanded with feature engineering to create a training set. We evaluate and compare three machine learning algorithms, namely ARIMA (AutoRegressive Integrated Moving Average), MLP (Multi-Layer Perceptron), and LSTM (Long Short-Term Memory), for predicting future bandwidth consumption. The LSTM outperforms ARIMA and MLP with very accurate predictions, rarely exceeding a 3% error (versus 40% for ARIMA and 20% for the MLP). We then show that the proposed solution can be used in real time, with reactions managed by a Software-Defined Networking (SDN) platform.
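The sliding-window formulation behind such comparisons is easy to sketch; the following assumes scikit-learn and a synthetic utilization series standing in for the paper's simulated link data, and shows the MLP baseline (the ARIMA and LSTM variants would consume the same windows):

```python
# Minimal sketch (assumptions: scikit-learn; synthetic utilization data;
# the window size and model sizes are illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(3000)
util = 50 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)  # % of capacity

window = 12                                   # past 12 samples -> next sample
X = np.stack([util[i:i + window] for i in range(util.size - window)])
y = util[window:]
split = int(0.8 * len(X))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((pred - y[split:]) / y[split:])) * 100
print(f"MAPE on held-out windows: {mape:.1f}%")   # error metric comparable to the paper's
```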


Edge-Native Intelligence for 6G Communications Driven by Federated Learning: A Survey of Trends and Challenges

arXiv.org Artificial Intelligence

The unprecedented surge of data volume in wireless networks empowered with artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by collecting datasets and training models centrally. However, this conventional training technique faces two challenges: (i) high communication and energy costs due to increased data communication, and (ii) threats to data privacy, since untrusted parties can access the collected information. Recently, in light of these limitations, an emerging technique coined federated learning (FL) has arisen to bring ML to the edge of wireless networks. FL unlocks the benefits of data silos by training a global model in a distributed manner, orchestrated by an FL server. FL exploits both the decentralised datasets and the computing resources of participating clients to develop a generalised ML model without compromising data privacy. In this article, we present a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study details various applications of FL in wireless networks and highlights their challenges and limitations. The efficacy of FL is further explored for emerging prospective beyond-fifth-generation (B5G) and sixth-generation (6G) communication systems. The purpose of this survey is to provide an overview of the state of the art of FL applications in key wireless technologies, serving as a foundation for a firm understanding of the topic. Lastly, we offer a road forward for future research directions.
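The server-side aggregation step that underlies this training loop is compact enough to sketch; the following is a minimal FedAvg-style illustration in plain NumPy, with model "weights" reduced to a single parameter vector and local training stubbed out:

```python
# Minimal sketch (assumptions: NumPy; three clients with unequal local
# dataset sizes; local training stubbed as noisy copies of a true vector).
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client models, proportional to local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Each client trains locally and uploads only its model update, never raw data.
true_model = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(0)
clients = [true_model + rng.normal(0, 0.1, 3) for _ in range(3)]
sizes = [1000, 200, 50]                       # non-IID, unequal data volumes

global_model = fedavg(clients, sizes)
print(global_model)                           # server's new global model
```

Because only model parameters cross the network, both the communication cost and the privacy exposure of shipping raw datasets to the cloud are reduced, which is exactly the trade-off the abstract highlights.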


Applications of Multi-Agent Reinforcement Learning in Future Internet: A Comprehensive Survey

arXiv.org Artificial Intelligence

Future Internet involves several emerging technologies, such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of network entities involved. Each entity may need to make local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent Reinforcement Learning (RL) or Deep Reinforcement Learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn an optimal decision-making policy through interaction with unknown environments. However, such algorithms fail to model cooperation or competition among network entities, simply treating other entities as part of the environment, which may result in non-stationarity. Multi-agent Reinforcement Learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also other entities' policies. As a result, MARL can significantly improve the learning efficiency of network entities, and it has recently been used to solve various issues in emerging networks. In this paper, we review the applications of MARL in next-generation Internet: we first provide a tutorial introducing single-agent RL and MARL, and then survey a number of applications of MARL to emerging issues in the future Internet. These issues include network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV-aided networks, and network security.
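The independent-learning setup that the survey contrasts with full MARL can be sketched in a few lines; the following assumes a toy two-agent channel-access game (agents collide if they pick the same channel), where each agent treats the other as part of the environment:

```python
# Minimal sketch (assumptions: NumPy; a toy stateless channel-access game;
# all parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)
Q = np.zeros((2, 2))          # Q[agent, channel]
alpha, eps = 0.1, 0.1

for step in range(5000):
    eps_t = max(0.01, eps * (1 - step / 5000))   # decaying exploration
    acts = [rng.integers(2) if rng.random() < eps_t else int(Q[a].argmax())
            for a in range(2)]
    # Reward 1 per agent only when they choose different channels (no collision).
    r = 1.0 if acts[0] != acts[1] else 0.0
    for a in range(2):
        Q[a, acts[a]] += alpha * (r - Q[a, acts[a]])

print(Q)  # typically one agent settles on channel 0 and the other on channel 1
```

Because each agent's reward depends on the other's evolving policy, the environment is non-stationary from either agent's point of view; MARL methods address this by letting agents condition on, or learn models of, the other agents' policies.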