AI in Telecom - Ripe for Innovation

#artificialintelligence

From 2021 to 2028, the worldwide telecom services industry is projected to grow at a compound annual growth rate (CAGR) of 5.4%, and by 2025 the telecom equipment market is expected to grow at a rate of 11.23%. One of the main factors fuelling this market is increased investment in 5G infrastructure deployment, driven by a shift in customer preference toward next-generation technologies and smartphone devices. Increased need for value-added managed services, a growing number of mobile users, and surging demand for high-speed data connectivity are all major market drivers. Over the last few decades, the global communication network has clearly been one of the most important areas of continuing technical advancement.
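
A quick back-of-the-envelope check of what a 5.4% compound annual growth rate implies over the 2021-2028 window, sketched below; the starting market size is a placeholder, not a figure from the article.

```python
# CAGR projection: final = initial * (1 + rate) ** years.
initial_market = 1.0  # market size in 2021, arbitrary units (placeholder)
cagr = 0.054          # 5.4% compound annual growth rate
years = 7             # 2021 -> 2028

final_market = initial_market * (1 + cagr) ** years
print(f"Growth multiple over {years} years: {final_market:.3f}x")
# Compounds to roughly a 1.44x increase over the period.
```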


Machine Learning Assisted Security Analysis of 5G-Network-Connected Systems

arXiv.org Artificial Intelligence

The core network architecture of telecommunication systems has undergone a paradigm shift in fifth-generation (5G) networks. 5G networks have transitioned to software-defined infrastructures, thereby reducing their dependence on hardware-based network functions. New technologies, such as network function virtualization and software-defined networking, have been incorporated into the 5G core network (5GCN) architecture to enable this transition. This has resulted in significant improvements in the efficiency, performance, and robustness of the networks. However, it has also made the core network more vulnerable, as software systems are generally easier to compromise than hardware systems. In this article, we present a comprehensive security analysis framework for the 5GCN. The novelty of this approach lies in the creation and analysis of attack graphs of the software-defined and virtualized 5GCN through machine learning. This analysis points to 119 novel possible exploits in the 5GCN. We demonstrate that these possible exploits of 5GCN vulnerabilities generate five novel attacks on the 5G Authentication and Key Agreement protocol. We combine the attacks at the network, protocol, and application layers to generate complex attack vectors. In a case study, we use these attack vectors to find four novel security loopholes in WhatsApp running on a 5G network.
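
To make the attack-graph idea concrete, here is a minimal sketch, not the authors' actual framework: nodes stand for hypothetical 5GCN compromise states, edges for "this exploit enables that one", and enumerating paths from an entry point to a target yields candidate attack vectors that an ML model could then score.

```python
# Minimal attack-graph sketch (illustrative; component names are
# hypothetical, not taken from the paper's 119 identified exploits).
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("external_access", "sdn_controller_compromise"),
    ("external_access", "nfv_orchestrator_compromise"),
    ("sdn_controller_compromise", "amf_spoofing"),
    ("nfv_orchestrator_compromise", "amf_spoofing"),
    ("amf_spoofing", "aka_protocol_attack"),
])

# Each simple path from the attacker's entry point to the target is a
# candidate attack vector; an ML-assisted analysis would rank or
# extend these automatically.
for path in nx.all_simple_paths(g, "external_access", "aka_protocol_attack"):
    print(" -> ".join(path))
```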


Distributed Learning for Time-varying Networks: A Scalable Design

arXiv.org Artificial Intelligence

The wireless network is undergoing a trend from "connection of things" to "connection of intelligence". With data spread over communication networks and computing capability enhanced on devices, distributed learning has become a hot topic in both the industrial and academic communities. Many frameworks, such as federated learning and federated distillation, have been proposed. However, few of them take into account obstacles such as the time-varying topology that results from the characteristics of wireless networks. In this paper, we propose a distributed learning framework based on a scalable deep neural network (DNN) design. By exploiting the permutation equivalence and invariance properties of the learning tasks, DNNs of different scales for different clients can be built from two basic parameter sub-matrices. Further, model aggregation can also be conducted on these two sub-matrices to improve learning convergence and performance. Finally, simulation results verify the benefits of the proposed framework through comparison with several baselines.
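
The sub-matrix idea can be illustrated with a small sketch; the block-tiling rule below is an assumption for illustration, not the paper's exact construction.

```python
# Sketch: client layers of different widths are tiled from two shared
# parameter sub-matrices, and aggregation averages the sub-matrices
# rather than full per-client weight matrices. (Illustrative tiling
# rule; the paper's construction may differ.)
import numpy as np

rng = np.random.default_rng(0)
B1 = rng.normal(size=(4, 4))  # shared sub-matrix for diagonal blocks
B2 = rng.normal(size=(4, 4))  # shared sub-matrix for off-diagonal blocks

def build_layer(num_blocks):
    """Tile a (4*num_blocks) x (4*num_blocks) weight matrix from B1, B2."""
    return np.block([[B1 if i == j else B2 for j in range(num_blocks)]
                     for i in range(num_blocks)])

small_client = build_layer(2)   # 8x8 layer for a small client
large_client = build_layer(4)   # 16x16 layer for a large client

# Aggregation acts on the sub-matrices: average locally updated copies.
local_B1 = [B1 + rng.normal(scale=0.01, size=B1.shape) for _ in range(3)]
B1_aggregated = np.mean(local_B1, axis=0)
```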


How cloud computing can improve 5G wireless networks

#artificialintelligence

A great deal has been written about the technologies fueling 5G, especially how those technologies will improve users' connectivity experience. Similarly, much has been said about how ongoing developments in technology will usher in a new generation of network-aware applications. In this article, we discuss one key aspect of 5G technology and how it will impact the development of wireless network capacity. This is one of the more important but often neglected aspects of the evolution of wireless communication. It represents yet another important reason why the convergence of cloud computing and wireless communications makes so much sense.


Packet Routing with Graph Attention Multi-agent Reinforcement Learning

arXiv.org Artificial Intelligence

Packet routing is a fundamental problem in communication networks: it determines how packets are directed from their source nodes to their destination nodes through intermediate nodes. With the increasing complexity of network topology and highly dynamic traffic demand, conventional model-based and rule-based routing schemes show significant limitations due to their simplified and unrealistic model assumptions and their lack of flexibility and adaptability. Adding intelligence to network control is becoming a trend and is key to achieving high-efficiency network operation. In this paper, we develop a model-free, data-driven routing strategy by leveraging reinforcement learning (RL), where routers interact with the network and learn from experience to make good routing configurations for the future. Considering the graph nature of the network topology, we design a multi-agent RL framework in combination with a Graph Neural Network (GNN), tailored to the routing problem. Three deployment paradigms are explored: centralized, federated, and cooperative learning. Simulation results demonstrate that our algorithm outperforms several existing benchmark algorithms in terms of packet transmission delay and affordable load.
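
As a toy illustration of combining GNN-style message passing with per-router decisions, the sketch below scores next hops from one round of neighborhood aggregation; all weights are random stand-ins for what the paper's RL training would learn, and the topology is invented.

```python
# Toy GNN-assisted next-hop selection (illustrative only; weights are
# random stand-ins for parameters an RL agent would learn from delay
# and load feedback).
import numpy as np

rng = np.random.default_rng(1)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # toy topology
feat = {n: rng.normal(size=4) for n in adj}           # per-node features

W_self = rng.normal(size=(4, 4))
W_neigh = rng.normal(size=(4, 4))

def embed(n):
    # One message-passing step: combine self and mean neighbor features.
    neigh_mean = np.mean([feat[m] for m in adj[n]], axis=0)
    return np.tanh(feat[n] @ W_self + neigh_mean @ W_neigh)

def next_hop(router, dest):
    # Score each neighbor against the destination embedding and pick
    # the best; training would shape these scores to minimize delay.
    d = embed(dest)
    scores = {m: float(embed(m) @ d) for m in adj[router]}
    return max(scores, key=scores.get)

print(next_hop(0, dest=3))
```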


The Graph Neural Networking Challenge: A Worldwide Competition for Education in AI/ML for Networks

arXiv.org Artificial Intelligence

During the last decade, Machine Learning (ML) has increasingly become a hot topic in the field of computer networks and is expected to be gradually adopted for a plethora of control, monitoring, and management tasks in real-world deployments. This creates the need for new generations of students, researchers, and practitioners with a solid background in ML applied to networks. In 2020, the International Telecommunication Union (ITU) organized the "ITU AI/ML in 5G Challenge", an open global competition that introduced to a broad audience some of the main current challenges in ML for networks. This large-scale initiative gathered 23 different challenges proposed by network operators, equipment manufacturers, and academia, and attracted a total of 1300+ participants from 60+ countries. This paper narrates our experience organizing one of the proposed challenges: the "Graph Neural Networking Challenge 2020". We describe the problem presented to participants, the tools and resources provided, some organizational aspects and participation statistics, an outline of the top-3 awarded solutions, and a summary of lessons learned along the way. As a result, this challenge leaves a curated set of educational resources openly available to anyone interested in the topic.


AI and ML for Open RAN and 5G

#artificialintelligence

Fast, reliable, and low-latency data services are essential deliverables for telecom operators today. Realizing them is pushing operators to enhance infrastructure, expand network capacity, and mitigate service degradation. Unlike other industries, though, telecom networks are vast monoliths comprising fiber optic cables, proprietary components, and legacy hardware. Because of this, the work involves less enhancement and more shoring up of creaking infrastructure. Radio access networks (RAN) are the backbone of the telecommunications industry. However, the industry has been slow to incubate and evolve newer, cost-effective, and energy-efficient technologies due to monopolization by RAN component manufacturers.


Spectral goodness-of-fit tests for complete and partial network data

arXiv.org Machine Learning

Networks describe the often complex relationships between individual actors. In this work, we address the question of how to determine whether a parametric model, such as a stochastic block model or latent space model, fits a dataset well and will extrapolate to similar data. We use recent results in random matrix theory to derive a general goodness-of-fit test for dyadic data. We show that our method, when applied to a specific model of interest, provides a straightforward, computationally fast way of selecting parameters in a number of commonly used network models. For example, we show how to select the dimension of the latent space in latent space models. Unlike other network goodness-of-fit methods, our general approach does not require simulating from a candidate parametric model, which can be cumbersome with large graphs, and it eliminates the need to choose a particular set of statistics on the graph for comparison. It also allows us to perform goodness-of-fit tests on partial network data, such as Aggregated Relational Data. We show with simulations that our method performs well in many situations of interest. We analyze several empirically relevant networks and show that our method leads to improved community detection algorithms. R code implementing our method is available on GitHub.
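
The flavor of a random-matrix-theory test can be shown in a few lines: under a well-fitting null model with edge probabilities P, the scaled residual adjacency matrix behaves like a Wigner matrix, so its largest eigenvalue should sit near 2. The sketch below is a simplified caricature with an Erdős-Rényi null, not the paper's exact test statistic.

```python
# Simplified spectral goodness-of-fit sketch: if the fitted model's
# edge probabilities P are correct, (A - P) scaled by the edge
# standard deviation is approximately Wigner, and its largest
# eigenvalue concentrates near 2. (Illustration only.)
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 0.1
upper = np.triu((rng.random((n, n)) < p).astype(float), 1)
A = upper + upper.T                              # symmetric adjacency, no self-loops

P = np.full((n, n), p)
np.fill_diagonal(P, 0)                           # fitted null model (Erdős-Rényi here)
R = (A - P) / np.sqrt((n - 1) * p * (1 - p))     # scaled residual matrix

lam_max = np.linalg.eigvalsh(R).max()
print(f"largest residual eigenvalue: {lam_max:.3f} (near 2 under the null)")
```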


Active Learning for Network Traffic Classification: A Technical Survey

arXiv.org Artificial Intelligence

Network Traffic Classification (NTC) has become an important component in a wide variety of network management operations, e.g., Quality of Service (QoS) provisioning and security. Machine Learning (ML) algorithms, a common approach to NTC, can achieve reasonable accuracy and handle encrypted traffic. However, ML-based NTC techniques suffer from a shortage of labeled traffic data, which is the case in many real-world applications. This study investigates the applicability of an active form of ML, called Active Learning (AL), which reduces the need for a large number of labeled examples by actively choosing the instances that should be labeled. The study first provides an overview of NTC and its fundamental challenges, and surveys the literature on using ML techniques in NTC. It then introduces the concepts of AL, discusses them in the context of NTC, and reviews the literature in this field. Further, challenges and open issues in the use of AL for NTC are discussed. Additionally, as a technical survey, some experiments are conducted to show the broad applicability of AL in NTC. The simulation results show that AL can achieve high accuracy with a small amount of labeled data.
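
A minimal uncertainty-sampling loop, the canonical AL strategy such surveys cover, is sketched below on synthetic stand-in features; a real NTC setup would use flow statistics and application labels.

```python
# Pool-based active learning with uncertainty sampling (illustrative;
# synthetic features stand in for real traffic flow statistics).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(range(20))                         # small labeled seed set
pool = list(range(20, len(X)))                    # unlabeled pool

clf = LogisticRegression(max_iter=1000)
for _ in range(30):                               # 30 query rounds
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Query the instance the current model is least certain about.
    pick = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(pick)
    pool.remove(pick)

print(f"accuracy with {len(labeled)} labels: {clf.score(X, y):.3f}")
```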