Networks
Topology Identification and Inference over Graphs
Mateos, Gonzalo, Shen, Yanning, Giannakis, Georgios B., Swami, Ananthram
Topology identification and inference of processes evolving over graphs arise in timely applications involving brain, transportation, financial, power, as well as social and information networks. This chapter provides an overview of graph topology identification and statistical inference methods for multidimensional relational data. Approaches for undirected links connecting graph nodes are outlined, going all the way from correlation metrics to covariance selection, and revealing ties with smooth signal priors. To account for directional (possibly causal) relations among nodal variables and address the limitations of linear time-invariant models in handling dynamic as well as nonlinear dependencies, a principled framework is surveyed to capture these complexities through judiciously selected kernels from a prescribed dictionary. Generalizations are also described via structural equations and vector autoregressions that can exploit attributes such as low rank, sparsity, acyclicity, and smoothness to model dynamic processes over possibly time-evolving topologies. It is argued that this approach supports both batch and online learning algorithms with convergence rate guarantees, is amenable to tensor (that is, multi-way array) formulations as well as decompositions that are well-suited for multidimensional network data, and can seamlessly leverage high-order statistical information.
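The covariance-selection route from correlation metrics to undirected topologies can be illustrated with a minimal sketch: estimate the precision matrix, convert it to partial correlations, and threshold. The function name, the regularization constant, the 0.2 threshold, and the toy data are illustrative assumptions, not the chapter's algorithm.

```python
import numpy as np

def infer_topology(X, thresh=0.2):
    """Infer an undirected graph from nodal signals via covariance selection.

    X: (num_samples, num_nodes) matrix of graph signals. An edge is declared
    wherever the partial correlation, derived from the precision matrix,
    exceeds `thresh` in magnitude.
    """
    S = np.cov(X, rowvar=False)                       # sample covariance
    K = np.linalg.inv(S + 1e-6 * np.eye(S.shape[0]))  # regularized precision
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)                           # partial correlations
    np.fill_diagonal(P, 0.0)
    return (np.abs(P) > thresh).astype(int)           # adjacency matrix

# Toy data: two strongly coupled nodes plus one independent node.
rng = np.random.default_rng(0)
z = rng.normal(size=500)
X = np.column_stack([z, z + 0.1 * rng.normal(size=500), rng.normal(size=500)])
A = infer_topology(X)
```

In practice a sparsity-promoting estimator (e.g. graphical lasso) replaces the plain matrix inverse, which is exactly the covariance-selection refinement the chapter surveys.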
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California > Orange County > Irvine (0.14)
- North America > United States > California > Monterey County > Pacific Grove (0.04)
- Banking & Finance (0.67)
- Health & Medicine > Therapeutic Area > Neurology (0.46)
- Telecommunications > Networks (0.34)
- Information Technology > Networks (0.34)
M3Net: A Multi-Metric Mixture of Experts Network Digital Twin with Graph Neural Networks
Guda, Blessed, Joe-Wong, Carlee
Abstract--The rise of 5G/6G network technologies promises to enable applications like autonomous vehicles and virtual reality, resulting in a significant increase in connected devices and necessarily complicating network management. Even worse, these applications often have strict, yet heterogeneous, performance requirements across metrics like latency and reliability. Much recent work has thus focused on developing the ability to predict network performance. However, traditional methods for network modeling, like discrete event simulators and emulation, often fail to balance accuracy and scalability. Network Digital Twins (NDTs), augmented by machine learning, present a viable solution by creating virtual replicas of physical networks for real-time simulation and analysis. State-of-the-art models, however, fall short of full-fledged NDTs, as they often focus only on a single performance metric or simulated network data. We introduce M3Net, a Multi-Metric Mixture-of-experts (MoE) NDT that uses a graph neural network architecture to estimate multiple performance metrics from an expanded set of network state data in a range of scenarios. We show that M3Net significantly enhances the accuracy of flow delay predictions by reducing the MAPE (Mean Absolute Percentage Error) from 20.06% to 17.39%, while also achieving 66.47% and 78.7% accuracy on jitter and packets dropped for each flow.

Emerging 5G and 6G mobile network architectures aim to support new applications like autonomous vehicles and mixed reality [1], [2], both of which require significantly expanded network capabilities. These and other new applications envisioned as part of the 5G and 6G network ecosystem will lead to massive numbers of connected devices with heterogeneous performance expectations, which increases the complexity and cost of managing communication networks [2].
For example, interactive applications like augmented reality generally require response latencies under 200ms [3], while safety-critical applications like autonomous vehicles might require highly reliable delivery of high-priority packets [4].
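For reference, the MAPE figure that M3Net improves from 20.06% to 17.39% is a standard metric; a minimal computation is sketched below, with made-up toy flow delays for illustration.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, the metric used to score flow-delay
    predictions: average relative error, expressed as a percentage."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Toy flow delays (ms): each prediction is off by 10%, so MAPE is 10%.
delays = [10.0, 20.0, 40.0]
preds = [11.0, 18.0, 44.0]
print(round(mape(delays, preds), 2))  # -> 10.0
```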
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Africa > Rwanda > Kigali > Kigali (0.04)
- Research Report > New Finding (0.46)
- Research Report > Promising Solution (0.34)
- Telecommunications > Networks (0.90)
- Information Technology (0.88)
AI/ML in 3GPP 5G Advanced -- Services and Architecture
Taksande, Pradnya, Kiran, Shwetha, Jha, Pranav, Chaporkar, Prasanna
Abstract--The 3rd Generation Partnership Project (3GPP), the standards body for mobile networks, is in the final phase of Release 19 standardization and is beginning Release 20. Artificial Intelligence/Machine Learning (AI/ML) has brought about a paradigm shift in technology and it is being adopted across industries and verticals. This paper focuses on the AI/ML related technological advancements and features introduced in Release 19 within the Service and System Aspects (SA) Technical specifications group of 3GPP. The advancements relate to two paradigms: (i) enhancements that AI/ML brought to the 5G advanced system (AI for network), e.g.

Artificial Intelligence (AI) and Machine Learning (ML) are transforming numerous industries and multiple aspects of modern life. From personalized recommendations on streaming platforms to real-time fraud detection in banking, AI/ML technologies are driving smarter decision-making across industries. In retail, they assist in inventory and supply chain management. In transportation, automotive vehicles rely on ML for object detection and navigation. As data continues to grow, these technologies are evolving rapidly, reshaping how we work, interact, and solve complex problems, making them central to innovation in today's world.
- Research Report (0.50)
- Overview (0.47)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Networks (0.93)
- Energy (0.93)
- Telecommunications > Networks (0.68)
AQUILA: A QUIC-Based Link Architecture for Resilient Long-Range UAV Communication
The proliferation of autonomous Unmanned Aerial Vehicles (UAVs) in Beyond Visual Line of Sight (BVLOS) applications is critically dependent on resilient, high-bandwidth, and low-latency communication links. Existing solutions face critical limitations: TCP's head-of-line blocking stalls time-sensitive data, UDP lacks reliability and congestion control, and cellular networks designed for terrestrial users degrade severely for aerial platforms. This paper introduces AQUILA, a cross-layer communication architecture built on QUIC to address these challenges. AQUILA contributes three key innovations: (1) a unified transport layer using QUIC's reliable streams for MAVLink Command and Control (C2) and unreliable datagrams for video, eliminating head-of-line blocking under unified congestion control; (2) a priority scheduling mechanism that structurally ensures C2 latency remains bounded and independent of video traffic intensity; (3) a UAV-adapted congestion control algorithm extending SCReAM with altitude-adaptive delay targeting and telemetry headroom reservation. AQUILA further implements 0-RTT connection resumption to minimize handover blackouts with application-layer replay protection, deployed over an IP-native architecture enabling global operation. Experimental validation demonstrates that AQUILA significantly outperforms TCP- and UDP-based approaches in C2 latency, video quality, and link resilience under realistic conditions, providing a robust foundation for autonomous BVLOS missions.
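AQUILA's second innovation, strict priority for C2 over video, can be sketched as a two-class scheduler in which a command frame always dequeues before any queued video frame. The class names and the heap-based design below are illustrative assumptions, not AQUILA's implementation.

```python
import heapq

C2, VIDEO = 0, 1  # lower value = higher priority, mirroring the C2-first rule

class PriorityLinkScheduler:
    """Strict-priority packet scheduler: C2 frames always dequeue before
    video frames, so C2 latency stays independent of video traffic load."""

    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, priority, packet):
        # The sequence number keeps FIFO order within a priority class.
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityLinkScheduler()
sched.enqueue(VIDEO, "frame-1")
sched.enqueue(C2, "set-waypoint")
sched.enqueue(VIDEO, "frame-2")
print(sched.dequeue())  # -> set-waypoint
```

The command enqueued last still leaves first, which is the structural guarantee the paper claims for bounded C2 latency.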
- North America > United States > New York > New York County > New York City (0.04)
- Asia > China > Jiangxi Province > Nanchang (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Information Technology > Security & Privacy (1.00)
- Telecommunications > Networks (0.69)
Beyond Connectivity: An Open Architecture for AI-RAN Convergence in 6G
Polese, Michele, Mohamadi, Niloofar, D'Oro, Salvatore, Bonati, Leonardo, Melodia, Tommaso
Abstract--Data-intensive Artificial Intelligence (AI) applications at the network edge demand a fundamental shift in Radio Access Network (RAN) design, from merely consuming AI for network optimization, to actively enabling distributed AI workloads. This presents a significant opportunity for network operators to monetize AI while leveraging existing infrastructure. To realize this vision, this article presents a novel converged O-RAN and AI-RAN architecture for unified orchestration and management of telecommunications and AI workloads on shared infrastructure. The proposed architecture extends the Open RAN principles of modularity, disaggregation, and cloud-nativeness to support heterogeneous AI deployments. We introduce two key architectural innovations: (i) the AI-RAN Orchestrator, which extends the O-RAN Service Management and Orchestration (SMO) to enable integrated resource allocation across RAN and AI workloads; and (ii) AI-RAN sites that provide distributed edge AI platforms with real-time processing capabilities. The proposed architecture enables flexible orchestration, meeting requirements for managing heterogeneous workloads at different time scales while maintaining open, standardized interfaces and multi-vendor interoperability.

This paper has been submitted to IEEE for publication. M. Polese, L. Bonati, and T. Melodia are with the Institute for the Wireless Internet of Things, Northeastern University, Boston, MA, USA. This article is based upon work partially supported by the NTIA PWSCIF under Award No. 25-60-IF054, the U.S. NSF under award CNS-2112471, and by OUSD(R&E) through Army Research Laboratory Cooperative Agreement Number W911NF-24-2-0065.
- North America > United States > Massachusetts > Suffolk County > Boston (0.24)
- North America > Canada > Ontario (0.04)
- Information Technology > Networks (0.48)
- Government > Military > Army (0.34)
- Telecommunications > Networks (0.34)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Systems & Languages > Distributed Architectures (0.35)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.30)
CNN-Enabled Scheduling for Probabilistic Real-Time Guarantees in Industrial URLLC
Alqudah, Eman, Khokhar, Ashfaq
Ensuring packet-level communication quality is vital for ultra-reliable, low-latency communications (URLLC) in large-scale industrial wireless networks. We enhance the Local Deadline Partition (LDP) algorithm by introducing a CNN-based dynamic priority prediction mechanism for improved interference coordination in multi-cell, multi-channel networks. Unlike LDP's static priorities, our approach uses a Convolutional Neural Network and graph coloring to adaptively assign link priorities based on real-time traffic, transmission opportunities, and network conditions. Assuming the first training phase is performed offline, our approach introduces minimal overhead while enabling more efficient resource allocation, boosting network capacity, SINR, and schedulability. Simulation results show SINR gains of up to 113%, 94%, and 49% over LDP across three network configurations, highlighting its effectiveness for complex URLLC scenarios.
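The graph-coloring step used alongside the CNN can be illustrated with a textbook greedy coloring over a link-conflict graph: links that receive the same color do not interfere and can share a transmission slot. The adjacency encoding and the largest-degree-first order below are assumptions for illustration, not the paper's exact procedure.

```python
def greedy_coloring(adj):
    """Greedy coloring of a conflict graph, highest-degree vertices first.
    adj maps each link to the set of links it interferes with; links that
    share a color can be scheduled in the same slot."""
    colors = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:  # smallest color not used by a colored neighbor
            c += 1
        colors[v] = c
    return colors

# Conflict graph: link A interferes with B and C; B and C do not conflict.
adj = {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}
colors = greedy_coloring(adj)
print(colors)  # -> {'A': 0, 'B': 1, 'C': 1}
```

B and C share a color and thus a slot, while A, which conflicts with both, is separated, which is the scheduling structure the priority assignment builds on.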
- North America > United States > Iowa (0.04)
- Europe (0.04)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Architecture > Real Time Systems (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Planning & Scheduling (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.89)
Quantifying the Privacy Implications of High-Fidelity Synthetic Network Traffic
Tran, Van, Liu, Shinan, Li, Tian, Feamster, Nick
To address the scarcity and privacy concerns of network traffic data, various generative models have been developed to produce synthetic traffic. However, synthetic traffic is not inherently privacy-preserving, and the extent to which it leaks sensitive information, and how to measure such leakage, remain largely unexplored. This challenge is further compounded by the diversity of model architectures, which shape how traffic is represented and synthesized. We introduce a comprehensive set of privacy metrics for synthetic network traffic, combining standard approaches like membership inference attacks (MIA) and data extraction attacks with network-specific identifiers and attributes. Using these metrics, we systematically evaluate the vulnerability of different representative generative models and examine the factors that influence attack success. Our results reveal substantial variability in privacy risks across models and datasets. MIA success ranges from 0% to 88%, and up to 100% of network identifiers can be recovered from generated traffic, highlighting serious privacy vulnerabilities. We further identify key factors that significantly affect attack outcomes, including training data diversity and how well the generative model fits the training data. These findings provide actionable guidance for designing and deploying generative models that minimize privacy leakage, establishing a foundation for safer synthetic network traffic generation.
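The membership inference attacks in the metric suite can be sketched in their simplest threshold form: samples the generative model fits unusually well are guessed to be training members. The Gaussian toy losses and the threshold value below are made-up assumptions for illustration, not the paper's attack.

```python
import numpy as np

def mia_threshold_attack(losses_members, losses_nonmembers, tau):
    """Threshold membership-inference attack: flag any sample whose model
    loss falls below tau as a suspected training member. Returns the
    attack's true-positive and false-positive rates."""
    tpr = float(np.mean(losses_members < tau))     # members correctly flagged
    fpr = float(np.mean(losses_nonmembers < tau))  # non-members wrongly flagged
    return tpr, fpr

# Toy losses: the model memorized its training traffic (low loss),
# while unseen traffic incurs visibly higher loss.
rng = np.random.default_rng(1)
members = rng.normal(0.2, 0.05, 1000)
nonmembers = rng.normal(0.8, 0.05, 1000)
tpr, fpr = mia_threshold_attack(members, nonmembers, tau=0.5)
```

A large TPR at low FPR, as in this deliberately easy toy case, is exactly the signature of the overfitting-driven leakage the paper identifies as a key risk factor.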
- North America > United States > Illinois > Cook County > Chicago (0.76)
- Asia > China > Hong Kong (0.76)
- North America > United States > New York > New York County > New York City (0.04)
- Telecommunications > Networks (1.00)
- Information Technology > Security & Privacy (1.00)
A Reinforcement Learning-Based Telematic Routing Protocol for the Internet of Underwater Things
Homaei, Mohammadhossein, Tarif, Mehran, Di Bartolo, Agustin, Morales, Victor Gonzalez, Vegas, Mar Avila
The Internet of Underwater Things (IoUT) faces severe challenges, including low bandwidth, high latency, node mobility, and limited energy. Routing protocols designed for terrestrial networks, such as RPL, perform poorly in these underwater settings. This paper presents RL-RPL-UA, a new routing protocol that uses reinforcement learning to improve performance in underwater environments. Each node runs a lightweight RL agent that selects the best parent node based on local information such as link quality, buffer level, packet delivery ratio, and remaining energy. RL-RPL-UA remains compatible with all standard RPL messages and adds a dynamic objective function to support real-time decision making. Aqua-Sim simulations demonstrate that RL-RPL-UA boosts packet delivery by up to 9.2%, uses 14.8% less energy per packet, and extends the network's lifetime by 80 seconds compared to previous approaches. These results show that RL-RPL-UA is a promising and energy-efficient way to route data in underwater networks.
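The per-node agent that picks a parent from local feedback can be sketched as a tabular epsilon-greedy learner; the class name, the scalar reward, and the two-parent toy scenario are illustrative assumptions, a simplified stand-in for RL-RPL-UA's agent rather than its actual design.

```python
import random

class ParentSelector:
    """Tabular epsilon-greedy agent: each node keeps one Q-value per
    candidate parent and updates it from observed routing feedback."""

    def __init__(self, parents, alpha=0.5, epsilon=0.1):
        self.q = {p: 0.0 for p in parents}
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self):
        if random.random() < self.epsilon:          # occasional exploration
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)          # otherwise best parent

    def update(self, parent, reward):
        # The reward could blend link quality, buffer level, packet
        # delivery ratio, and remaining energy, as in the protocol.
        self.q[parent] += self.alpha * (reward - self.q[parent])

random.seed(0)
agent = ParentSelector(["n1", "n2"])
for _ in range(50):
    p = agent.choose()
    agent.update(p, 1.0 if p == "n1" else 0.2)  # n1 offers better delivery
```

After a few dozen rounds the agent's Q-table ranks the better-performing parent highest, which is the mechanism behind the dynamic objective function.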
- Europe > Spain > Extremadura (0.04)
- Europe > Italy (0.04)
- Telecommunications > Networks (1.00)
- Energy (1.00)
Exploring Spiking Neural Networks for Binary Classification in Multivariate Time Series at the Edge
Ghawaly, James, Nicholson, Andrew, Schuman, Catherine, Diez, Dalton, Young, Aaron, Witherspoon, Brett
We present a general framework for training spiking neural networks (SNNs) to perform binary classification on multivariate time series, with a focus on step-wise prediction and high precision at low false alarm rates. The approach uses the Evolutionary Optimization of Neuromorphic Systems (EONS) algorithm to evolve sparse, stateful SNNs by jointly optimizing their architectures and parameters. Inputs are encoded into spike trains, and predictions are made by thresholding a single output neuron's spike counts. We also incorporate simple voting ensemble methods to improve performance and robustness. To evaluate the framework, we apply it with application-specific optimizations to the task of detecting low signal-to-noise ratio radioactive sources in gamma-ray spectral data. The resulting SNNs, with as few as 49 neurons and 66 synapses, achieve a 51.8% true positive rate (TPR) at a false alarm rate of 1/hr, outperforming PCA (42.7%) and deep learning (49.8%) baselines. A three-model any-vote ensemble increases TPR to 67.1% at the same false alarm rate. Hardware deployment on the microCaspian neuromorphic platform demonstrates 2mW power consumption and 20.2ms inference latency. We also demonstrate generalizability by applying the same framework, without domain-specific modification, to seizure detection in EEG recordings. An ensemble achieves 95% TPR with a 16% false positive rate, comparable to recent deep learning approaches with significant reduction in parameter count.
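The decision path described above, thresholding a single output neuron's spike count and then combining models with an any-vote ensemble, can be sketched directly; the toy spike counts and thresholds are made up for illustration.

```python
def classify(spike_count, threshold):
    """Binary decision from a single output neuron: fire a detection
    if its spike count over the window reaches the threshold."""
    return spike_count >= threshold

def any_vote(votes):
    """Any-vote ensemble: flag a detection if any member model fires,
    trading a higher false-alarm rate for a higher true-positive rate."""
    return any(votes)

# Three models score one time step; only the second crosses its threshold.
counts = [3, 7, 2]
thresholds = [5, 5, 5]
votes = [classify(c, t) for c, t in zip(counts, thresholds)]
print(any_vote(votes))  # -> True
```

A single dissenting model cannot suppress a detection under any-vote, which matches the reported TPR gain (51.8% to 67.1%) at the cost of more false alarms.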
- North America > United States > Louisiana > East Baton Rouge Parish > Baton Rouge (0.14)
- North America > United States > Tennessee > Knox County > Knoxville (0.14)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.04)
- Energy (1.00)
- Government > Regional Government (0.68)
- Telecommunications > Networks (0.46)
- Health & Medicine > Therapeutic Area > Neurology (0.46)