 Telecommunications


AutoBS: Autonomous Base Station Deployment Framework with Reinforcement Learning and Digital Twin Network

arXiv.org Artificial Intelligence

This paper introduces AutoBS, a reinforcement learning (RL)-based framework for optimal base station (BS) deployment in 6G networks. AutoBS leverages the Proximal Policy Optimization (PPO) algorithm and fast, site-specific pathloss predictions from PMNet to efficiently learn deployment strategies that balance coverage and capacity. Numerical results demonstrate that AutoBS achieves 95% of the capacity provided by exhaustive search methods for a single BS, and 90% for multiple BSs, while reducing inference time from hours to milliseconds, making it highly suitable for real-time applications. AutoBS offers a scalable and automated solution for large-scale 6G networks, addressing the challenges of dynamic environments with minimal computational overhead.

INTRODUCTION: The rollout of 6G networks demands higher base station (BS) density due to the use of higher frequencies such as millimeter-wave (mmWave), which offer enhanced bandwidth and low latency. However, these frequencies suffer from severe signal attenuation and limited propagation range, particularly in complex urban environments. As a result, dense BS deployment becomes essential to maintain reliable coverage and capacity.
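
To make the PPO-based deployment idea concrete, the following is a minimal sketch of a single-BS placement environment trained with PPO from stable-baselines3. The grid size, the log-distance pathloss model (a stand-in for PMNet's site-specific predictions), the coverage threshold, and the training budget are all illustrative assumptions rather than the paper's actual setup.

```python
# Toy single-BS placement environment: the agent picks one grid cell per episode
# and is rewarded with the resulting coverage ratio under a synthetic pathloss map.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class BSPlacementEnv(gym.Env):
    """One-shot episode: the agent picks a grid cell for the base station."""

    def __init__(self, grid=16):
        super().__init__()
        self.grid = grid
        self.action_space = spaces.Discrete(grid * grid)                  # candidate BS sites
        self.observation_space = spaces.Box(0.0, 1.0, (grid * grid,), np.float32)
        ys, xs = np.mgrid[0:grid, 0:grid]
        self.coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)

    def _obs(self):
        # A real system would feed a PMNet-style pathloss/coverage map here.
        return np.zeros(self.grid * self.grid, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self._obs(), {}

    def step(self, action):
        bs = self.coords[action]
        d = np.linalg.norm(self.coords - bs, axis=1) + 1.0
        pathloss_db = 30.0 + 35.0 * np.log10(d)        # toy log-distance pathloss model
        covered = pathloss_db < 60.0                   # assumed coverage threshold
        reward = float(covered.mean())                 # coverage ratio as reward
        return self._obs(), reward, True, False, {}


if __name__ == "__main__":
    model = PPO("MlpPolicy", BSPlacementEnv(), verbose=0)
    model.learn(total_timesteps=5_000)                 # tiny budget, demo only
```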


Provable Performance Bounds for Digital Twin-driven Deep Reinforcement Learning in Wireless Networks: A Novel Digital-Twin Bisimulation Metric

arXiv.org Artificial Intelligence

Digital twin (DT)-driven deep reinforcement learning (DRL) has emerged as a promising paradigm for wireless network optimization, offering a safe and efficient training environment for policy exploration. However, existing methods cannot, in theory, always guarantee the real-world performance of DT-trained policies before actual deployment, due to the absence of a universal metric for assessing a DT's ability to support reliable DRL training that transfers to physical networks. In this paper, we propose the DT bisimulation metric (DT-BSM), a novel metric based on the Wasserstein distance, to quantify the discrepancy between the Markov decision processes (MDPs) in the DT and in the corresponding real-world wireless network environment. We prove that for any DT-trained policy, the sub-optimality of its performance (regret) in real-world deployment is bounded by a weighted sum of the DT-BSM and the policy's sub-optimality within the MDP in the DT. A modified DT-BSM based on the total variation distance is then introduced to avoid the prohibitive computational complexity of the Wasserstein distance in large-scale wireless network scenarios. Further, to tackle the challenge of obtaining accurate transition probabilities of the real-world MDP for the DT-BSM calculation, we propose an empirical DT-BSM method based on statistical sampling. We prove that the empirical DT-BSM always converges to the desired theoretical one, and we quantitatively establish the relationship between the required sample size and the target level of approximation accuracy.

Index Terms: Digital twin, Markov decision process (MDP), deep reinforcement learning (DRL), transfer learning, bisimulation metric.

The long-term evolution of cellular networks, marked by growing scale, density, and heterogeneity, substantially increases the difficulty of wireless network optimization [1]. Deep reinforcement learning (DRL) has emerged as a promising solution for tackling extensive state and action spaces and nonconvex optimization problems. It has been successfully applied to various network optimization tasks, such as admission control [2], resource allocation [3], node selection [4], and task offloading [5] in wireless networks. To address the risks and costs of exploratory training in live networks, the concept of the digital twin (DT) has been introduced [7].
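
As a rough illustration of the total-variation variant, the sketch below estimates transition probabilities of a small discrete MDP from sampled (state, action, next-state) tuples in both the twin and the real environment, and reports the worst-case total variation distance over state-action pairs. The discretization, the uniform fallback for unvisited pairs, and the max aggregation are illustrative assumptions, not the paper's exact DT-BSM definition.

```python
# Empirical, total-variation-flavoured discrepancy between a digital-twin MDP
# and its real-world counterpart, estimated from sampled transitions.
import numpy as np


def empirical_transitions(samples, n_states, n_actions):
    """Estimate P(s' | s, a) from (s, a, s') transition samples."""
    counts = np.zeros((n_states, n_actions, n_states))
    for s, a, s_next in samples:
        counts[s, a, s_next] += 1
    totals = counts.sum(axis=2, keepdims=True)
    # Unvisited (s, a) pairs fall back to a uniform next-state distribution.
    return np.divide(counts, totals, out=np.full_like(counts, 1.0 / n_states),
                     where=totals > 0)


def tv_discrepancy(p_twin, p_real):
    """Max over (s, a) of the total variation distance between next-state laws."""
    tv = 0.5 * np.abs(p_twin - p_real).sum(axis=2)   # TV per (s, a) pair
    return float(tv.max())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 4, 2
    sim = [(rng.integers(S), rng.integers(A), rng.integers(S)) for _ in range(5000)]
    real = [(rng.integers(S), rng.integers(A), rng.integers(S)) for _ in range(5000)]
    p_dt = empirical_transitions(sim, S, A)
    p_rw = empirical_transitions(real, S, A)
    print("empirical TV-based discrepancy:", tv_discrepancy(p_dt, p_rw))
```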


Evaluation of Missing Data Imputation for Time Series Without Ground Truth

arXiv.org Machine Learning

The challenge of handling missing data in time series is critical for maintaining the accuracy and reliability of machine learning (ML) models in applications like fifth-generation mobile communication (5G) network management. Traditional methods for validating imputation rely on ground truth data, which is inherently unavailable for the missing values. This paper addresses this limitation by introducing two statistical metrics, the Wasserstein distance (WD) and the Jensen-Shannon divergence (JSD), to evaluate imputation quality without requiring ground truth. These metrics assess the alignment between the distributions of imputed and original data, providing a robust method for evaluating imputation performance based on internal structure and data consistency. We apply and test these metrics across several imputation techniques. Results demonstrate that WD and JSD are effective metrics for assessing the quality of missing data imputation, particularly in scenarios where ground truth data is unavailable.
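
A minimal sketch of the two metrics, assuming SciPy's implementations: the Wasserstein distance is computed directly on the value samples, while the Jensen-Shannon divergence is computed on histogram estimates of the two distributions. The mean-imputation baseline and the binning are placeholder choices, not the paper's pipeline.

```python
# Score imputation quality without ground truth by comparing the distribution
# of imputed values against the observed (non-missing) values.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.spatial.distance import jensenshannon


def distribution_scores(observed, imputed, bins=30):
    """Return (WD, JSD) between observed and imputed value distributions."""
    wd = wasserstein_distance(observed, imputed)
    lo, hi = min(observed.min(), imputed.min()), max(observed.max(), imputed.max())
    p, _ = np.histogram(observed, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(imputed, bins=bins, range=(lo, hi), density=True)
    # scipy's jensenshannon returns the JS *distance*; square it for the divergence.
    jsd = jensenshannon(p, q, base=2) ** 2
    return wd, jsd


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    series = rng.normal(0.0, 1.0, 2000)                 # synthetic KPI time series
    mask = rng.random(2000) < 0.2                       # 20% of values "missing"
    observed = series[~mask]
    imputed = np.full(mask.sum(), observed.mean())      # naive mean imputation
    print(distribution_scores(observed, imputed))
```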


Enhancing 5G O-RAN Communication Efficiency Through AI-Based Latency Forecasting

arXiv.org Artificial Intelligence

The increasing complexity and dynamic nature of 5G open radio access networks (O-RAN) pose significant challenges to maintaining low latency, high throughput, and resource efficiency. While existing methods leverage machine learning for latency prediction and resource management, they often lack real-world scalability and hardware validation. This paper addresses these limitations by presenting an artificial intelligence-driven latency forecasting system integrated into a functional O-RAN prototype. The system uses a bidirectional long short-term memory model to predict latency in real time within a scalable, open-source framework built with FlexRIC. Experimental results demonstrate the model's efficacy, achieving a loss metric below 0.04, thus validating its applicability in dynamic 5G environments.
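
The sketch below shows the general shape of such a forecaster: a bidirectional LSTM over a short window of past latency samples followed by a linear head, trained here on a synthetic trace. Layer sizes, window length, and the synthetic signal are assumptions; the paper's model is trained on measurements from the FlexRIC-based prototype.

```python
# Bidirectional LSTM that maps a window of past latency samples to the next value.
import torch
import torch.nn as nn


class BiLSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # forward + backward hidden states

    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict the next latency sample


if __name__ == "__main__":
    torch.manual_seed(0)
    window = 16
    t = torch.arange(0, 200, 0.1)
    latency = 5.0 + torch.sin(t) + 0.1 * torch.randn_like(t)   # synthetic latency trace (ms)
    xs = torch.stack([latency[i:i + window] for i in range(len(latency) - window)])
    ys = latency[window:]
    model = BiLSTMForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(50):                         # short demo training loop
        pred = model(xs.unsqueeze(-1)).squeeze(-1)
        loss = loss_fn(pred, ys)
        opt.zero_grad(); loss.backward(); opt.step()
    print("final training MSE:", float(loss))
```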


Atten-Transformer: A Deep Learning Framework for User App Usage Prediction

arXiv.org Artificial Intelligence

Accurately predicting smartphone app usage patterns is crucial for user experience optimization and targeted marketing. However, existing methods struggle to capture intricate dependencies in user behavior, particularly in sparse or complex usage scenarios. To address these challenges, we introduce Atten-Transformer, a novel model that integrates temporal attention with a Transformer network to dynamically identify and leverage key app usage patterns. Unlike conventional methods that primarily consider app order and duration, our approach employs a multi-dimensional feature representation, incorporating both feature encoding and temporal encoding to enhance predictive accuracy. The proposed attention mechanism effectively assigns importance to critical app usage moments, improving both model interpretability and generalization. Extensive experiments on multiple smartphone usage datasets, including the LSapp and Tsinghua App Usage datasets, demonstrate that Atten-Transformer consistently outperforms state-of-the-art models across different data splits. Specifically, our model achieves a 45.24% improvement in HR@1 on the Tsinghua dataset (Time-based Split) and an 18.25% improvement in HR@1 on the LSapp dataset (Cold Start Split), showcasing its robustness across diverse app usage scenarios. These findings highlight the potential of integrating adaptive attention mechanisms in mobile usage forecasting, paving the way for enhanced user engagement and resource allocation.
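
A hedged sketch of the general idea: app-ID embeddings combined with a simple duration encoding, a Transformer encoder, and an attention-pooling layer over time steps that yields logits for the next app. Dimensions and encodings are illustrative and should not be read as the Atten-Transformer architecture itself.

```python
# Next-app classifier: app embeddings + duration encoding -> Transformer encoder
# -> attention pooling over time steps -> logits over the app vocabulary.
import torch
import torch.nn as nn


class NextAppModel(nn.Module):
    def __init__(self, n_apps=200, d_model=64, n_heads=4):
        super().__init__()
        self.app_emb = nn.Embedding(n_apps, d_model)
        self.time_proj = nn.Linear(1, d_model)             # encodes usage duration
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.attn_score = nn.Linear(d_model, 1)            # temporal attention weights
        self.classifier = nn.Linear(d_model, n_apps)

    def forward(self, app_ids, durations):
        # app_ids: (batch, seq), durations: (batch, seq)
        h = self.app_emb(app_ids) + self.time_proj(durations.unsqueeze(-1))
        h = self.encoder(h)
        w = torch.softmax(self.attn_score(h), dim=1)       # (batch, seq, 1)
        pooled = (w * h).sum(dim=1)                        # attention-weighted summary
        return self.classifier(pooled)                     # logits over next app


if __name__ == "__main__":
    model = NextAppModel()
    apps = torch.randint(0, 200, (8, 20))                  # toy batch of usage sequences
    durs = torch.rand(8, 20)
    print(model(apps, durs).shape)                         # torch.Size([8, 200])
```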


Toward Agentic AI: Generative Information Retrieval Inspired Intelligent Communications and Networking

arXiv.org Artificial Intelligence

The increasing complexity and scale of modern telecommunications networks demand intelligent automation to enhance efficiency, adaptability, and resilience. Agentic AI has emerged as a key paradigm for intelligent communications and networking, enabling AI-driven agents to perceive, reason, decide, and act within dynamic networking environments. However, effective decision-making in telecom applications, such as network planning, management, and resource allocation, requires integrating retrieval mechanisms that support multi-hop reasoning, historical cross-referencing, and compliance with evolving 3GPP standards. This article presents a forward-looking perspective on generative information retrieval-inspired intelligent communications and networking, emphasizing the role of knowledge acquisition, processing, and retrieval in agentic AI for telecom systems. We first provide a comprehensive review of generative information retrieval strategies, including traditional retrieval, hybrid retrieval, semantic retrieval, knowledge-based retrieval, and agentic contextual retrieval. We then analyze their advantages, limitations, and suitability for various networking scenarios. Next, we survey their applications in communications and networking. Additionally, we introduce an agentic contextual retrieval framework to enhance telecom-specific planning by integrating multi-source retrieval, structured reasoning, and self-reflective validation. Experimental results demonstrate that our framework significantly improves answer accuracy, explanation consistency, and retrieval efficiency compared to traditional and semantic retrieval methods. Finally, we outline future research directions.
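
As a toy illustration of the hybrid retrieval the article surveys, the sketch below fuses a TF-IDF cosine score with a simple keyword-overlap score over a few 3GPP-flavoured placeholder snippets. The corpus, the fusion weight, and the scoring rule are illustrative assumptions, not the proposed agentic contextual retrieval framework.

```python
# Score-level fusion of two retrieval signals over a toy corpus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Handover procedures between gNBs are specified for NR mobility management.",
    "Network slicing allocates isolated logical networks over shared infrastructure.",
    "The RIC hosts xApps that control RAN functions via the E2 interface.",
]


def hybrid_retrieve(query, docs, alpha=0.5):
    """Rank docs by alpha * TF-IDF cosine + (1 - alpha) * keyword overlap."""
    vec = TfidfVectorizer().fit(docs + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    q_terms = set(query.lower().split())
    overlap = np.array([len(q_terms & set(d.lower().split())) / max(len(q_terms), 1)
                        for d in docs])
    scores = alpha * sims + (1 - alpha) * overlap
    return sorted(zip(scores, docs), reverse=True)


if __name__ == "__main__":
    for score, doc in hybrid_retrieve("how do xApps control the RAN", corpus):
        print(f"{score:.3f}  {doc}")
```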


Using Machine Learning to Detect Fraudulent SMSs in Chichewa

arXiv.org Artificial Intelligence

SMS-enabled fraud is of great concern globally. Building machine learning classifiers for SMS fraud requires suitable datasets for model training and validation. Most research has centred on datasets of SMSs in English. This paper introduces a first dataset for SMS fraud detection in Chichewa, a major language in Africa, and reports on experiments with machine learning algorithms for classifying SMSs in Chichewa as fraud or non-fraud. We answer the broader research question of how feasible it is to develop machine learning classification models for Chichewa SMSs. To do that, we created three datasets. A small dataset of SMSs in Chichewa was collected through primary research from a segment of the young population. We applied label-preserving text transformations to increase its size. The enlarged dataset was translated into English using two approaches: human translation and machine translation. The Chichewa and the translated datasets were then classified using random forest and logistic regression. Our findings indicate that both models achieved a promising accuracy of over 96% on the Chichewa dataset. There was a drop in performance when moving from the Chichewa to the translated datasets. This highlights the importance of data preprocessing, especially in multilingual or cross-lingual NLP tasks, and shows the challenges of relying on machine-translated text for training machine learning models. Our results underscore the importance of developing language-specific models for SMS fraud detection to optimise accuracy and performance. Since most machine learning models require data preprocessing, it is essential to investigate the impact of relying on English-specific tools for data preprocessing.
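
The classification recipe the paper evaluates (bag-of-words style features with logistic regression and random forest) can be sketched as below; the two placeholder English messages stand in for Chichewa SMSs, and the parameters are not drawn from the paper.

```python
# TF-IDF features with logistic regression and random forest on a toy corpus.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you have won a prize send your number to claim the money",   # placeholder fraud-style SMS
         "see you at the family meeting tomorrow afternoon"]           # placeholder benign SMS
labels = [1, 0]                                                        # 1 = fraud, 0 = non-fraud

for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=100)):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(texts))
```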


The Robustness of Structural Features in Species Interaction Networks

arXiv.org Artificial Intelligence

Species interaction networks are a powerful tool for describing ecological communities; they typically contain nodes representing species, and edges representing interactions between those species. For the purposes of drawing abstract inferences about groups of similar networks, ecologists often use graph topology metrics to summarize structural features. However, gathering the data that underlies these networks is challenging, which can lead to some interactions being missed. Thus, it is important to understand how much different structural metrics are affected by missing data. To address this question, we analyzed a database of 148 real-world bipartite networks representing four different types of species interactions (pollination, host-parasite, plant-ant, and seed-dispersal). For each network, we measured six different topological properties: number of connected components, variance in node betweenness, variance in node PageRank, largest eigenvalue, number of non-zero eigenvalues, and community structure as determined by four different algorithms. We then tested how these properties change as additional edges -- representing data that may have been missed -- are added to the networks. We found substantial variation in how robust different properties were to the missing data. For example, the Clauset-Newman-Moore and Louvain community detection algorithms showed much more gradual change as edges were added than the label propagation and Girvan-Newman algorithms did, suggesting that the former are more robust. Robustness also varied for some metrics based on interaction type. These results provide a foundation for selecting network properties to use when analyzing messy ecological network data.
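
A minimal sketch of the robustness probe, assuming NetworkX: generate a toy bipartite network, measure one structural property (variance of node betweenness), add a batch of plausibly missed edges, and measure again. The random graph and the 10% edge-addition step are illustrative, not the study's 148 empirical networks.

```python
# Probe how a structural metric shifts when plausibly-missed edges are added.
import random
import numpy as np
import networkx as nx
from networkx.algorithms import bipartite

random.seed(0)
G = bipartite.random_graph(20, 30, 0.15, seed=0)          # toy plant-pollinator web
plants = [n for n, d in G.nodes(data=True) if d["bipartite"] == 0]
pollinators = [n for n, d in G.nodes(data=True) if d["bipartite"] == 1]


def betweenness_variance(graph):
    return float(np.var(list(nx.betweenness_centrality(graph).values())))


print("original:", betweenness_variance(G))
missing = [(p, q) for p in plants for q in pollinators if not G.has_edge(p, q)]
G.add_edges_from(random.sample(missing, k=int(0.1 * G.number_of_edges())))
print("after adding ~10% extra edges:", betweenness_variance(G))
```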


Software defined demodulation of multiple frequency shift keying with dense neural network for weak signal communications

arXiv.org Artificial Intelligence

In this paper, we present the symbol and bit error rate performance of a weak-signal digital communications system. We investigate an orthogonal multiple frequency shift keying (MFSK) modulation scheme with a supervised machine learning demodulation approach based on a simple dense end-to-end artificial neural network. We focus on interference immunity over an additive white Gaussian noise channel at average signal-to-noise ratios from -20 dB to 0 dB.
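
A rough sketch of the setup, with scikit-learn's MLPClassifier standing in for the paper's dense end-to-end network: generate orthogonal M-FSK symbols, add white Gaussian noise at a chosen SNR, and train the network to recover the symbols from the raw samples. Tone spacing, symbol length, and the -10 dB operating point are assumptions.

```python
# Demodulate M-FSK symbols in AWGN with a small dense neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
M, n_samp, n_sym, snr_db = 8, 64, 4000, -10.0

symbols = rng.integers(0, M, n_sym)
t = np.arange(n_samp)
# Orthogonal tones: tone index k carries k+1 full cycles per symbol.
signals = np.cos(2 * np.pi * (symbols[:, None] + 1) * t[None, :] / n_samp)
sig_power = np.mean(signals ** 2)
noise_power = sig_power / (10 ** (snr_db / 10))
rx = signals + rng.normal(0, np.sqrt(noise_power), signals.shape)

x_tr, x_te, y_tr, y_te = train_test_split(rx, symbols, test_size=0.25, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
mlp.fit(x_tr, y_tr)
print("symbol error rate:", 1.0 - mlp.score(x_te, y_te))
```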


Space-O-RAN: Enabling Intelligent, Open, and Interoperable Non Terrestrial Networks in 6G

arXiv.org Artificial Intelligence

Non-terrestrial networks (NTNs) are essential for ubiquitous connectivity, providing coverage in remote and underserved areas. However, since NTNs are currently operated independently, they face challenges such as isolation, limited scalability, and high operational costs. Integrating satellite constellations with terrestrial networks offers a way to address these limitations while enabling adaptive and cost-efficient connectivity through the application of Artificial Intelligence (AI) models. This paper introduces Space-O-RAN, a framework that extends Open Radio Access Network (RAN) principles to NTNs. It employs hierarchical closed-loop control with distributed Space RAN Intelligent Controllers (Space-RICs) to dynamically manage and optimize operations across both domains. To enable adaptive resource allocation and network orchestration, the proposed architecture integrates real-time satellite optimization and control with AI-driven management and digital twin (DT) modeling. It incorporates distributed Space Applications (sApps) and dApps to ensure robust performance in highly dynamic orbital environments. A core feature is dynamic link-interface mapping, which allows network functions to adapt to specific application requirements and changing link conditions using all physical links on the satellite. Simulation results evaluate its feasibility by analyzing latency constraints across different NTN link types, demonstrating that intra-cluster coordination operates within viable signaling delay bounds, while offloading non-real-time tasks to ground infrastructure enhances scalability toward sixth-generation (6G) networks.
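
To give a feel for the latency analysis, the sketch below computes one-way propagation delays for typical LEO, MEO, and GEO altitudes and compares them against the commonly cited O-RAN control-loop timescales; the altitudes, nadir-path geometry, and loop bounds are simplifying assumptions rather than the paper's simulation setup.

```python
# One-way propagation delay for typical NTN altitudes vs. O-RAN control-loop timescales.
C = 299_792_458.0                                           # speed of light, m/s

altitudes_km = {"LEO": 550, "MEO": 8_000, "GEO": 35_786}    # typical orbital altitudes

for orbit, alt_km in altitudes_km.items():
    delay_ms = (alt_km * 1_000 / C) * 1_000                 # one-way, straight-down path
    loop = "real-time (<10 ms)" if delay_ms < 10 else \
           "near-real-time (10 ms-1 s)" if delay_ms < 1_000 else "non-real-time"
    print(f"{orbit}: {delay_ms:6.2f} ms one-way -> fits {loop} control loop")
```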