Telecommunications
Predicting Drive Test Results in Mobile Networks Using Optimization Techniques
Taheri, MohammadJava, Diyanat, Abolfazl, Ahmadi, MortezaAli, Nazari, Ali
Mobile network operators constantly optimize their networks to ensure superior service quality and coverage. This optimization is crucial for maintaining an optimal user experience and requires extensive data collection and analysis. One of the primary methods for gathering this data is through drive tests, where technical teams use specialized equipment to collect signal information across various regions. However, drive tests are both costly and time-consuming, and they face challenges such as traffic conditions, environmental factors, and limited access to certain areas. These constraints make it difficult to replicate drive tests under similar conditions. In this study, we propose a method that enables operators to predict received signal strength at specific locations using data from other drive test points. By reducing the need for widespread drive tests, this approach allows operators to save time and resources while still obtaining the necessary data to optimize their networks and mitigate the challenges associated with traditional drive tests.
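As a rough illustration of the idea, the sketch below interpolates received signal strength at an undriven location from nearby drive-test samples using Gaussian-process regression. This is a generic baseline with hypothetical coordinates and hyperparameters, not the optimization method proposed in the paper.

```python
# Illustrative baseline only: Gaussian-process interpolation of drive-test RSS.
# The paper's own optimization-based method is not reproduced here; coordinates,
# measurements, and hyperparameters below are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical drive-test samples: (latitude, longitude) -> RSRP in dBm
measured_xy = np.array([[35.701, 51.391],
                        [35.703, 51.395],
                        [35.699, 51.400],
                        [35.705, 51.402]])
measured_rsrp = np.array([-85.0, -92.5, -101.0, -97.0])

# Smooth spatial kernel plus a noise term for measurement error
kernel = RBF(length_scale=0.005) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(measured_xy, measured_rsrp)

# Predict RSRP (and its uncertainty) at a location that was not driven
query_xy = np.array([[35.702, 51.397]])
pred, std = gp.predict(query_xy, return_std=True)
print(f"predicted RSRP: {pred[0]:.1f} dBm (+/- {std[0]:.1f} dB)")
```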
Application of Tabular Transformer Architectures for Operating System Fingerprinting
Pérez-Jove, Rubén, Munteanu, Cristian R., Pazos, Alejandro, Vázquez-Naya, Jose
Operating System (OS) fingerprinting is essential for network management and cybersecurity, enabling accurate device identification based on network traffic analysis. Traditional rule-based tools such as Nmap and p0f face challenges in dynamic environments due to frequent OS updates and obfuscation techniques. While Machine Learning (ML) approaches have been explored, Deep Learning (DL) models, particularly Transformer architectures, remain largely unexplored in this domain. This study investigates the application of Tabular Transformer architectures, specifically TabTransformer and FT-Transformer, for OS fingerprinting, leveraging structured network data from three publicly available datasets. Our experiments demonstrate that FT-Transformer generally outperforms traditional ML models, previous approaches, and TabTransformer across multiple classification levels (OS family, major, and minor versions). The results establish a strong foundation for DL-based OS fingerprinting, improving accuracy and adaptability in complex network environments. Furthermore, we ensure the reproducibility of our research by providing an open-source implementation.
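For readers unfamiliar with the FT-Transformer idea, the sketch below shows the general pattern on tabular data: each numeric feature is tokenized into its own embedding, a learnable [CLS] token is prepended, and a standard Transformer encoder feeds a classification head. Feature names, dimensions, and class counts are illustrative assumptions, not taken from the paper or its datasets.

```python
# Minimal FT-Transformer-style sketch for tabular OS fingerprinting (PyTorch).
# Feature counts, dimensions, and the number of OS classes are illustrative.
import torch
import torch.nn as nn

class FeatureTokenizer(nn.Module):
    """Maps each numeric column (e.g. TTL, window size, MSS) to its own embedding."""
    def __init__(self, n_features: int, d_token: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_features, d_token))
        self.bias = nn.Parameter(torch.zeros(n_features, d_token))

    def forward(self, x):                      # x: (batch, n_features)
        return x.unsqueeze(-1) * self.weight + self.bias  # (batch, n_features, d_token)

class TabularTransformerClassifier(nn.Module):
    def __init__(self, n_features=8, d_token=32, n_heads=4, n_layers=2, n_classes=5):
        super().__init__()
        self.tokenizer = FeatureTokenizer(n_features, d_token)
        self.cls = nn.Parameter(torch.randn(1, 1, d_token))
        layer = nn.TransformerEncoderLayer(d_model=d_token, nhead=n_heads,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_token, n_classes)

    def forward(self, x):
        tokens = self.tokenizer(x)
        cls = self.cls.expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(h[:, 0])              # classify from the [CLS] token

# Toy forward pass on random "flow feature" vectors
model = TabularTransformerClassifier()
logits = model(torch.randn(16, 8))
print(logits.shape)                            # torch.Size([16, 5])
```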
Digital Access Is Critical for Society Say Industry Leaders
Improving connectivity can both benefit those who need it most and boost the businesses that provide the service. That's the case telecom industry leaders made during a panel on Feb. 11 at the World Governments Summit in Dubai. Titled "Can we innovate our way to a more connected world?", the panel was hosted by TIME's Editor-in-Chief Sam Jacobs. During the course of the conversation, Margherita Della Valle, CEO of U.K.-based multinational telecom company Vodafone Group, said, "For society today, connectivity is essential. We are moving from the old divide in the world between the haves and the have-nots towards a new divide, which is between those who have access to connectivity and those who don't."
Qualcomm's Snapdragon 6 Gen 4 is its first mid-range chip with AI support
Qualcomm is bringing AI to its mid-range mobile chip lineup with the Snapdragon 6 Gen 4 Mobile Platform, the company announced. The new chip also promises improved CPU and GPU performance, lower power requirements, and faster Wi-Fi and mobile connectivity compared to the previous generation. The new AI features are made possible by Qualcomm's on-device generative AI support, enabling voice-activated assistants, background noise reduction during calls, and more. It's also the first 6-series Snapdragon processor with support for INT4, which allows generative AI to run more efficiently on small devices. Qualcomm is also promising an 11 percent improvement in CPU performance via its latest Kryo CPU and a 29 percent boost in GPU performance.
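To illustrate what INT4 support buys in general, independently of Qualcomm's actual on-device implementation, the toy sketch below quantizes a float weight matrix to signed 4-bit integers plus a per-tensor scale, the basic idea behind running generative models with less memory and bandwidth.

```python
# Illustration only: simple symmetric 4-bit (INT4) weight quantization in NumPy.
# This is a generic sketch and says nothing about Qualcomm's implementation.
import numpy as np

def quantize_int4(w: np.ndarray):
    """Quantize float weights to signed 4-bit integers in [-8, 7] plus a scale."""
    scale = np.max(np.abs(w)) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # 4-bit values held in int8 storage
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))
```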
SoftBank swings to a loss ahead of big Stargate AI bet
SoftBank Group swung to a loss in the December quarter due to a drop in the value of its Vision Fund unit's public holdings, boding ill for founder Masayoshi Son, who has to raise $500 billion for the Stargate artificial intelligence project. The Tokyo-based company reported a net loss of ¥369.2 billion ($2.4 billion) for the fiscal third quarter, compared with a profit of ¥950 billion a year earlier. The Vision Fund unit logged a ¥309.9 billion loss, hurting the bottom line after shares of public holdings such as Coupang and Didi Global gave up some of their gains from the previous quarter. Volatility in the Vision Fund's quarterly performance consistently dogs SoftBank, which has embarked on a project with OpenAI to invest $500 billion in the infrastructure needed to support and propel AI development. Japanese billionaire Son is exploring project financing to raise the money.
A Low-Complexity Plug-and-Play Deep Learning Model for Massive MIMO Precoding Across Sites
Karkan, Ali Hasanzadeh, Ibrahim, Ahmed, Frigon, Jean-François, Leduc-Primeau, François
Massive multiple-input multiple-output (mMIMO) technology has transformed wireless communication by enhancing spectral efficiency and network capacity. This paper proposes a novel deep learning-based mMIMO precoder to tackle the complexity challenges of existing approaches, such as weighted minimum mean square error (WMMSE), while leveraging meta-learning domain generalization and a teacher-student architecture to improve generalization across diverse communication environments. When deployed to a previously unseen site, the proposed model achieves excellent sum-rate performance while maintaining low computational complexity by avoiding matrix inversions and by using a simpler neural network structure. The model is trained and tested on a custom ray-tracing dataset composed of several base station locations. The experimental results indicate that our method effectively balances computational efficiency with high sum-rate performance while showcasing strong generalization in unseen environments. Furthermore, with fine-tuning, the proposed model outperforms WMMSE across all tested sites and SNR conditions while reducing complexity by at least 73×.
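The sketch below conveys the general flavour of learned precoding without matrix inversions: a small network maps the channel matrix directly to a power-normalized precoder and is trained by maximizing the sum rate. It is not the paper's teacher-student or meta-learning architecture, and the antenna counts, noise power, and layer sizes are assumed.

```python
# A minimal sketch (not the paper's architecture) of a learned mMIMO precoder:
# an MLP maps the channel matrix to a power-normalized precoder, trained by
# maximizing the downlink sum rate directly, with no matrix inversions.
import torch
import torch.nn as nn

N_TX, N_USERS, P_MAX, NOISE = 16, 4, 1.0, 0.1   # assumed system parameters

class MLPPrecoder(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = 2 * N_USERS * N_TX             # real + imag parts of H
        out_dim = 2 * N_TX * N_USERS            # real + imag parts of W
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, h):                        # h: (batch, N_USERS, N_TX) complex
        x = torch.cat([h.real, h.imag], dim=-1).flatten(1)
        w = self.net(x).view(-1, 2, N_TX, N_USERS)
        w = torch.complex(w[:, 0], w[:, 1])
        # project onto the total transmit-power constraint ||W||_F^2 <= P_MAX
        norm = torch.linalg.norm(w, dim=(1, 2), keepdim=True)
        return w * (P_MAX ** 0.5) / norm

def sum_rate(h, w):                              # h: (B,K,N), w: (B,N,K)
    g = torch.abs(torch.bmm(h, w)) ** 2          # |h_k w_j|^2, shape (B,K,K)
    sig = torch.diagonal(g, dim1=1, dim2=2)
    interf = g.sum(dim=2) - sig
    return torch.log2(1 + sig / (interf + NOISE)).sum(dim=1).mean()

model = MLPPrecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h = torch.randn(32, N_USERS, N_TX, dtype=torch.cfloat)   # random Rayleigh channels
loss = -sum_rate(h, model(h))                             # maximize sum rate
loss.backward(); opt.step()
print("sum rate [bit/s/Hz]:", -loss.item())
```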
Joint Transmit and Pinching Beamforming for PASS: Optimization-Based or Learning-Based?
Xu, Xiaoxia, Mu, Xidong, Liu, Yuanwei, Nallanathan, Arumugam
A novel pinching antenna system (PASS)-enabled downlink multi-user multiple-input single-output (MISO) framework is proposed. PASS consists of multiple waveguides spanning thousands of wavelengths, each equipped with numerous low-cost dielectric particles, named pinching antennas (PAs), that radiate signals into free space. The positions of the PAs can be reconfigured to change both the large-scale path losses and the phases of signals, thus facilitating the novel pinching beamforming design. A sum rate maximization problem is formulated, which jointly optimizes the transmit and pinching beamforming to adaptively achieve constructive signal enhancement and destructive interference mitigation. To solve this highly coupled and nonconvex problem, both optimization-based and learning-based methods are proposed. 1) For the optimization-based method, a majorization-minimization and penalty dual decomposition (MM-PDD) algorithm is developed, which handles the nonconvex complex exponential component using a Lipschitz surrogate function and then invokes PDD for problem decoupling. 2) For the learning-based method, a novel Karush-Kuhn-Tucker (KKT)-guided dual learning (KDL) approach is proposed, which enables KKT solutions to be reconstructed in a data-driven manner by learning dual variables. Following this idea, a KDL-Transformer algorithm is developed, which captures both inter-PA/inter-user dependencies and channel-state-information (CSI)-beamforming dependencies via attention mechanisms. Simulation results demonstrate that: i) the proposed PASS framework significantly outperforms the conventional massive multiple-input multiple-output (MIMO) system even with only a few PAs; ii) the proposed KDL-Transformer improves system performance by over 30% compared with the MM-PDD algorithm, while achieving millisecond-level response times on modern GPUs.
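To make the role of pinching beamforming concrete, the toy model below shows how PA positions along a waveguide reshape the effective channel through both the free-space path loss and the phase accumulated in-waveguide and in free space. It is a single-waveguide illustration with assumed geometry and wavelength; multi-user interference and the joint transmit/pinching optimization from the paper are omitted.

```python
# Toy numpy model (not the paper's formulation) of how pinching-antenna (PA)
# positions shape the effective channel: each PA's position sets both its
# free-space path loss to a user and its phase (in-waveguide plus free-space
# propagation). Geometry, wavelength, and noise power are assumed.
import numpy as np

WAVELEN = 0.01                     # 30 GHz carrier, metres (assumed)
N_EFF = 1.4                        # effective refractive index inside the waveguide (assumed)
NOISE = 1e-9

def effective_channel(pa_positions, user_xyz, waveguide_y=0.0, waveguide_z=3.0):
    """Channel from one waveguide's PAs (x-coordinates along the guide) to one user."""
    pa_xyz = np.stack([pa_positions,
                       np.full_like(pa_positions, waveguide_y),
                       np.full_like(pa_positions, waveguide_z)], axis=-1)
    d_free = np.linalg.norm(pa_xyz - user_xyz, axis=-1)          # PA -> user distance
    d_guide = pa_positions                                        # feed -> PA along the guide
    phase = 2 * np.pi / WAVELEN * (d_free + N_EFF * d_guide)
    gain = (WAVELEN / (4 * np.pi * d_free)) * np.exp(-1j * phase)
    return gain.sum()               # PAs on one waveguide radiate the same signal

users = np.array([[5.0, 2.0, 0.0], [12.0, -3.0, 0.0]])
pa_x = np.array([4.8, 5.1, 11.9])   # candidate PA positions along the waveguide (m)

h = np.array([effective_channel(pa_x, u) for u in users])
snr = np.abs(h) ** 2 / NOISE         # equal-power toy case; interference ignored
print("per-user rate [bit/s/Hz]:", np.log2(1 + snr))
```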
Mapping the Landscape of Generative AI in Network Monitoring and Management
Bovenzi, Giampaolo, Cerasuolo, Francesco, Ciuonzo, Domenico, Di Monda, Davide, Guarino, Idio, Montieri, Antonio, Persico, Valerio, Pescapè, Antonio
Generative Artificial Intelligence (GenAI) models such as LLMs, GPTs, and Diffusion Models have recently gained widespread attention from both the research and the industrial communities. This survey explores their application in network monitoring and management, focusing on prominent use cases, as well as challenges and opportunities. We discuss how network traffic generation and classification, network intrusion detection, networked system log analysis, and network digital assistance can benefit from the use of GenAI models. Additionally, we provide an overview of the available GenAI models, datasets for large-scale training phases, and platforms for the development of such models. Finally, we discuss research directions that potentially mitigate the roadblocks to the adoption of GenAI for network monitoring and management. Our investigation aims to map the current landscape and pave the way for future research in leveraging GenAI for network monitoring and management.
Learning Invariant Representations of Graph Neural Networks via Cluster Generalization
Beijing University of Posts and Telecommunications
Graph neural networks (GNNs) have become increasingly popular in modeling graph-structured data due to their ability to learn node representations by aggregating local structure information. However, it is widely acknowledged that the test graph structure may differ from the training graph structure, resulting in a structure shift. In this paper, we experimentally find that the performance of GNNs drops significantly when such a structure shift occurs, suggesting that the learned models may be biased towards specific structure patterns.
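The sketch below spells out the local aggregation step of a GCN-style layer and why a different test-time adjacency matrix changes the resulting node representations. Graph sizes and dimensions are illustrative, and the code is not the paper's method.

```python
# Minimal sketch of local-structure aggregation in a GCN layer, to make
# concrete why a shifted test-time adjacency matrix changes the learned
# representations; dimensions and graphs are illustrative.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(self.lin(a_norm @ x))   # aggregate neighbours, then transform

x = torch.randn(5, 8)                              # 5 nodes, 8 features each
adj_train = torch.tensor([[0,1,1,0,0],[1,0,1,0,0],[1,1,0,0,0],
                          [0,0,0,0,1],[0,0,0,1,0]], dtype=torch.float)
adj_test = torch.bernoulli(torch.full((5, 5), 0.5))  # a shifted test structure
adj_test = torch.triu(adj_test, 1); adj_test = adj_test + adj_test.T

layer = GCNLayer(8, 4)
print(torch.dist(layer(x, adj_train), layer(x, adj_test)))  # same nodes, different embeddings
```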
Efficient Robust Bayesian Optimization for Arbitrary Uncertain Inputs
Lyu, Junlong (Huawei Noah's Ark Lab, China)
Bayesian Optimization (BO) is a sample-efficient optimization algorithm widely employed across various applications. In some challenging BO tasks, input uncertainty arises due to the inevitable randomness in the optimization process, such as machining errors, execution noise, or contextual variability. This uncertainty causes the input to deviate from its intended value before evaluation, resulting in significant performance fluctuations in the final result. In this paper, we introduce a novel robust Bayesian Optimization algorithm, AIRBO, which can effectively identify a robust optimum that performs consistently well under arbitrary input uncertainty. Our method directly models uncertain inputs of arbitrary distributions by empowering the Gaussian Process with the Maximum Mean Discrepancy (MMD) and further accelerates the posterior inference via Nyström approximation. A rigorous theoretical regret bound is established under MMD estimation error, and extensive experiments on synthetic functions and real problems demonstrate that our approach can handle various input uncertainties and achieve state-of-the-art performance.
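The sketch below illustrates the quantity at the heart of this modelling choice: an empirical MMD² between two sample sets representing uncertain inputs, turned into a kernel entry a GP could use to compare input distributions. It is a generic illustration with assumed hyperparameters, not the AIRBO implementation, and it omits the Nyström acceleration.

```python
# Sketch of the core quantity behind MMD-based modelling of uncertain inputs:
# an empirical MMD^2 between two sample sets, and an MMD-derived kernel entry
# that a GP could use to compare two input *distributions*. Generic illustration
# only; hyperparameters are assumed and Nystrom acceleration is not included.
import numpy as np

def rbf(a, b, gamma=1.0):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2(xs, ys, gamma=1.0):
    """Biased empirical MMD^2 between samples xs ~ P and ys ~ Q."""
    return rbf(xs, xs, gamma).mean() + rbf(ys, ys, gamma).mean() - 2 * rbf(xs, ys, gamma).mean()

rng = np.random.default_rng(0)
# Two uncertain query points: the same nominal location with different execution noise
p_samples = rng.normal(loc=0.3, scale=0.05, size=(200, 1))
q_samples = rng.normal(loc=0.3, scale=0.20, size=(200, 1))

d2 = mmd2(p_samples, q_samples)
lengthscale = 0.5                               # hypothetical GP hyperparameter
k_pq = np.exp(-d2 / (2 * lengthscale**2))       # kernel entry between the two distributions
print(f"MMD^2 = {d2:.4f}, k(P, Q) = {k_pq:.4f}")
```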