Communications: Overviews


AI-Powered Urban Transportation Digital Twin: Methods and Applications

arXiv.org Artificial Intelligence

We present a survey paper on methods and applications of digital twins (DT) for urban traffic management. While the majority of studies on the DT focus on its "eyes," the emerging sensing and perception capabilities such as object detection and tracking, what really distinguishes the DT from a traditional simulator lies in its "brain": the prediction and decision-making capabilities for extracting patterns and making informed decisions from what has been seen and perceived. To add value to urban transportation management, DTs need to be powered by artificial intelligence and complemented with low-latency, high-bandwidth sensing and networking technologies. We first review the DT pipeline leveraging cyber-physical systems and then propose our DT architecture deployed on a real-world testbed in New York City. This survey can serve as a pointer to help researchers and practitioners identify challenges and opportunities for the development of DTs; a bridge to initiate conversations across disciplines; and a road map to exploiting the potential of DTs for diverse urban transportation applications.


How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent

arXiv.org Artificial Intelligence

End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns. This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI "assistants" within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features.


Global SLAM in Visual-Inertial Systems with 5G Time-of-Arrival Integration

arXiv.org Artificial Intelligence

This paper presents a novel approach to improve global localization and mapping in indoor drone navigation by integrating 5G Time of Arrival (ToA) measurements into ORB-SLAM3, a Simultaneous Localization and Mapping (SLAM) system. By incorporating ToA data from 5G base stations, we align the SLAM's local reference frame with a global coordinate system, enabling accurate and consistent global localization. We extend ORB-SLAM3's optimization pipeline to integrate ToA measurements alongside bias estimation, transforming the inherently local estimation into a globally consistent one. This integration effectively resolves scale ambiguity in monocular SLAM systems and enhances robustness, particularly in challenging scenarios where standard SLAM may fail. Our method is evaluated using five real-world indoor datasets collected with RGB-D cameras and inertial measurement units (IMUs), augmented with simulated 5G ToA measurements at 28 GHz and 78 GHz frequencies using MATLAB and QuaDRiGa. We tested four SLAM configurations: RGB-D, RGB-D-Inertial, Monocular, and Monocular-Inertial. The results demonstrate that while local estimation accuracy remains comparable due to the high precision of RGB-D-based ORB-SLAM3 compared to ToA measurements, the inclusion of ToA measurements facilitates robust global positioning. In scenarios where standard mono-inertial ORB-SLAM3 loses tracking, our approach maintains accurate localization throughout the trajectory.
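
To make the frame-alignment idea concrete, the following is a minimal sketch, assuming a 2D rigid transform, a single shared range bias, and known base-station positions, of how ToA ranges can anchor a local SLAM trajectory in a global frame; the paper's method instead integrates such residuals directly into ORB-SLAM3's optimization pipeline, and all names and values below are illustrative.

```python
# A minimal sketch (not the paper's ORB-SLAM3 integration) of anchoring a local
# SLAM trajectory in a global frame with 5G ToA ranges. Assumptions: a 2D rigid
# transform (yaw, tx, ty), a single range bias shared by all measurements, and
# known base-station positions.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # speed of light [m/s]

def toa_residuals(params, local_traj, bs_positions, toa_meas):
    """Residuals between ToA-derived ranges and ranges predicted after mapping
    local SLAM positions into the global frame."""
    yaw, tx, ty, range_bias = params
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    global_traj = local_traj @ R.T + np.array([tx, ty])
    res = []
    for (k, j), toa in toa_meas.items():                 # (pose index, base-station index)
        predicted = np.linalg.norm(global_traj[k] - bs_positions[j])
        measured = C * toa - range_bias                  # bias absorbs the receiver clock offset
        res.append(predicted - measured)
    return np.asarray(res)

# Toy usage: three base stations, a short local trajectory, synthetic ToA readings.
bs = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0]])
traj_local = np.array([[1.0, 1.0], [2.0, 1.5], [3.0, 2.0]])
yaw_true, t_true, bias_true = 0.3, np.array([5.0, -2.0]), 3.0
R_true = np.array([[np.cos(yaw_true), -np.sin(yaw_true)],
                   [np.sin(yaw_true),  np.cos(yaw_true)]])
traj_global = traj_local @ R_true.T + t_true
meas = {(k, j): (np.linalg.norm(traj_global[k] - bs[j]) + bias_true) / C
        for k in range(len(traj_local)) for j in range(len(bs))}

sol = least_squares(toa_residuals, x0=np.zeros(4), args=(traj_local, bs, meas))
print("estimated [yaw, tx, ty, range_bias]:", sol.x)
```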


A survey on pioneering metaheuristic algorithms between 2019 and 2024

arXiv.org Artificial Intelligence

With innovation accelerating, selecting the most effective algorithms has become increasingly demanding for researchers and practitioners alike. Recognizing this, we conducted an in-depth review of metaheuristics introduced in the past six years, focusing on their influence and effectiveness. We evaluated these algorithms across essential criteria: citation frequency, diversity in tackled problem types, code availability, ease of parameter tuning, introduction of novel mechanisms, and resilience to issues like stagnation and early convergence. Out of 158 algorithms, we identified 23 that set themselves apart, each contributing unique solutions to long-standing optimization challenges. These algorithms stand out for their versatility and innovation, positioning them as valuable assets for advancing research and addressing complex real-world problems. Our review offers a detailed analysis of these algorithms, comparing their strengths, limitations, similarities, and applications, while highlighting promising trends and future pathways in metaheuristic research.


SoK: On the Offensive Potential of AI

arXiv.org Artificial Intelligence

Our society increasingly benefits from Artificial Intelligence (AI). Unfortunately, more and more evidence shows that AI is also used for offensive purposes. Prior works have revealed various examples of use cases in which the deployment of AI can lead to violation of security and privacy objectives. No extant work, however, has been able to draw a holistic picture of the offensive potential of AI. In this SoK paper we seek to lay the groundwork for a systematic analysis of the heterogeneous capabilities of offensive AI. In particular, we (i) account for AI risks to both humans and systems while (ii) consolidating and distilling knowledge from academic literature, expert opinions, industrial venues, as well as laypeople -- all of which are valuable sources of information on offensive AI. To enable alignment of such diverse sources of knowledge, we devise a common set of criteria reflecting essential technological factors related to offensive AI. With the help of such criteria, we systematically analyze: 95 research papers; 38 InfoSec briefings (from, e.g., BlackHat); the responses of a user study (N=549) entailing individuals with diverse backgrounds and expertise; and the opinion of 12 experts. Our contributions not only reveal concerning ways (some of which were overlooked by prior work) in which AI can be offensively used today, but also represent a foothold to address this threat in the years to come.


GDM4MMIMO: Generative Diffusion Models for Massive MIMO Communications

arXiv.org Artificial Intelligence

Massive multiple-input multiple-output (MIMO) offers significant advantages in spectral and energy efficiencies, positioning it as a cornerstone technology of fifth-generation (5G) wireless communication systems and a promising solution for the burgeoning data demands anticipated in sixth-generation (6G) networks. In recent years, with the continuous advancement of artificial intelligence (AI), a multitude of task-oriented generative foundation models (GFMs) have emerged, achieving remarkable performance in various fields such as computer vision (CV), natural language processing (NLP), and autonomous driving. As a pioneering force, these models are driving the paradigm shift in AI towards generative AI (GenAI). Among them, the generative diffusion model (GDM), as one of the state-of-the-art families of generative models, demonstrates an exceptional capability to learn implicit prior knowledge and robust generalization, thereby enhancing its versatility and effectiveness across diverse applications. In this paper, we delve into the potential applications of GDM in massive MIMO communications. Specifically, we first provide an overview of massive MIMO communication, the framework of GFMs, and the working mechanism of GDM. Following this, we discuss recent research advancements in the field and present a case study of near-field channel estimation based on GDM, demonstrating its promising potential for facilitating efficient ultra-dimensional channel state information (CSI) acquisition in the context of massive MIMO communications. Finally, we highlight several pressing challenges in future mobile communications and identify promising research directions surrounding GDM.
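
For readers unfamiliar with the underlying mechanism, below is a toy sketch of the forward noising and reverse denoising steps that a GDM-based channel estimator builds on, under the assumption of a zero-mean Gaussian channel prior with an analytic denoiser standing in for a trained score network; it is not the paper's model, and all dimensions and schedule values are illustrative.

```python
# A toy sketch of the generative diffusion mechanism (forward noising plus
# reverse ancestral sampling) applied to a vectorized "channel". The analytic
# Gaussian denoiser below stands in for the learned score network a GDM-based
# CSI estimator would train; it is an illustrative assumption only.
import numpy as np

T = 200
betas = np.linspace(1e-4, 0.02, T)          # DDPM-style noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

sigma_h = 1.0                                # prior std of the toy channel entries
dim = 64                                     # flattened channel dimension
rng = np.random.default_rng(0)

def eps_hat(x_t, t):
    """Optimal noise predictor for a zero-mean Gaussian prior N(0, sigma_h^2 I).
    A trained neural network would replace this in a real GDM pipeline."""
    ab = alpha_bars[t]
    return np.sqrt(1.0 - ab) * x_t / (ab * sigma_h**2 + 1.0 - ab)

# Forward process: progressively noise a ground-truth channel sample
# (shown only to illustrate the corruption; not used below).
h0 = rng.normal(0.0, sigma_h, dim)
x_T = np.sqrt(alpha_bars[-1]) * h0 + np.sqrt(1 - alpha_bars[-1]) * rng.normal(size=dim)

# Reverse process: ancestral sampling from pure noise back to a channel draw.
x = rng.normal(size=dim)
for t in reversed(range(T)):
    z = rng.normal(size=dim) if t > 0 else np.zeros(dim)
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps_hat(x, t)) / np.sqrt(alphas[t]) + np.sqrt(betas[t]) * z

print("sample std (should be near sigma_h):", x.std())
```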


Exploring Graph Mamba: A Comprehensive Survey on State-Space Models for Graph Learning

arXiv.org Artificial Intelligence

Graph Mamba, a powerful graph embedding technique, has emerged as a cornerstone in various domains, including bioinformatics, social networks, and recommendation systems. This survey is the first comprehensive study devoted to Graph Mamba, addressing critical gaps in understanding its applications, challenges, and future potential. We start by offering a detailed explanation of the original Graph Mamba architecture, highlighting its key components and underlying mechanisms. Subsequently, we explore the most recent modifications and enhancements proposed to improve its performance and applicability. To demonstrate the versatility of Graph Mamba, we examine its applications across diverse domains. A comparative analysis of Graph Mamba and its variants is conducted to shed light on their unique characteristics and potential use cases. Furthermore, we identify potential areas where Graph Mamba can be applied in the future, highlighting its potential to revolutionize data analysis in these fields. Finally, we address the current limitations and open research questions associated with Graph Mamba. By acknowledging these challenges, we aim to stimulate further research and development in this promising area. This survey serves as a valuable resource for both newcomers and experienced researchers seeking to understand and leverage the power of Graph Mamba.
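
As a rough illustration of the selective state-space recurrence at the core of Mamba-style models, the sketch below runs a simplified scan over a sequence of node features; the graph-specific components of Graph Mamba (node ordering, structural encodings, message passing) are omitted, and all names and dimensions are illustrative assumptions rather than the architecture described in the surveyed papers.

```python
# A minimal, simplified sketch of a selective state-space (Mamba-style) scan
# over a sequence of node features. Graph-specific pieces are omitted; this is
# an illustration of the recurrence only, not the Graph Mamba architecture.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_state, num_nodes = 8, 16, 5

node_feats = rng.normal(size=(num_nodes, d_model))     # node sequence after some graph ordering

A = -np.abs(rng.normal(size=(d_state,)))               # stable diagonal state matrix
W_B = rng.normal(size=(d_model, d_state)) * 0.1        # input-dependent ("selective") projections
W_C = rng.normal(size=(d_model, d_state)) * 0.1
W_dt = rng.normal(size=(d_model,)) * 0.1

def selective_scan(x):
    """Run a discretized selective SSM over the node sequence."""
    h = np.zeros(d_state)
    outputs = []
    for x_k in x:
        dt = np.log1p(np.exp(x_k @ W_dt))               # softplus step size, input-dependent
        B_k = x_k @ W_B                                  # selection: B and C depend on the token
        C_k = x_k @ W_C
        A_bar = np.exp(dt * A)                           # zero-order-hold discretization (diagonal)
        h = A_bar * h + dt * B_k * np.mean(x_k)          # simplified scalar-input state update
        outputs.append(C_k @ h)
    return np.array(outputs)

print(selective_scan(node_feats))                        # one scalar readout per node
```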


CAG: Chunked Augmented Generation for Google Chrome's Built-in Gemini Nano

arXiv.org Artificial Intelligence

Integrating Gemini Nano into Google Chrome marks a revolutionary shift in browser capabilities, transforming it from a simple content delivery platform into an intelligent processing environment. This native AI integration addresses several longstanding challenges: it eliminates external API dependencies, enhances privacy through local processing, and democratizes AI access by making these capabilities available to all Chrome users without additional software or API requirements. However, browser-based AI models face a significant constraint in their limited context window size, which restricts their ability to process larger inputs like extensive documents or codebases. This limitation emerges from the necessary balance between model capability and browser performance constraints, potentially hindering real-world applications requiring substantial data processing. To address this challenge, we introduce Chunked Augmented Generation (CAG), an architectural framework specifically designed for Chrome's Gemini Nano implementation.
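
As a rough illustration of the chunk-then-aggregate idea, the sketch below splits an oversized input into bounded, overlapping chunks and queries a model once per chunk; the `run_model` callback is a hypothetical stand-in for a call to the browser's built-in prompt API, and the token budget and overlap values are illustrative assumptions rather than CAG's actual parameters.

```python
# A minimal sketch of the chunk-then-aggregate idea behind CAG: split an input
# that exceeds the model's context window into bounded chunks, query the model
# per chunk, then combine the partial outputs. `run_model` is a hypothetical
# stand-in for the browser's built-in Gemini Nano prompt call.
from typing import Callable, List

def chunk_text(text: str, max_tokens: int = 1024, overlap: int = 64) -> List[str]:
    """Greedy word-based chunking with a small overlap between chunks so
    context is not lost at chunk boundaries (word count approximates tokens)."""
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        end = min(start + max_tokens, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        start = end - overlap
    return chunks

def chunked_augmented_generation(text: str,
                                 run_model: Callable[[str], str],
                                 instruction: str = "Summarize:") -> str:
    """Run the model over each chunk and concatenate the per-chunk outputs.
    A real pipeline might instead merge the outputs with a second model pass."""
    partial = [run_model(f"{instruction}\n{chunk}") for chunk in chunk_text(text)]
    return "\n".join(partial)

# Toy usage with a fake model that just reports the chunk length.
fake_model = lambda prompt: f"[processed {len(prompt.split())} words]"
print(chunked_augmented_generation("lorem ipsum " * 3000, fake_model))
```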


Towards Cognitive Service Delivery on B5G through AIaaS Architecture

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is pivotal in advancing mobile network systems by facilitating smart capabilities and automation. The transition from 4G to 5G has substantial implications for AI in consolidating a network predominantly geared towards business verticals. In this context, 3GPP has specified and introduced the Network Data Analytics Function (NWDAF) entity at the network's core to provide insights based on AI algorithms to benefit network orchestration. This paper proposes a framework for evolving NWDAF that presents the interfaces necessary to further empower the core network with AI capabilities in B5G and 6G. In addition, we identify a set of research directions for realizing a distributed e-NWDAF.


Extending Graph Condensation to Multi-Label Datasets: A Benchmark Study

arXiv.org Artificial Intelligence

As graph data grows increasingly complex, training graph neural networks (GNNs) on large-scale datasets presents significant challenges, including computational resource constraints, data redundancy, and transmission inefficiencies. While existing graph condensation techniques have shown promise in addressing these issues, they are predominantly designed for single-label datasets, where each node is associated with a single class label. However, many real-world applications, such as social network analysis and bioinformatics, involve multi-label graph datasets, where one node can have several related labels. To address this problem, we extend traditional graph condensation approaches to accommodate multi-label datasets by modifying the synthetic dataset initialization and the condensation optimization. Through experiments on eight real-world multi-label graph datasets, we demonstrate the effectiveness of our method. In our experiments, the GCond framework, combined with K-Center initialization and binary cross-entropy loss (BCELoss), generally achieves the best performance. This benchmark for multi-label graph condensation not only enhances the scalability and efficiency of GNNs for multi-label graph data, but also offers substantial benefits for diverse real-world applications.
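
To illustrate two of the ingredients named above, the sketch below combines greedy K-Center selection for initializing synthetic nodes with a BCE-with-logits objective over multi-hot labels; the full GCond-style gradient-matching loop and graph structure learning are omitted, and the toy data and linear classifier are illustrative assumptions rather than the benchmark's setup.

```python
# A minimal sketch of two ingredients highlighted for multi-label graph
# condensation: K-Center selection to initialize synthetic nodes, and a binary
# cross-entropy objective over multi-hot labels. The GCond gradient-matching
# loop and structure learning are omitted; shapes and the classifier are toys.
import torch

def k_center_init(features: torch.Tensor, k: int) -> torch.Tensor:
    """Greedy K-Center (farthest-point) selection over node features;
    returns the indices of the k chosen nodes."""
    chosen = [0]
    min_dist = torch.cdist(features, features[chosen]).squeeze(1)
    for _ in range(k - 1):
        nxt = int(torch.argmax(min_dist))
        chosen.append(nxt)
        new_dist = torch.cdist(features, features[nxt:nxt + 1]).squeeze(1)
        min_dist = torch.minimum(min_dist, new_dist)
    return torch.tensor(chosen)

# Toy data: 200 nodes, 16-dim features, 5 possible labels per node (multi-hot).
torch.manual_seed(0)
X = torch.randn(200, 16)
Y = (torch.rand(200, 5) > 0.7).float()

# Initialize a condensed set of 20 synthetic nodes from the K-Center picks.
idx = k_center_init(X, k=20)
X_syn = X[idx].clone().requires_grad_(True)   # learnable synthetic features
Y_syn = Y[idx]

# One loss evaluation of a toy linear classifier on the synthetic set,
# using BCE-with-logits as the multi-label objective.
W = torch.zeros(16, 5, requires_grad=True)
loss = torch.nn.BCEWithLogitsLoss()(X_syn @ W, Y_syn)
loss.backward()
print("multi-label BCE loss on synthetic nodes:", float(loss))
```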