
Collaborating Authors

Costa-Perez, Xavier


Energy-aware Joint Orchestration of 5G and Robots: Experimental Testbed and Field Validation

arXiv.org Artificial Intelligence

5G mobile networks introduce a new dimension for connecting and operating mobile robots in outdoor environments, leveraging cloud-native and offloading features of 5G networks to enable fully flexible and collaborative cloud robot operations. However, the limited battery life of robots remains a significant obstacle to their effective adoption in real-world exploration scenarios. This paper explores, via field experiments, the potential energy-saving gains of OROS, a joint orchestration of 5G and the Robot Operating System (ROS) that coordinates multiple 5G-connected robots in terms of both navigation and sensing, and optimizes their cloud-native service resource utilization while minimizing total resource and energy consumption on the robots based on real-time feedback. We designed, implemented and evaluated the proposed OROS in an experimental testbed composed of commercial off-the-shelf robots and a local 5G infrastructure deployed on a campus. The experimental results demonstrated that OROS significantly outperforms state-of-the-art approaches in terms of energy savings, both by offloading demanding computational tasks to the 5G edge infrastructure and by dynamically managing the energy of on-board sensors (e.g., switching them off when they are not needed). This strategy achieves approximately 15% energy savings on the robots, thereby extending battery life, which in turn allows for longer operating times and better resource utilization.
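To make the offload-and-sensor-management idea concrete, here is a minimal hypothetical sketch, not the OROS implementation: it compares the estimated energy of running a task on-board against offloading it over 5G, and lists sensors that can be powered off for the current task. All power figures, function names, and the decision rule are assumptions for illustration only.

```python
# Illustrative sketch (not the OROS implementation): decide whether to run a
# task on-board or offload it to the 5G edge, and power off sensors that the
# current task does not need. All numbers and names are hypothetical.

ONBOARD_POWER_W = 18.0      # assumed on-board compute power draw
RADIO_POWER_W = 4.0         # assumed extra radio power while offloading
SENSOR_POWER_W = {"lidar": 8.0, "camera": 3.0, "gps": 0.5}

def offload_saves_energy(task_runtime_s: float, uplink_time_s: float) -> bool:
    """Compare estimated energy of local execution vs. offloading to the edge."""
    local_energy = ONBOARD_POWER_W * task_runtime_s
    offload_energy = RADIO_POWER_W * uplink_time_s
    return offload_energy < local_energy

def sensors_to_disable(active_sensors: set, required: set) -> set:
    """Sensors that can be switched off because the current task ignores them."""
    return active_sensors - required

if __name__ == "__main__":
    # A mapping task that needs lidar and GPS but not the camera, cheap to offload.
    print(offload_saves_energy(task_runtime_s=2.0, uplink_time_s=0.5))          # True
    print(sensors_to_disable({"lidar", "camera", "gps"}, {"lidar", "gps"}))     # {'camera'}
```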


TelecomRAG: Taming Telecom Standards with Retrieval Augmented Generation and LLMs

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have immense potential to transform the telecommunications industry. They could help professionals understand complex standards, generate code, and accelerate development. However, traditional LLMs struggle with the precision and source verification essential for telecom work. To address this, specialized LLM-based solutions tailored to telecommunication standards are needed. Retrieval-augmented generation (RAG) offers a way to create precise, fact-based answers. This paper proposes TelecomRAG, a framework for a Telecommunication Standards Assistant that provides accurate, detailed, and verifiable responses. Our implementation, using a knowledge base built from 3GPP Release 16 and Release 18 specification documents, demonstrates how this assistant surpasses generic LLMs, offering superior accuracy, technical depth, and verifiability, and thus significant value to the telecommunications field.
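As a rough illustration of the retrieval-augmented workflow, not the TelecomRAG implementation, the sketch below ranks specification chunks by simple word overlap in place of a real embedding model and assembles a prompt that cites its sources. The specification identifiers and snippets are placeholders, not actual 3GPP text.

```python
# Minimal retrieval-augmented answering sketch in the spirit of a standards
# assistant. Real systems would use an embedding model and an LLM; here the
# retrieval is plain word overlap and the corpus entries are placeholders.

def score(query: str, chunk: str) -> float:
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / (len(q) or 1)

def retrieve(query: str, corpus: dict, k: int = 2) -> list:
    ranked = sorted(corpus.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list) -> str:
    context = "\n".join(f"[{ref}] {text}" for ref, text in passages)
    return f"Answer using only the cited passages.\n{context}\n\nQuestion: {query}"

corpus = {
    "TS 38.300 (placeholder)": "placeholder text about NR radio protocol architecture",
    "TS 23.501 (placeholder)": "placeholder text about 5G system architecture and network functions",
}
print(build_prompt("What defines the 5G system architecture?",
                   retrieve("5G system architecture", corpus)))
```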


Are you a robot? Detecting Autonomous Vehicles from Behavior Analysis

arXiv.org Artificial Intelligence

The tremendous hype around autonomous driving is eagerly calling for emerging and novel technologies to support advanced mobility use cases. As car manufacturers keep developing SAE level 3+ systems to improve the safety and comfort of passengers, traffic authorities need to establish new procedures to manage the transition from human-driven to fully-autonomous vehicles while providing a feedback-loop mechanism to fine-tune envisioned autonomous systems. Thus, a way to automatically profile autonomous vehicles and differentiate them from human-driven ones is a must. In this paper, we present a fully-fledged framework that monitors active vehicles using camera images and state information in order to determine whether vehicles are autonomous, without requiring any active notification from the vehicles themselves. Essentially, it builds on the cooperation among vehicles, which share the data they acquire on the road to feed a machine learning model that identifies autonomous cars. We extensively tested our solution and created the NexusStreet dataset by means of the CARLA simulator, employing an autonomous driving control agent and a steering wheel maneuvered by licensed drivers. Experiments show it is possible to discriminate the two behaviors by analyzing video clips with an accuracy of 80%, which improves up to 93% when the target state information is available. Lastly, we deliberately degraded the state information to observe how the framework performs under non-ideal data collection conditions.
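A minimal sketch of the fusion idea follows, assuming synthetic features in place of the paper's video and state pipelines: one classifier sees only video-derived features, another sees video plus state features, mirroring in spirit the comparison between video-only (80%) and video-plus-state (93%) accuracy. The feature names, dimensions, and data are invented.

```python
# Illustrative sketch of video/state feature fusion for classifying a vehicle
# as autonomous or human-driven. Features are synthetic; the paper's models
# and the NexusStreet dataset are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
video_feats = rng.normal(size=(n, 16))      # e.g., embeddings of short video clips
state_feats = rng.normal(size=(n, 4))       # e.g., speed, steering, jerk statistics
labels = rng.integers(0, 2, size=n)         # 1 = autonomous, 0 = human-driven

video_only = LogisticRegression(max_iter=1000).fit(video_feats, labels)
fused = LogisticRegression(max_iter=1000).fit(np.hstack([video_feats, state_feats]), labels)

# With real data, the fused model is the configuration expected to score higher.
print(video_only.score(video_feats, labels),
      fused.score(np.hstack([video_feats, state_feats]), labels))
```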


Analytical Modelling of Raw Data for Flow-Guided In-body Nanoscale Localization

arXiv.org Artificial Intelligence

Advancements in nanotechnology and material science are paving the way toward nanoscale devices that combine sensing, computing, data and energy storage, and wireless communication. In precision medicine, these nanodevices show promise for disease diagnostics, treatment, and monitoring from within the patients' bloodstreams. Associating the location of a sensed biological event with the event itself, which is the main proposition of flow-guided in-body nanoscale localization, would be immensely beneficial from the perspective of precision medicine. The nanoscale nature of the nanodevices and the challenging environment that the bloodstream represents result in current flow-guided localization approaches being constrained in their communication and energy-related capabilities. These communication and energy constraints change the features of the raw data available for flow-guided localization, in turn affecting its performance. An analytical model of how imperfect communication and energy-constrained, intermittent operation of the nanodevices shape the raw data they produce would therefore be beneficial. Hence, we propose an analytical model of raw data for flow-guided localization, where the raw data is modeled as a function of the communication and energy-related capabilities of the nanodevice. We evaluate the model by comparing its output with that obtained from a simulator for objective evaluation of flow-guided localization, featuring a comparably higher level of realism. Our results across a number of scenarios and heterogeneous performance metrics indicate high similarity between the model-generated and simulator-generated raw datasets.
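The following toy model illustrates the kind of analytical relationship the paper argues for, with invented parameters rather than the paper's actual formulation: the raw data available to a flow-guided localization system shrinks with the nanodevice's energy duty cycle and its communication success probability.

```python
# Toy analytical model: the raw data seen by a flow-guided localization system
# is shaped by two nanodevice constraints, intermittent operation (energy duty
# cycle) and unreliable communication. Parameter values are hypothetical.

def report_probability(p_awake: float, p_comm: float) -> float:
    """Probability that a single pass through the sensed region yields a report."""
    return p_awake * p_comm

def expected_reports(passes: int, p_awake: float, p_comm: float) -> float:
    """Expected number of raw-data reports over a number of circulation passes."""
    return passes * report_probability(p_awake, p_comm)

if __name__ == "__main__":
    # 30 passes through the region, 20% duty cycle, 80% communication success.
    print(expected_reports(passes=30, p_awake=0.2, p_comm=0.8))   # 4.8 expected reports
```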


Risk-Aware Continuous Control with Neural Contextual Bandits

arXiv.org Machine Learning

Recent advances in learning techniques have garnered attention for their applicability to a diverse range of real-world sequential decision-making problems. Yet, many practical applications have critical constraints for operation in real environments. Most learning solutions neglect the risk of failing to meet these constraints, hindering their implementation in real-world contexts. In this paper, we propose a risk-aware decision-making framework for contextual bandit problems, accommodating constraints and continuous action spaces. Our approach employs an actor multi-critic architecture, with each critic characterizing the distribution of performance and constraint metrics. Our framework is designed to cater to various risk levels, effectively balancing constraint satisfaction against performance. To demonstrate the effectiveness of our approach, we first compare it against state-of-the-art baseline methods in a synthetic environment, highlighting the impact of intrinsic environmental noise across different risk configurations. Finally, we evaluate our framework in a real-world use case involving a 5G mobile network, where only our approach consistently satisfies the system constraint (a signal processing reliability target) with a small performance toll (8.5% increase in power consumption).
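A simplified sketch of risk-aware action selection follows, with invented numbers and sampled values standing in for the distributional critics: among candidate actions, keep those whose constraint metric stays below the target with probability at least the chosen risk level, then pick the best-performing one. This only illustrates the selection rule, not the paper's actor multi-critic training.

```python
# Risk-aware action selection sketch: each candidate action has a sampled
# stand-in for a distributional critic of its constraint metric; we keep only
# actions whose alpha-quantile respects the target, then pick the action with
# the best performance estimate. The environment and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

def select_action(perf_mean, constraint_samples, threshold, alpha):
    """Highest-performance action whose constraint is met with probability >= alpha."""
    feasible = [
        a for a in range(len(perf_mean))
        if np.quantile(constraint_samples[a], alpha) <= threshold
    ]
    if not feasible:  # fall back to the least-violating action
        return int(np.argmin([np.quantile(s, alpha) for s in constraint_samples]))
    return max(feasible, key=lambda a: perf_mean[a])

perf_mean = np.array([1.0, 1.4, 2.0])                            # e.g., throughput estimates
constraint_samples = [rng.normal(mu, 0.1, 1000) for mu in (0.6, 0.8, 1.1)]
print(select_action(perf_mean, constraint_samples, threshold=1.0, alpha=0.95))  # -> 1
```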


LACO: A Latency-Driven Network Slicing Orchestration in Beyond-5G Networks

arXiv.org Machine Learning

Network Slicing is expected to become a game changer in the upcoming 5G networks and beyond, enlarging the telecom business ecosystem through still-unexplored vertical industry profits. This implies that heterogeneous service level agreements (SLAs) must be guaranteed per slice given the multitude of predefined requirements. In this paper, we pioneer a novel radio slicing orchestration solution that simultaneously provides latency and throughput guarantees in a multi-tenancy environment. Leveraging a solid mathematical framework, we exploit the exploration-vs-exploitation paradigm by means of a multi-armed-bandit-based (MAB) orchestrator, LACO, that makes adaptive resource slicing decisions with no prior knowledge of the traffic demand or channel quality statistics. As opposed to traditional MAB methods that are blind to the underlying system, LACO relies on system structure information to expedite decisions. After a preliminary simulation campaign empirically proving the validity of our solution, we provide a robust implementation of LACO using off-the-shelf equipment to fully emulate realistic network conditions: near-optimal results within affordable computational time are measured when LACO is in place.

The quest for new sources of revenue to revitalize the mobile industry has spawned unprecedented hype around the fifth generation of mobile networks (5G) and, in particular, the network slicing concept. A high-level view of the system considered in this paper is described in Figure 1. The figure represents a series of sliceable base stations as a pool of radio resources (coloured cubes in the figure). The resource allocation process is considered hierarchical: while bundles of radio resources are assigned to different tenants (namely radio slices), each tenant autonomously schedules its bundle of radio resources across its individual users following classic radio scheduling policies. The difference between these operations is subtle but of paramount importance: a slice controller operates at a larger timescale and thus over a coarser granularity [2], [3]. While most prior work on network slicing focuses on average bit-rate guarantees [3], [4], latency considerations have received little attention.
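As a rough illustration of the exploration-vs-exploitation loop, plain UCB without the system-structure information that LACO exploits, the sketch below repeatedly picks a resource bundle for a slice, observes whether the latency SLA was met, and discounts larger bundles by an invented resource cost. The environment model and all numbers are assumptions.

```python
# UCB-style bandit sketch for latency-driven slice resource allocation. The
# reward model below is an invented stand-in for a real network: larger
# bundles meet the latency target more often but incur a resource cost.
import math
import random

random.seed(0)
bundles = [10, 20, 30, 40]                 # candidate resource-block allocations
counts = [0] * len(bundles)
rewards = [0.0] * len(bundles)

def sla_met(bundle: int) -> float:
    """Toy environment: probability of meeting the latency SLA grows with the bundle."""
    return 1.0 if random.random() < min(1.0, bundle / 30) else 0.0

for t in range(1, 501):
    if 0 in counts:                        # play each arm once before using UCB
        arm = counts.index(0)
    else:                                  # upper-confidence-bound arm selection
        arm = max(range(len(bundles)),
                  key=lambda a: rewards[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    counts[arm] += 1
    rewards[arm] += sla_met(bundles[arm]) - 0.01 * bundles[arm]   # SLA reward minus cost

# Average net reward per bundle: the bandit converges toward the best trade-off.
print({b: round(r / c, 2) for b, r, c in zip(bundles, rewards, counts)})
```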