traffic intensity


Realistic Urban Traffic Generator using Decentralized Federated Learning for the SUMO simulator

Bazán-Guillén, Alberto, Beis-Penedo, Carlos, Cajaraville-Aboy, Diego, Barbecho-Bautista, Pablo, Díaz-Redondo, Rebeca P., Llopis, Luis J. de la Cruz, Fernández-Vilas, Ana, Igartua, Mónica Aguilar, Fernández-Veiga, Manuel

arXiv.org Artificial Intelligence

Realistic urban traffic simulation is essential for sustainable urban planning and the development of intelligent transportation systems. However, generating high-fidelity, time-varying traffic profiles that accurately reflect real-world conditions, especially in large-scale scenarios, remains a major challenge. Existing methods often suffer from limitations in accuracy, scalability, or raise privacy concerns due to centralized data processing. This work introduces DesRUTGe (Decentralized Realistic Urban Traffic Generator), a novel framework that integrates Deep Reinforcement Learning (DRL) agents with the SUMO simulator to generate realistic 24-hour traffic patterns. A key innovation of DesRUTGe is its use of Decentralized Federated Learning (DFL), wherein each traffic detector and its corresponding urban zone function as an independent learning node. These nodes train local DRL models using minimal historical data and collaboratively refine their performance by exchanging model parameters with selected peers (e.g., geographically adjacent zones), without requiring a central coordinator. Evaluated using real-world data from the city of Barcelona, DesRUTGe outperforms standard SUMO-based tools such as RouteSampler, as well as other centralized learning approaches, by delivering more accurate and privacy-preserving traffic pattern generation.
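To make the decentralized exchange concrete, the following minimal Python sketch shows zone nodes that train locally and then average parameters only with geographically adjacent peers, without a central coordinator. The node class, the flat parameter vector, and the plain neighbourhood-averaging rule are illustrative assumptions, not the DesRUTGe implementation.

```python
# Minimal sketch of decentralized federated learning between urban zones.
# Node structure, neighbour selection, and the averaging rule are assumptions.
import numpy as np

class ZoneNode:
    """One traffic detector / urban zone acting as an independent DFL node."""
    def __init__(self, zone_id, param_dim, neighbours):
        self.zone_id = zone_id
        self.params = np.zeros(param_dim)   # local DRL policy parameters
        self.neighbours = neighbours        # ids of geographically adjacent zones

    def local_update(self, gradient, lr=1e-3):
        # One local DRL training step on the zone's own historical data.
        self.params -= lr * gradient

    def aggregate(self, peer_params):
        # Average with selected peers only; no central server involved.
        stacked = np.vstack([self.params] + list(peer_params))
        self.params = stacked.mean(axis=0)

def dfl_round(nodes):
    # Each node exchanges parameters with its neighbours, then averages.
    snapshot = {n.zone_id: n.params.copy() for n in nodes}
    for n in nodes:
        n.aggregate(snapshot[p] for p in n.neighbours if p in snapshot)
```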


HEAT: History-Enhanced Dual-phase Actor-Critic Algorithm with a Shared Transformer

Yang, Hong

arXiv.org Artificial Intelligence

Although the LoRaWAN network can support a larger node scale than a private LoRa network, as the number of devices increases, the LoRaWAN network faces significant challenges in terms of network congestion and energy consumption. Limited spectrum resources and channel congestion lead to a decrease in the communication efficiency of the network, which in turn affects the reliability of data transmission. How to achieve efficient and energy-saving resource allocation while ensuring network performance remains a key issue. In order to improve the overall performance of the LoRaWAN network, optimizing transmission strategy parameters such as the spreading factor, transmit power, and receive window of the uplink and downlink is considered an effective means. By reasonably configuring these parameters, network conflicts can be reduced, signal attenuation can be mitigated, and signal coverage can be increased, thereby improving network reliability and communication quality. However, most existing optimization methods focus on adjusting the spreading factor and transmit power of the uplink, and rarely consider the impact of the downlink on network performance. To address this problem, this chapter proposes a History-Enhanced two-phase Actor-Critic algorithm with a shared Transformer (HEAT), which aims to improve the resource allocation strategy of the LoRaWAN network and thus the overall performance of the network. This chapter conducts multiple sets of comparative experiments between HEAT and various popular methods under different device densities and traffic intensities to verify the effectiveness of HEAT.

2 System Model and Problem Representation. In order to efficiently verify the effectiveness of various LoRaWAN resource allocation strategies, this section describes and models the LoRa link behavior and the LoRaWAN standard in detail. Subsequently, this section proposes the target problem of LoRaWAN resource allocation and expresses it as a Markov decision process.
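As a rough illustration of casting transmission-parameter selection as a Markov decision process with an actor-critic policy, the sketch below picks a spreading factor and transmit power from a discrete action grid. The flat state vector, layer sizes, and the simple MLP stand-in for the shared Transformer are assumptions for illustration, not the HEAT architecture.

```python
# Illustrative actor-critic over a discrete (spreading factor, transmit power) grid.
import torch
import torch.nn as nn

SFS = [7, 8, 9, 10, 11, 12]      # candidate spreading factors
TX_POWERS = [2, 5, 8, 11, 14]    # candidate transmit powers (dBm)
N_ACTIONS = len(SFS) * len(TX_POWERS)

class ActorCritic(nn.Module):
    def __init__(self, state_dim, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, N_ACTIONS)   # policy over (SF, power) pairs
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, state):
        h = self.shared(state)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

def select_parameters(model, state):
    dist, value = model(state)
    a = dist.sample().item()
    return SFS[a // len(TX_POWERS)], TX_POWERS[a % len(TX_POWERS)], value

# Example: model = ActorCritic(state_dim=16); sf, tx, v = select_parameters(model, torch.randn(16))
```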


Optimizing Multi-Gateway LoRaWAN via Cloud-Edge Collaboration and Knowledge Distillation

Yang, Hong

arXiv.org Artificial Intelligence

For large-scale multi-gateway LoRaWAN networks, this study proposes a cloud-edge collaborative resource allocation and decision-making method based on edge intelligence, HEAT-LDL (HEAT-Local Distill Lyapunov), which realizes collaborative decision-making between gateways and terminal nodes. HEAT-LDL combines the Actor-Critic architecture and the Lyapunov optimization method to achieve intelligent downlink control and gateway load balancing. When the signal quality is good, the network server uses the HEAT algorithm to schedule the terminal nodes. To improve the efficiency of autonomous decision-making at terminal nodes, HEAT-LDL distills the HEAT teacher model into a student model on the terminal node side via cloud-edge knowledge distillation. When the downlink decision instruction is lost, the terminal node uses the student model and an edge decider based on prior knowledge and local history to make collaborative autonomous decisions. Simulation experiments show that, compared with the best results of all baseline algorithms, HEAT-LDL improves the packet success rate and energy efficiency by 20.5% and 88.1%, respectively.
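A common way to implement the cloud-edge distillation step is to train the compact student policy to mimic the teacher's softened action distribution; the temperature value and the KL-based loss below are standard assumptions for illustration, not necessarily the exact objective used by HEAT-LDL.

```python
# Sketch of a knowledge-distillation loss matching a student policy to a teacher policy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both action distributions, then push the student towards the teacher.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Example: loss = distillation_loss(student(x), teacher(x).detach())
```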


Data-driven Modality Fusion: An AI-enabled Framework for Large-Scale Sensor Network Management

Dutta, Hrishikesh, Minerva, Roberto, Alvi, Maira, Crespi, Noel

arXiv.org Artificial Intelligence

The development and operation of smart cities rely heavily on large-scale Internet-of-Things (IoT) networks and sensor infrastructures that continuously monitor various aspects of urban environments. These networks generate vast amounts of data, posing challenges related to bandwidth usage, energy consumption, and system scalability. This paper introduces a novel sensing paradigm called Data-driven Modality Fusion (DMF), designed to enhance the efficiency of smart city IoT network management. By leveraging correlations between time-series data from different sensing modalities, the proposed DMF approach reduces the number of physical sensors required for monitoring, thereby minimizing energy expenditure, communication bandwidth, and overall deployment costs. The framework relocates computational complexity from the edge devices to the core, ensuring that resource-constrained IoT devices are not burdened with intensive processing tasks. DMF is validated using data from a real-world IoT deployment in Madrid, demonstrating the effectiveness of the proposed system in accurately estimating traffic, environmental, and pollution metrics from a reduced set of sensors. The proposed solution offers a scalable, efficient mechanism for managing urban IoT networks, while addressing issues of sensor failure and privacy concerns.
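The core idea, estimating a removed modality from correlated time-series of the sensors that remain deployed, can be sketched as a "virtual sensor" regression running at the core. The synthetic data, lag length, and choice of regressor below are assumptions for illustration only.

```python
# Sketch of a virtual sensor: predict one modality from lagged windows of the others.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def build_windows(signals, target, lag=4):
    # signals: (T, n_kept) readings of the sensors kept in the field.
    # target:  (T,) readings of the sensor to be replaced by estimation.
    X = np.hstack([signals[i:len(signals) - lag + i] for i in range(lag)])
    y = target[lag:]
    return X, y

rng = np.random.default_rng(0)
kept = rng.normal(size=(500, 3))                          # e.g. traffic, temperature, humidity
removed = kept @ np.array([0.5, -0.2, 0.3]) + rng.normal(scale=0.1, size=500)

X, y = build_windows(kept, removed, lag=4)
model = RandomForestRegressor(n_estimators=100).fit(X[:400], y[:400])
estimate = model.predict(X[400:])                         # virtual sensor output at the core
```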


Vision-Based Incoming Traffic Estimator Using Deep Neural Network on General Purpose Embedded Hardware

Zoysa, K. G., Munasinghe, S. R.

arXiv.org Artificial Intelligence

Traffic management is a serious problem in many cities around the world. Even the suburban areas are now experiencing regular traffic congestion. Inappropriate traffic control wastes fuel, time, and the productivity of nations. Though traffic signals are used to improve traffic flow, they often cause problems due to inappropriate or obsolete timing that does not tally with the actual traffic intensity at the intersection. Traffic intensity determination based on statistical methods only gives the average intensity expected at any given time. However, to control traffic accurately, it is required to know the real-time traffic intensity. In this research, image processing and machine learning have been used to estimate actual traffic intensity in real time. General-purpose electronic hardware has been used for in-situ image processing based on the edge-detection method. A deep neural network (DNN) was trained to infer traffic intensity in each image in real time. The trained DNN estimated traffic intensity accurately in 90% of the real-time images during road tests. The electronic system was implemented on a Raspberry Pi single-board computer; hence, it is cost-effective for large-scale deployment.
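The pipeline described, edge-detection preprocessing followed by a small DNN that classifies traffic intensity, could look roughly like the sketch below. Image size, Canny thresholds, the number of intensity classes, and the network layout are illustrative assumptions, not the paper's exact design.

```python
# Rough sketch: edge-detected frames fed to a small CNN classifier of traffic intensity.
import cv2
import numpy as np
import tensorflow as tf

def preprocess(frame, size=(128, 128)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)            # in-situ edge detection
    return cv2.resize(edges, size)[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. low / medium / high / jam
])

# After training: intensity_class = model.predict(preprocess(frame)[None, ...]).argmax()
```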


Cost-Efficient Deployment of a Reliable Multi-UAV Unmanned Aerial System

Babu, Nithin, Popovski, Petar, Papadias, Constantinos B.

arXiv.org Artificial Intelligence

In this work, we study the trade-off between the reliability and the investment cost of an unmanned aerial system (UAS) consisting of a set of unmanned aerial vehicles (UAVs) carrying radio access nodes, called portable access points (PAPs), deployed to serve a set of ground nodes (GNs). Using the proposed algorithm, a given geographical region is equivalently represented as a set of circular regions, where each circle represents the coverage region of a PAP. Then, the steady-state availability of the UAS is analytically derived by modelling it as a continuous-time birth-death Markov process. Numerical evaluations show that the investment cost to guarantee a given steady-state availability to a set of GNs can be reduced by considering the traffic demand and distribution of GNs.
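For a birth-death availability model of this kind, the steady-state distribution follows from the detailed balance equations. The small sketch below computes it numerically; the failure/repair rates and the "at least two PAPs operational" availability criterion are assumptions for illustration, not the paper's model.

```python
# Steady-state distribution of a birth-death chain over the number of operational PAPs.
import numpy as np

def steady_state(birth, death):
    # birth[i]: rate from state i to i+1; death[i]: rate from state i+1 to i.
    ratios = np.cumprod(np.array(birth) / np.array(death))
    pi = np.concatenate(([1.0], ratios))
    return pi / pi.sum()

n, repair_rate, failure_rate = 4, 0.5, 0.05
# State i = number of operational PAPs; one repair at a time, failures scale with fleet size.
birth = [repair_rate] * n
death = [(i + 1) * failure_rate for i in range(n)]
pi = steady_state(birth, death)
availability = pi[2:].sum()   # probability that at least two PAPs are operational
```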


ChronosPerseus: Randomized Point-based Value Iteration with Importance Sampling for POSMDPs

Kohar, Richard, Rivest, François, Gosselin, Alain

arXiv.org Artificial Intelligence

In reinforcement learning, agents have successfully used environments modeled as Markov decision processes (MDPs). However, in many problem domains, an agent may suffer from noisy observations or random times until its subsequent decision. While partially observable Markov decision processes (POMDPs) deal with noisy observations, they have yet to deal with the unknown time aspect. Of course, one could discretize time, but this leads to Bellman's curse of dimensionality. To incorporate continuous sojourn-time distributions in the agent's decision making, we propose that partially observable semi-Markov decision processes (POSMDPs) can be helpful in this regard. We extend the Perseus randomized point-based value iteration (PBVI) algorithm of Spaan and Vlassis (2005), originally developed for POMDPs, to POSMDPs by incorporating continuous sojourn-time distributions and using importance sampling to reduce the solver complexity. We call this new PBVI algorithm with importance sampling for POSMDPs ChronosPerseus. This further allows complex POMDPs that require temporal state information to be compressed by moving this information into the state sojourn time of a POSMDP. The second insight is that a set of sampled times, weighted by their likelihoods, can be reused in a single backup; this helps further reduce the algorithm's complexity. The solver also works on episodic and non-episodic problems. We conclude our paper with two examples, an episodic bus problem and a non-episodic maintenance problem.
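The importance-sampling ingredient can be illustrated numerically: times drawn once from a proposal distribution are reweighted so that expectations under the model's sojourn-time distribution can be estimated without resampling. The proposal, the sojourn-time distribution, and the placeholder integrand below are assumptions, not the paper's exact setup.

```python
# Importance-sampled estimate of an expectation over a continuous sojourn-time distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
proposal = stats.expon(scale=2.0)        # q(t): where the sample times come from
sojourn = stats.gamma(a=3.0, scale=1.0)  # p(t): the model's sojourn-time distribution

t = proposal.rvs(size=5000, random_state=rng)
w = sojourn.pdf(t) / proposal.pdf(t)     # importance weights p(t)/q(t)

gamma = 0.95
f = np.cos(t)                            # placeholder integrand (e.g. a backed-up value term)
estimate = np.mean(w * gamma ** t * f)   # approximates E_p[gamma**t * f(t)]
```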


RouteNet: Leveraging Graph Neural Networks for network modeling and optimization in SDN

Rusek, Krzysztof, Suárez-Varela, José, Almasan, Paul, Barlet-Ros, Pere, Cabellos-Aparicio, Albert

arXiv.org Artificial Intelligence

Network modeling is a key enabler to achieve efficient network operation in future self-driving Software-Defined Networks. However, we still lack functional network models able to produce accurate predictions of Key Performance Indicators (KPI) such as delay, jitter or loss at limited cost. In this paper we propose RouteNet, a novel network model based on Graph Neural Networks (GNN) that is able to understand the complex relationship between topology, routing and input traffic to produce accurate estimates of the per-source/destination per-packet delay distribution and loss. RouteNet leverages the ability of GNNs to learn and model graph-structured information and, as a result, our model is able to generalize over arbitrary topologies, routing schemes and traffic intensity. In our evaluation, we show that RouteNet accurately predicts the delay distribution (mean delay and jitter) and loss even for topologies, routing schemes and traffic unseen during training (worst case R² = 0.878). Also, we present several use-cases where we leverage the KPI predictions of our GNN model to achieve efficient routing optimization and network planning.
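The key structural idea, message passing between link states and path states, can be sketched in a heavily simplified form: paths read the states of the links they traverse, and links aggregate the states of the paths crossing them. The plain averaging updates below are placeholders for illustration; the actual model uses learned recurrent updates and a readout network.

```python
# Simplified link/path message-passing iteration in the spirit of RouteNet.
import numpy as np

def message_passing_iteration(link_state, paths):
    # link_state: (n_links, d) hidden states; paths: list of link-index lists.
    new_path = np.stack([link_state[p].mean(axis=0) for p in paths])
    new_link = np.zeros_like(link_state)
    counts = np.zeros(len(link_state))
    for i, p in enumerate(paths):
        for l in p:
            new_link[l] += new_path[i]
            counts[l] += 1
    new_link /= np.maximum(counts, 1)[:, None]
    return new_link, new_path

# Example: link_state = np.random.rand(5, 8); paths = [[0, 1], [1, 2, 3], [4]]
# After a few iterations, a readout would map path states to delay/jitter/loss estimates.
```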


Resource-Constrained Scheduling for Maritime Traffic Management

Agussurja, Lucas (Singapore Management University) | Kumar, Akshat (Singapore Management University) | Lau, Hoong Chuin (Singapore Management University)

AAAI Conferences

We address the problem of mitigating congestion and preventing hotspots in busy water areas such as the Singapore Straits and port waters. Increasing maritime traffic coupled with narrow waterways makes vessel schedule coordination for just-in-time arrival critical for navigational safety. Our contributions are: 1) We formulate the maritime traffic management problem based on the real case study of Singapore waters; 2) We model the problem as a variant of the resource-constrained project scheduling problem (RCPSP), and formulate mixed-integer and constraint programming (MIP/CP) formulations; 3) To improve scalability, we develop a combinatorial Benders (CB) approach that is significantly more effective than the standard MIP and CP formulations, together with symmetry-breaking constraints and optimality cuts that further enhance its effectiveness; 4) We develop a realistic maritime traffic simulator using electronic navigation charts of the Singapore Straits. Applied to synthetic problems and a real 55-day AIS dataset, our scheduling approach significantly reduces traffic density while incurring minimal delays.
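To give a flavour of the scheduling model, the toy sketch below schedules vessels through a single narrow waterway segment with a limited simultaneous capacity while minimising total deviation from preferred entry times. The data, the single-segment simplification, and the use of OR-Tools CP-SAT are illustrative assumptions; the paper's model is a full RCPSP variant solved with a combinatorial Benders approach.

```python
# Toy constraint-programming sketch of capacity-constrained vessel scheduling.
from ortools.sat.python import cp_model

preferred = [0, 2, 3, 5, 6]   # preferred entry times (hypothetical)
transit = 4                    # time each vessel occupies the segment
capacity = 2                   # max vessels in the segment at once
horizon = 40

model = cp_model.CpModel()
starts, delays, intervals = [], [], []
for i, p in enumerate(preferred):
    s = model.NewIntVar(p, horizon, f"start_{i}")          # no entry before preferred time
    itv = model.NewIntervalVar(s, transit, s + transit, f"itv_{i}")
    starts.append(s)
    delays.append(s - p)
    intervals.append(itv)

model.AddCumulative(intervals, [1] * len(preferred), capacity)
model.Minimize(sum(delays))                                 # total delay over all vessels

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(s) for s in starts])
```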