
Collaborating Authors

 Park, Soohyun


Handover Protocol Learning for LEO Satellite Networks: Access Delay and Collision Minimization

arXiv.org Artificial Intelligence

This study presents a novel deep reinforcement learning (DRL)-based handover (HO) protocol, called DHO, specifically designed to address the persistent challenge of long propagation delays in low-Earth orbit (LEO) satellite networks' HO procedures. DHO skips the Measurement Report (MR) in the HO procedure by leveraging its predictive capabilities after being trained with a pre-determined LEO satellite orbital pattern. This simplification eliminates the propagation delay incurred during the MR phase, while still providing effective HO decisions. The proposed DHO outperforms the legacy HO protocol across diverse network conditions in terms of access delay, collision rate, and handover success rate, demonstrating the practical applicability of DHO in real-world networks. Furthermore, the study examines the trade-off between access delay and collision rate and also evaluates the training performance and convergence of DHO using various DRL algorithms.
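
The core idea of DHO, that a pre-determined and periodic orbital pattern lets the agent map the current point in the pattern directly to a handover target without a Measurement Report, can be illustrated with a toy learning loop. The environment dynamics, reward weights, and problem sizes below are hypothetical assumptions for illustration only, not the paper's formulation.

```python
# Minimal sketch of the DHO idea (not the authors' implementation): the LEO
# orbital pattern is periodic and known, so an agent can learn a direct
# mapping from the current pattern slot to a handover target, skipping the
# Measurement Report. All numbers below are assumed toy values.
import numpy as np

rng = np.random.default_rng(0)
N_SLOTS, N_SATS = 24, 4                                     # pattern length, candidate satellites
ELEVATION = rng.uniform(10, 90, size=(N_SLOTS, N_SATS))     # toy orbital pattern known in advance
LOAD = rng.uniform(0.1, 0.9, size=(N_SLOTS, N_SATS))        # toy per-satellite load

def step(slot, sat):
    """Toy reward: low elevation -> longer access delay; high load -> more collisions."""
    delay = 1.0 / (ELEVATION[slot, sat] / 90.0)             # access-delay proxy
    collision = rng.random() < LOAD[slot, sat]              # random-access collision proxy
    return -delay - 5.0 * collision                         # 5.0 is an assumed trade-off weight

Q = np.zeros((N_SLOTS, N_SATS))
alpha, eps = 0.1, 0.1
for episode in range(2000):
    for slot in range(N_SLOTS):
        a = rng.integers(N_SATS) if rng.random() < eps else int(Q[slot].argmax())
        Q[slot, a] += alpha * (step(slot, a) - Q[slot, a])  # contextual-bandit style update

print("learned HO target per pattern slot:", Q.argmax(axis=1))
```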


Realizing Stabilized Landing for Computation-Limited Reusable Rockets: A Quantum Reinforcement Learning Approach

arXiv.org Artificial Intelligence

The advent of reusable rockets has heralded a new era in space exploration, reducing the costs of launching satellites by a significant factor. Traditional rockets were disposable, but the design of reusable rockets for repeated use has revolutionized the financial dynamics of space missions. The most critical phase of a reusable rocket is the landing stage, which involves managing tremendous speed and attitude for safe recovery. The complexity of this task presents new challenges for control systems, specifically in terms of precision and adaptability. Classical control systems such as the proportional-integral-derivative (PID) controller lack the flexibility to adapt to dynamic system changes, making controller redesign costly and time-consuming. This paper explores the integration of quantum reinforcement learning into the control systems of reusable rockets as a promising alternative. Unlike classical reinforcement learning, quantum reinforcement learning uses quantum bits that can exist in superposition, allowing for more efficient information encoding and reducing the number of parameters required. This leads to increased computational efficiency, reduced memory requirements, and more stable and predictable performance. Because reusable rockets must remain lightweight, they cannot carry heavy onboard computers. In this scenario, quantum reinforcement learning, with its reduced memory requirements due to fewer parameters, is a good fit.
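
The parameter savings the abstract refers to come from using a shallow parameterized quantum circuit as the policy. The sketch below shows such a circuit with angle-encoded state features; the circuit depth, encoding, and action mapping are assumptions for illustration, not the paper's exact architecture.

```python
# A minimal sketch of a variational quantum policy of the kind quantum RL uses:
# rocket state features are angle-encoded onto a few qubits and a shallow
# parameterized circuit produces per-qubit expectation values that a wrapper
# would map to thrust/gimbal actions. Sizes and encoding are assumed.
import pennylane as qml
import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def policy_circuit(state, weights):
    # Angle-encode the normalized rocket state (e.g., altitude, velocity, pitch, angular rate).
    for i in range(n_qubits):
        qml.RY(np.pi * state[i], wires=i)
    # Shallow variational layers: single-qubit rotations plus an entangling CNOT ring.
    for layer in range(n_layers):
        for i in range(n_qubits):
            qml.RY(weights[layer, i], wires=i)
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = 0.01 * np.random.randn(n_layers, n_qubits)   # only n_layers * n_qubits parameters
state = np.array([0.8, -0.3, 0.1, 0.05])               # hypothetical normalized state
print(policy_circuit(state, weights))
```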


Investigation of factors regarding the effects of COVID-19 pandemic on college students' depression by quantum annealer

arXiv.org Artificial Intelligence

Previous studies have reported diverse findings on the impact of the COVID-19 pandemic, and its related factors, on mental health. College students have frequently been selected as the target population in previous studies because they are easily affected by pandemics. In this study, multivariable datasets were collected from 751 college students based on the complex relationships between various mental health factors. We utilized quantum annealing (QA)-based feature selection algorithms, executed on commercial D-Wave quantum computers, to determine the changes in the relative importance of the associated factors before and after the pandemic. Multivariable linear regression (MLR) and XGBoost models were also applied to validate the QA-based algorithms. Based on the experimental results, we confirm that QA-based algorithms have factor analysis capabilities comparable to the MLR models widely used in previous studies. Furthermore, the performance of the QA-based algorithms was validated through the important factors they identified. Pandemic-related factors (e.g., confidence in the social system) and psychological factors (e.g., decision-making in uncertain situations) were more important in post-pandemic conditions. We believe that our study will serve as a reference for researchers studying similar topics.
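
QA-based feature selection is typically posed as a QUBO: selecting a factor is rewarded by its relevance to the outcome and penalized by its redundancy with other selected factors. The sketch below uses correlation-based proxies and brute-forces a toy-sized problem; the relevance and redundancy terms, the penalty weight, and the synthetic data are assumptions, and in the study the QUBO would instead be sampled on D-Wave hardware.

```python
# A minimal sketch of QUBO-based feature selection (toy-sized, solved by
# exhaustive search rather than on a quantum annealer). All quantities are
# illustrative assumptions, not the paper's formulation.
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                       # toy survey responses (6 candidate factors)
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)  # toy depression score

relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
redundancy = np.abs(np.corrcoef(X, rowvar=False))

alpha = 0.5                                         # assumed redundancy penalty weight
Q = alpha * redundancy
np.fill_diagonal(Q, -relevance)                     # QUBO: minimize x^T Q x over x in {0,1}^n

best_x, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=X.shape[1]):
    x = np.array(bits)
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = x, e
print("selected factors:", np.flatnonzero(best_x))
```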


Quantum Multi-Agent Reinforcement Learning for Autonomous Mobility Cooperation

arXiv.org Artificial Intelligence

For the Industry 4.0 revolution, cooperative autonomous mobility systems are widely used based on multi-agent reinforcement learning (MARL). However, MARL-based algorithms suffer from large parameter counts and convergence difficulties as the number of agents grows. To tackle these problems, a quantum MARL (QMARL) algorithm based on the actor-critic concept is proposed, which is beneficial in terms of scalability and suited to the limitations of the noisy intermediate-scale quantum (NISQ) era. Additionally, our QMARL is also beneficial in terms of efficient parameter utilization and fast convergence due to quantum supremacy. Note that the reward in our QMARL is defined as task precision over computation time across multiple agents; thus, multi-agent cooperation can be realized. For further improvement, an additional technique for scalability is proposed, called projection value measure (PVM). Based on PVM, our proposed QMARL can achieve the highest reward by reducing the action dimension to a logarithmic scale. Finally, we conclude that our proposed QMARL with PVM outperforms the other algorithms in terms of efficient parameter utilization, fast convergence, and scalability.
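
The "logarithmic-scale action dimension" claim rests on the fact that measuring n qubits in the computational basis yields 2^n outcome probabilities, so 2^n discrete actions need only n qubits. The sketch below illustrates this counting argument with a random statevector standing in for a trained circuit; that stand-in is an assumption for illustration only.

```python
# A minimal numpy sketch of reading an action distribution off an n-qubit
# measurement: 2^n actions from only n qubits. The random state below stands
# in for the output of a trained parameterized quantum circuit.
import numpy as np

rng = np.random.default_rng(2)
n_qubits = 3
dim = 2 ** n_qubits                                 # 8 actions from just 3 qubits

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)                          # normalized stand-in state |psi>

action_probs = np.abs(psi) ** 2                     # Born rule over computational basis states
action = rng.choice(dim, p=action_probs)            # sample a cooperative-mobility action
print("action distribution:", np.round(action_probs, 3), "sampled action:", action)
```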


Two Tales of Platoon Intelligence for Autonomous Mobility Control: Enabling Deep Learning Recipes

arXiv.org Artificial Intelligence

In the fast-paced world of technological advancements, autonomous mobility has emerged as a transformative innovation, dramatically reshaping numerous aspects of human life, such as transportation, logistics, and surveillance [1]. These complex systems depend on advanced algorithms, sensors, and communication networks to carry out their tasks smoothly and proficiently with their own objectives [2]. One crucial element that supports the successful functioning of these systems, particularly when operating as a coordinated group, is the efficient sharing of information among multiple mobility platforms.

When applied to autonomous mobility, RL can be used to derive optimal control strategies for maintaining safety, efficiency, and robustness in various traffic situations. Furthermore, single-agent RL is not suitable for controlling a platoon, because all agents operate identically when they are located in the same space and time with the same action-reward settings. Therefore, to realize cooperation and coordination among multiple agents, multi-agent RL (MARL) algorithms should be utilized [4]-[6]. Among various MARL algorithms, this paper considers the communication network (CommNet), which is widely and actively used in modern distributed computing. A sketch of one CommNet communication step is shown below.
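
The defining step of CommNet is that each agent updates its hidden state from its own state plus the mean of the other agents' hidden states. The layer sizes and single-step usage below are illustrative assumptions; the paper's full architecture is richer.

```python
# A minimal sketch of one CommNet-style communication step for a platoon:
# each agent combines its own hidden state with the mean of the others'.
import torch
import torch.nn as nn

class CommNetLayer(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.self_fc = nn.Linear(hidden_dim, hidden_dim)
        self.comm_fc = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (n_agents, hidden_dim); the communication vector for each agent
        # is the mean of all other agents' hidden states.
        n = h.shape[0]
        comm = (h.sum(dim=0, keepdim=True) - h) / max(n - 1, 1)
        return torch.tanh(self.self_fc(h) + self.comm_fc(comm))

layer = CommNetLayer(hidden_dim=16)
platoon_states = torch.randn(4, 16)     # 4 vehicles with hypothetical 16-d state encodings
print(layer(platoon_states).shape)      # torch.Size([4, 16])
```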


Cooperative Multi-Agent Deep Reinforcement Learning for Reliable and Energy-Efficient Mobile Access via Multi-UAV Control

arXiv.org Artificial Intelligence

This paper proposes a novel multi-agent deep reinforcement learning (MADRL)-based positioning algorithm for the collaboration of multiple unmanned aerial vehicles (UAVs) (i.e., UAVs working as mobile base stations). The primary objective of the proposed algorithm is to establish dependable mobile access networks for cellular vehicle-to-everything (C-V2X) communication, thereby facilitating the realization of high-quality intelligent transportation systems (ITS). Reliable mobile access services can be achieved in the following two ways: i) energy-efficient UAV operation and ii) reliable wireless communication services. For energy-efficient UAV operation, the reward of our proposed MADRL algorithm incorporates UAV energy consumption models in order to realize efficient operation. Furthermore, for reliable wireless communication services, the quality of service (QoS) requirements of individual users are considered as part of the reward, and 60 GHz mmWave radio is used for mobile access. This paper considers 60 GHz mmWave access to exploit the benefits of i) ultra-wide bandwidth for multi-Gbps high-speed communications and ii) highly directional communications for spatial reuse, which is advantageous for densely deployed users. Lastly, a comprehensive and data-intensive performance evaluation of the proposed MADRL-based multi-UAV positioning algorithm is conducted in this paper. The results of these evaluations demonstrate that the proposed algorithm outperforms other existing algorithms.
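
The abstract names two reward ingredients, an energy-consumption term and a per-user QoS term. The function below combines them in the simplest possible way; the energy model, QoS threshold, and weights are hypothetical placeholders, not the paper's formulation.

```python
# A minimal sketch of a multi-UAV reward mixing energy consumption and QoS
# satisfaction. All constants are assumed placeholders for illustration.
import numpy as np

def uav_reward(velocity_mps, hover_time_s, user_rates_gbps,
               qos_target_gbps=1.0, w_energy=0.01, w_qos=1.0):
    # Crude energy proxy: hovering cost plus a term growing with flight speed.
    energy = hover_time_s * (100.0 + 0.5 * velocity_mps ** 2)
    # QoS: fraction of users whose 60 GHz mmWave rate meets the assumed target.
    qos_satisfaction = np.mean(np.asarray(user_rates_gbps) >= qos_target_gbps)
    return w_qos * qos_satisfaction - w_energy * energy / 1000.0

print(uav_reward(velocity_mps=5.0, hover_time_s=1.0,
                 user_rates_gbps=[1.4, 0.8, 2.1, 1.1]))
```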


Multi-Agent Reinforcement Learning for Cooperative Air Transportation Services in City-Wide Autonomous Urban Air Mobility

arXiv.org Artificial Intelligence

The development of urban air mobility (UAM) is progressing rapidly, and demand for efficient transportation management systems is rising due to multifaceted environmental uncertainties. Thus, this paper proposes a novel air transportation service management algorithm based on multi-agent deep reinforcement learning (MADRL) to address the challenges of multi-UAM cooperation. Specifically, the proposed algorithm is based on the communication network (CommNet) method and utilizes centralized training and distributed execution (CTDE) across multiple UAMs to provide efficient air transportation services to passengers collaboratively. Furthermore, this paper adopts actual vertiport maps and UAM specifications for constructing realistic air transportation networks. Data-intensive simulations show that the proposed algorithm outperforms existing approaches in terms of air transportation service quality. Furthermore, no UAM is left with inferior performance, thanks to parameter sharing in CommNet and the centralized critic network in CTDE. Therefore, the results confirm that this research provides a promising solution for autonomous air transportation management systems in city-wide urban areas.
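
The CTDE arrangement the abstract relies on pairs one shared actor (parameter sharing across UAMs) with a centralized critic that only sees the joint observation-action during training. The sketch below shows that wiring in its barest form; the network sizes and the omission of CommNet message passing are simplifications assumed for illustration.

```python
# A minimal sketch of CTDE with parameter sharing: one actor executed per UAM,
# one centralized critic used only during training. Sizes are assumed.
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 8, 4

shared_actor = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                             nn.Linear(32, act_dim))           # same weights for every UAM
central_critic = nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64), nn.ReLU(),
                               nn.Linear(64, 1))               # training-time only

obs = torch.randn(n_agents, obs_dim)                           # decentralized observations
actions = torch.softmax(shared_actor(obs), dim=-1)             # decentralized execution
joint = torch.cat([obs.flatten(), actions.flatten()])          # centralized training input
print("centralized value estimate:", central_critic(joint).item())
```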


Quantum Multi-Agent Actor-Critic Networks for Cooperative Mobile Access in Multi-UAV Systems

arXiv.org Artificial Intelligence

This paper proposes a novel algorithm, named quantum multi-agent actor-critic networks (QMACN), for autonomously constructing a robust mobile access system employing multiple unmanned aerial vehicles (UAVs). In the context of facilitating collaboration among multiple UAVs, multi-agent reinforcement learning (MARL) techniques are regarded as a promising approach. These methods enable UAVs to learn collectively, optimizing their actions within a shared environment, ultimately leading to more efficient cooperative behavior. Furthermore, the principles of quantum computing (QC) are employed in our study to enhance the training process and inference capabilities of the UAVs involved. By leveraging the unique computational advantages of QC, our approach aims to boost the overall effectiveness of the UAV system. However, employing QC introduces scalability challenges due to the noisy intermediate-scale quantum (NISQ) limitations associated with qubit usage. The proposed algorithm addresses this issue by implementing a quantum centralized critic, effectively mitigating the constraints imposed by NISQ limitations. Additionally, the advantages of QMACN in terms of training speed and wireless service quality are verified via various data-intensive evaluations. Furthermore, this paper validates that a noise injection scheme can be used to handle environmental uncertainties in order to realize robust mobile access.
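
The noise injection scheme mentioned at the end amounts to perturbing the observations fed to the (quantum) actor during training so that the learned policy tolerates environmental uncertainty. The noise level and observation layout below are assumptions for illustration only.

```python
# A minimal sketch of training-time noise injection on UAV observations.
import numpy as np

rng = np.random.default_rng(3)

def noisy_observation(obs, sigma=0.05):
    """Perturb UAV observations (positions, channel quality, etc.) with Gaussian noise."""
    obs = np.asarray(obs, dtype=float)
    return obs + rng.normal(scale=sigma, size=obs.shape)

clean = np.array([0.2, 0.7, -0.1, 0.4])     # hypothetical normalized observation
print(noisy_observation(clean))
```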


Entropy-Aware Similarity for Balanced Clustering: A Case Study with Melanoma Detection

arXiv.org Artificial Intelligence

Clustering data is an unsupervised learning approach that aims to divide a set of data points into multiple groups. It is a crucial yet demanding subject in machine learning and data mining, and its successful applications span various fields. However, certain applications require conventional clustering techniques to also take the significance of balance into account. Therefore, this paper addresses the challenge of imbalanced clustering and presents a new method for balanced clustering that utilizes entropy-aware similarity, which can be defined as the degree of balance. We coin the term entropy-aware similarity for balanced clustering (EASB), which maximizes balance during clustering by complementarily clustering unbalanced data and incorporating entropy into a novel similarity formula that accounts for both angular differences and distances. The effectiveness of the proposed approach is evaluated on actual melanoma medical data, specifically the International Skin Imaging Collaboration (ISIC) 2019 and 2020 challenge datasets, to demonstrate how it can successfully cluster the data while preserving balance. Lastly, we confirm that the proposed method exhibits outstanding performance in detecting melanoma compared to classical methods.
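
The abstract names the ingredients of the similarity, angular difference, distance, and a cluster-balance entropy term, without giving the formula. The sketch below combines those ingredients in one plausible way; the specific weights and the use of Shannon entropy over cluster-size fractions are assumptions, not the EASB formula from the paper.

```python
# A minimal sketch of an entropy-aware assignment score mixing angular
# similarity, distance, and a balance-entropy bonus. Illustrative only.
import numpy as np

def angular_distance_similarity(x, c, w_angle=0.5, w_dist=0.5):
    cos = np.dot(x, c) / (np.linalg.norm(x) * np.linalg.norm(c) + 1e-12)
    dist = np.linalg.norm(x - c)
    return w_angle * cos - w_dist * dist

def balance_entropy(cluster_sizes):
    p = np.asarray(cluster_sizes, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))   # maximal when clusters are equally sized

x = np.array([1.0, 2.0])
centroids = [np.array([0.9, 2.1]), np.array([4.0, 0.5])]
sizes = [120, 40]                            # current (imbalanced) cluster sizes
scores = [angular_distance_similarity(x, c)
          + 0.1 * balance_entropy([s + (j == k) for j, s in enumerate(sizes)])
          for k, c in enumerate(centroids)]  # entropy if x joined cluster k
print("assign x to cluster", int(np.argmax(scores)))
```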


Software Simulation and Visualization of Quantum Multi-Drone Reinforcement Learning

arXiv.org Artificial Intelligence

Quantum machine learning (QML) has received considerable attention owing to its small number of trainable parameters and fast training, and advances in QML have led to active research on quantum multi-agent reinforcement learning (QMARL). Existing classical multi-agent reinforcement learning (MARL) suffers from non-stationarity and uncertainty. Therefore, this paper presents a simulation software framework for novel QMARL to control autonomous multi-drones, i.e., quantum multi-drone reinforcement learning. Our proposed framework accomplishes reasonable reward convergence and service quality performance with fewer trainable parameters. Furthermore, it shows more stable training results. Lastly, our proposed software allows us to analyze the training process and results.