Park, Chanyoung
Predicting Density of States via Multi-modal Transformer
Lee, Namkyeong, Noh, Heewoong, Kim, Sungwon, Hyun, Dongmin, Na, Gyoung S., Park, Chanyoung
The density of states (DOS) is a spectral property of materials that provides fundamental insights into various characteristics of materials. In this paper, we propose DOSTransformer, a model that predicts the DOS by reflecting its nature: the DOS determines the general distribution of states as a function of energy. Specifically, we integrate the heterogeneous information obtained from the crystal structure and the energies via a multi-modal transformer, thereby modeling the complex relationships between the atoms in the crystal structure and various energy levels. Extensive experiments on two types of DOS, i.e., Phonon DOS and Electron DOS, under various real-world scenarios demonstrate the superiority of DOSTransformer. Despite the recent progress of machine learning (ML) in materials science, most ML models developed in the field have focused on single-valued material properties (Kong et al., 2022), e.g., band gap energy (Lee et al., 2016), formation energy (Ward et al., 2016), and Fermi energy (Xie & Grossman, 2018). On the other hand, spectral properties are ubiquitous in materials science, characterizing various properties of materials, e.g., X-ray absorption, dielectric function, and electronic density of states (Kong et al., 2022).
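As a rough illustration of the fusion step described above, the following sketch cross-attends learnable energy embeddings to atom embeddings so that each energy level queries the crystal structure. The module names, dimensions, and the single attention layer are illustrative assumptions, not DOSTransformer's actual architecture.

```python
# Hedged sketch: cross-attention between learnable energy embeddings and
# atom embeddings, in the spirit of the multi-modal fusion the abstract
# describes. All names and sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class DOSCrossAttention(nn.Module):
    def __init__(self, dim=64, n_energies=201, n_heads=4):
        super().__init__()
        # One learnable embedding per energy level on the DOS grid.
        self.energy_emb = nn.Embedding(n_energies, dim)
        # Energies attend to atoms: queries = energies, keys/values = atoms.
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, 1)  # one DOS value per energy level

    def forward(self, atom_emb):
        # atom_emb: (batch, n_atoms, dim) from any crystal-structure encoder.
        b = atom_emb.size(0)
        q = self.energy_emb.weight.unsqueeze(0).expand(b, -1, -1)
        fused, _ = self.cross_attn(q, atom_emb, atom_emb)
        return self.head(fused).squeeze(-1)  # (batch, n_energies)

atoms = torch.randn(2, 10, 64)           # 2 crystals, 10 atoms each
print(DOSCrossAttention()(atoms).shape)   # torch.Size([2, 201])
```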
Heterogeneous Graph Learning for Multi-modal Medical Data Analysis
Kim, Sein, Lee, Namkyeong, Lee, Junseok, Hyun, Dongmin, Park, Chanyoung
Routine clinical visits of a patient produce not only image data, but also non-image data containing clinical information regarding the patient, i.e., medical data is multi-modal in nature. Such heterogeneous modalities offer different and complementary perspectives on the same patient, resulting in more accurate clinical decisions when they are properly combined. However, despite its significance, how to effectively fuse the multi-modal medical data into a unified framework has received relatively little attention. In this paper, we propose an effective graph-based framework called HetMed (Heterogeneous Graph Learning for Multi-modal Medical Data Analysis) for fusing the multi-modal medical data. Specifically, we construct a multiplex network that incorporates multiple types of non-image features of patients to capture the complex relationship between patients in a systematic way, which leads to more accurate clinical decisions. Extensive experiments on various real-world datasets demonstrate the superiority and practicality of HetMed. The source code for HetMed is available at https://github.com/Sein-Kim/Multimodal-Medical.
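To make the multiplex construction concrete, here is a minimal sketch that builds one k-NN patient graph per non-image feature group, yielding the layers of a multiplex network. The feature-group names, similarity metric, and k are hypothetical, not HetMed's actual configuration.

```python
# Hedged sketch: one k-NN patient graph per non-image feature group;
# each graph becomes a layer of the multiplex patient network.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_layer(features, k=5):
    """Adjacency matrix linking each patient to its k most similar
    patients under this feature group."""
    nn_ = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn_.kneighbors(features)        # idx[:, 0] is the point itself
    n = features.shape[0]
    adj = np.zeros((n, n), dtype=np.float32)
    for i, neigh in enumerate(idx[:, 1:]):   # skip self
        adj[i, neigh] = 1.0
    return np.maximum(adj, adj.T)            # symmetrize

rng = np.random.default_rng(0)
groups = {"labs": rng.normal(size=(100, 12)),        # hypothetical groups
          "demographics": rng.normal(size=(100, 4))}
multiplex = {name: knn_layer(x) for name, x in groups.items()}
print({name: int(a.sum()) for name, a in multiplex.items()})
```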
Software Simulation and Visualization of Quantum Multi-Drone Reinforcement Learning
Park, Chanyoung, Kim, Jae Pyoung, Yun, Won Joon, Park, Soohyun, Jung, Soyi, Kim, Joongheon
Quantum machine learning (QML) has received a lot of attention owing to its small number of trainable parameters and fast training, and these advances in QML have led to active research on quantum multi-agent reinforcement learning (QMARL). Existing classical multi-agent reinforcement learning (MARL) suffers from non-stationarity and uncertainty. Therefore, this paper presents a simulation software framework for a novel QMARL approach to controlling autonomous multi-drones, i.e., quantum multi-drone reinforcement learning. Our proposed framework achieves reasonable reward convergence and service quality performance with fewer trainable parameters. Furthermore, it shows more stable training results. Lastly, our proposed software allows us to analyze the training process and results.
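The parameter-efficiency claim can be illustrated with a generic variational quantum circuit acting as a policy head. This uses standard PennyLane building blocks and is not the paper's simulator; the circuit shape and observation encoding are assumptions.

```python
# Hedged sketch: a tiny variational quantum circuit as a policy head,
# showing why QMARL agents can get by with very few trainable parameters.
import numpy as np
import pennylane as qml

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def policy_circuit(weights, obs):
    qml.AngleEmbedding(obs, wires=range(n_qubits))          # encode observation
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

shape = qml.BasicEntanglerLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.uniform(0, np.pi, size=shape)
print(policy_circuit(weights, np.array([0.1, 0.5, -0.3, 0.7])))
print("trainable parameters:", weights.size)   # only n_layers * n_qubits = 8
```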
Multi-Agent Deep Reinforcement Learning for Efficient Passenger Delivery in Urban Air Mobility
Park, Chanyoung, Park, Soohyun, Kim, Gyu Seon, Jung, Soyi, Kim, Jae-Hyun, Kim, Joongheon
It has been widely anticipated that urban air mobility (UAM), also known as drone-taxi or electric vertical takeoff and landing (eVTOL), will play a key role in future transportation. Putting UAM into practical use offers several benefits: (i) the total travel time of passengers can be reduced compared to traditional transportation, and (ii) there is no environmental pollution and no special labor cost to operate the system, because electric batteries are used in the UAM system. However, there are various dynamic and uncertain factors in the flight environment, e.g., sudden passenger service requests, battery discharge, and collisions among UAMs. Therefore, this paper proposes a novel cooperative multi-agent deep reinforcement learning (MADRL) algorithm based on the centralized training and distributed execution (CTDE) concept for reliable and efficient passenger delivery in UAM networks. The performance evaluation results confirm that the proposed algorithm outperforms existing algorithms, increasing the number of serviced passengers by 30% and decreasing the waiting time per serviced passenger by 26%.
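One way to picture the reward design such an environment might use is a per-step reward combining the factors the abstract names: serviced passengers, waiting time, battery discharge, and collisions. The weights and term structure below are purely illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: per-step reward for one UAM agent, shaping the trade-offs
# named in the abstract. All weights are hypothetical.
def uam_step_reward(served, total_wait, battery_level, collided,
                    w_serve=10.0, w_wait=0.1, w_batt=1.0, w_coll=50.0):
    """Reward one UAM agent for a single environment step."""
    reward = w_serve * served          # passengers delivered this step
    reward -= w_wait * total_wait      # accumulated waiting time of requests
    if battery_level < 0.2:            # discourage deep battery discharge
        reward -= w_batt * (0.2 - battery_level)
    if collided:                       # hard penalty for UAM collisions
        reward -= w_coll
    return reward

print(uam_step_reward(served=2, total_wait=8.0,
                      battery_level=0.15, collided=False))
```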
Coordinated Multi-Agent Reinforcement Learning for Unmanned Aerial Vehicle Swarms in Autonomous Mobile Access Applications
Park, Chanyoung, Lee, Haemin, Yun, Won Joon, Jung, Soyi, Kim, Joongheon
This paper proposes a novel centralized training and distributed execution (CTDE)-based multi-agent deep reinforcement learning (MADRL) method for controlling multiple unmanned aerial vehicles (UAVs) in autonomous mobile access applications. For this purpose, a single neural network is utilized in centralized training for cooperation among multiple agents while maximizing the total quality of service (QoS) in mobile access applications. In order to provide seamless network services in crowded, wild, or extreme areas, which is one of the potential scenarios in 6G networks, the use of UAVs is widely considered, where the UAVs are autonomously operated with deep learning algorithms [1]. In this paper, a MADRL algorithm is designed and evaluated for autonomous aerial mobile base-station (BS) network coordination that is good enough to realize the desired performance of multi-agent cooperation and coordination. In order to achieve our desired goal, one of the promising approaches is CTDE. To train the neural network, a cost function is required, and the function is designed to maximize the QoS in mobile access applications.
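A minimal sketch of the CTDE pattern described above: one shared centralized critic scores the joint observation of all UAVs during training, while each agent acts from its own local observation at execution time. The network sizes and action space are placeholders, not the paper's design.

```python
# Hedged sketch: centralized training with a shared critic, distributed
# execution with per-agent actors (CTDE). Sizes are placeholders.
import torch
import torch.nn as nn

class SharedCritic(nn.Module):
    """Centralized value network over the joint observation of all UAVs."""
    def __init__(self, n_agents, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_agents * obs_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, joint_obs):          # (batch, n_agents * obs_dim)
        return self.net(joint_obs)

class Actor(nn.Module):
    """Decentralized policy: acts on its own local observation only."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

n_agents, obs_dim, n_actions = 3, 8, 5
critic = SharedCritic(n_agents, obs_dim)
actors = [Actor(obs_dim, n_actions) for _ in range(n_agents)]
obs = torch.randn(n_agents, obs_dim)
actions = [a(o).sample() for a, o in zip(actors, obs)]  # distributed execution
value = critic(obs.flatten().unsqueeze(0))              # centralized training
print(actions, value.item())
```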
Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization
Hyun, Dongmin, Wang, Xiting, Park, Chanyoung, Xie, Xing, Yu, Hwanjo
Sentence summarization shortens given texts while maintaining their core contents. Unsupervised approaches have been studied to summarize texts without human-written summaries. However, recent unsupervised models are extractive: they remove words from texts and are thus less flexible than abstractive summarization. In this work, we devise an abstractive model based on reinforcement learning without ground-truth summaries. We formulate unsupervised summarization as a Markov decision process with rewards representing the summary quality. To further enhance the summary quality, we develop a multi-summary learning mechanism that generates multiple summaries with varying lengths for a given text, while making the summaries mutually enhance each other. Experimental results show that the proposed model substantially outperforms both abstractive and extractive models, while frequently generating new words not contained in the input texts.
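The policy-gradient formulation can be sketched as a REINFORCE loss in which several summaries of different lengths are sampled for one input and baseline each other, loosely echoing the multi-summary learning mechanism. The decoder and reward model are elided, and the tensors below are toy values.

```python
# Hedged sketch: REINFORCE update over multiple sampled summary lengths,
# with the mean reward across summaries acting as a baseline.
import torch

def reinforce_loss(log_probs_per_summary, rewards):
    """log_probs_per_summary: one summed token log-prob per sampled
    summary length; rewards: matching summary-quality scores."""
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    baseline = rewards.mean()              # summaries baseline each other
    loss = 0.0
    for lp, r in zip(log_probs_per_summary, rewards):
        loss = loss - (r - baseline) * lp  # policy-gradient term
    return loss / len(log_probs_per_summary)

# Toy usage: pretend we sampled 3 summaries of lengths 5, 8, 11.
log_probs = [torch.tensor(-4.2, requires_grad=True),
             torch.tensor(-6.9, requires_grad=True),
             torch.tensor(-9.1, requires_grad=True)]
loss = reinforce_loss(log_probs, [0.8, 0.6, 0.5])
loss.backward()
print(loss.item())
```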
LTE4G: Long-Tail Experts for Graph Neural Networks
Yun, Sukwon, Kim, Kibum, Yoon, Kanghoon, Park, Chanyoung
Existing Graph Neural Networks (GNNs) usually assume a balanced situation in which both the class distribution and the node degree distribution are balanced. However, in real-world situations, a few classes (i.e., head classes) often dominate the others (i.e., tail classes), and likewise a few high-degree nodes dominate from the node degree perspective; thus, naively applying existing GNNs falls short of generalizing to the tail cases. Although recent studies have proposed methods to handle long-tail situations on graphs, they focus on either the class long-tailedness or the degree long-tailedness alone. In this paper, we propose a novel framework for training GNNs, called Long-Tail Experts for Graphs (LTE4G), which jointly considers the class long-tailedness and the degree long-tailedness for node classification. The core idea is to assign an expert GNN model to each subset of nodes, where the subsets are split in a balanced manner considering both the class and degree long-tailedness. After having trained an expert for each balanced subset, we adopt knowledge distillation to obtain two class-wise students, i.e., a Head class student and a Tail class student, which are responsible for classifying nodes in the head classes and tail classes, respectively. We demonstrate that LTE4G outperforms a wide range of state-of-the-art methods in node classification on both manually and naturally imbalanced graphs. The source code of LTE4G can be found at https://github.com/SukwonYun/LTE4G.
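A minimal sketch of the balanced split and the distillation step, assuming a simple head/tail threshold on class membership and node degree; the thresholds and the temperature are illustrative, not LTE4G's actual values.

```python
# Hedged sketch: split nodes into four class/degree subsets (one expert
# each), plus a standard KL knowledge-distillation loss for the students.
import torch
import torch.nn.functional as F

def split_nodes(labels, degrees, head_classes, degree_thresh=5):
    """Four subsets an LTE4G-style scheme would assign experts to."""
    is_head_cls = torch.isin(labels, head_classes)
    is_head_deg = degrees >= degree_thresh
    return {"HH": (is_head_cls & is_head_deg).nonzero(as_tuple=True)[0],
            "HT": (is_head_cls & ~is_head_deg).nonzero(as_tuple=True)[0],
            "TH": (~is_head_cls & is_head_deg).nonzero(as_tuple=True)[0],
            "TT": (~is_head_cls & ~is_head_deg).nonzero(as_tuple=True)[0]}

def distill_loss(student_logits, expert_logits, T=2.0):
    """KL distillation from a frozen expert to a class-wise student."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(expert_logits / T, dim=-1),
                    reduction="batchmean") * T * T

labels = torch.tensor([0, 0, 1, 2, 2, 2])
degrees = torch.tensor([7, 2, 9, 1, 6, 3])
print(split_nodes(labels, degrees, head_classes=torch.tensor([2])))
print(distill_loss(torch.randn(4, 3), torch.randn(4, 3)))
```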
Augmentation-Free Self-Supervised Learning on Graphs
Lee, Namkyeong, Lee, Junseok, Park, Chanyoung
Inspired by the recent success of self-supervised methods applied to images, self-supervised learning on graph-structured data has seen rapid growth, especially centered on augmentation-based contrastive methods. However, we argue that without carefully designed augmentation techniques, augmentations on graphs may behave arbitrarily in that the underlying semantics of graphs can drastically change. As a consequence, the performance of existing augmentation-based methods is highly dependent on the choice of augmentation scheme, i.e., the hyperparameters associated with augmentations. In this paper, we propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL. Specifically, we generate an alternative view of a graph by discovering nodes that share local structural information and global semantics with the graph. Extensive experiments on various node-level tasks, i.e., node classification, clustering, and similarity search, on various real-world datasets demonstrate the superiority of AFGRL. The source code for AFGRL is available at https://github.com/Namkyeong/AFGRL.
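The augmentation-free view discovery can be sketched as intersecting each node's nearest neighbors in embedding space with nodes sharing its cluster (a stand-in for global semantics); k and the clustering are assumptions, not AFGRL's exact procedure.

```python
# Hedged sketch: positives without augmentation = kNN in embedding space
# intersected with same-cluster membership.
import torch

def discover_positives(emb, cluster_ids, k=4):
    """emb: (n, d) node embeddings; cluster_ids: (n,) cluster assignments."""
    z = torch.nn.functional.normalize(emb, dim=1)
    sim = z @ z.t()                                # cosine similarity
    knn = sim.topk(k + 1, dim=1).indices[:, 1:]    # drop self
    positives = []
    for i in range(emb.size(0)):
        same_cluster = knn[i][cluster_ids[knn[i]] == cluster_ids[i]]
        positives.append(same_cluster)             # kNN AND same cluster
    return positives

emb = torch.randn(8, 16)
clusters = torch.tensor([0, 0, 0, 1, 1, 1, 0, 1])
print(discover_positives(emb, clusters))
```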
Learnable Structural Semantic Readout for Graph Classification
Lee, Dongha, Kim, Su, Lee, Seonghyeon, Park, Chanyoung, Yu, Hwanjo
With the great success of deep learning in various domains, graph neural networks (GNNs) have also become a dominant approach to graph classification. With the help of a global readout operation that simply aggregates all node (or node-cluster) representations, existing GNN classifiers obtain a graph-level representation of an input graph and predict its class label from that representation. However, such global aggregation does not consider the structural information of each node, which results in a loss of information about the global structure. In particular, it limits discrimination power by enforcing the same weight parameters of the classifier on all node representations; in practice, each node representation contributes to the target classes differently depending on its structural semantics. In this work, we propose structural semantic readout (SSRead) to summarize node representations at the position level, which makes it possible to model position-specific weight parameters for classification as well as to effectively capture graph semantics relevant to the global structure. Given an input graph, SSRead aims to identify structurally meaningful positions by using the semantic alignment between its nodes and structural prototypes, which encode the prototypical features of each position. The structural prototypes are optimized to minimize the alignment cost over all training graphs, while the other GNN parameters are trained to predict the class labels. Our experimental results demonstrate that SSRead significantly improves the classification performance and interpretability of GNN classifiers while being compatible with a variety of aggregation functions, GNN architectures, and learning frameworks.
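A minimal sketch of the alignment idea: each node is assigned to its most similar learnable prototype (its position), and node representations are pooled per position into a position-level summary. The number of prototypes, the dot-product alignment, and mean pooling are assumptions, not SSRead's exact design.

```python
# Hedged sketch: align nodes to learnable structural prototypes, then
# read out one vector per position instead of one global aggregate.
import torch
import torch.nn as nn

class StructuralReadout(nn.Module):
    def __init__(self, dim=32, n_positions=4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_positions, dim))

    def forward(self, node_emb):
        # node_emb: (n_nodes, dim) for one graph
        align = node_emb @ self.prototypes.t()     # (n_nodes, n_positions)
        pos = align.argmax(dim=1)                  # hard position per node
        outs = []
        for p in range(self.prototypes.size(0)):
            mask = pos == p
            outs.append(node_emb[mask].mean(dim=0) if mask.any()
                        else torch.zeros(node_emb.size(1)))
        return torch.cat(outs)                     # position-level summary

print(StructuralReadout()(torch.randn(10, 32)).shape)  # torch.Size([128])
```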
HDMI: High-order Deep Multiplex Infomax
Jing, Baoyu, Park, Chanyoung, Tong, Hanghang
Networks have been widely used to represent the relations between objects, such as academic networks and social networks, and learning embeddings for networks has thus garnered plenty of research attention. Self-supervised network representation learning aims at extracting node embeddings without external supervision. Recently, maximizing the mutual information between the local node embedding and the global summary (e.g., Deep Graph Infomax, or DGI for short) has shown promising results on many downstream tasks such as node classification. However, there are two major limitations of DGI. Firstly, DGI merely considers the extrinsic supervision signal (i.e., the mutual information between node embeddings and the global summary) while ignoring the intrinsic signal (i.e., the mutual dependence between node embeddings and node attributes). Secondly, nodes in a real-world network are usually connected by multiple edges with different relations, while DGI does not fully explore these various relations among nodes. To address the above-mentioned problems, we propose a novel framework, called High-order Deep Multiplex Infomax (HDMI), for learning node embeddings on multiplex networks in a self-supervised way. To be more specific, we first design a joint supervision signal containing both extrinsic and intrinsic mutual information by means of high-order mutual information, and we propose High-order Deep Infomax (HDI) to optimize the proposed supervision signal. Then we propose an attention-based fusion module to combine node embeddings from different layers of the multiplex network. Finally, we evaluate the proposed HDMI on various downstream tasks such as unsupervised clustering and supervised classification. The experimental results show that HDMI achieves state-of-the-art performance on these tasks.
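The extrinsic/intrinsic split can be sketched with two DGI-style bilinear discriminators: one scores nodes against the global summary, the other scores nodes against their own attributes. The exact high-order combination is the paper's contribution and is not reproduced here; the tensors below are toy values.

```python
# Hedged sketch: DGI-style objective with an extra intrinsic term tying
# node embeddings to node attributes, echoing HDMI's extrinsic/intrinsic
# supervision split. Bilinear scorers and shuffled negatives are standard.
import torch
import torch.nn.functional as F

def bilinear_score(a, b, W):
    return (a @ W * b).sum(dim=1)          # (n,) scores

def infomax_loss(h, h_neg, summary, x, W_ext, W_int):
    """h: node embeddings, h_neg: shuffled negatives, summary: graph
    summary, x: node attributes (projected to h's dimension)."""
    s = summary.expand_as(h)
    ext = F.binary_cross_entropy_with_logits(          # extrinsic term
        torch.cat([bilinear_score(h, s, W_ext), bilinear_score(h_neg, s, W_ext)]),
        torch.cat([torch.ones(len(h)), torch.zeros(len(h_neg))]))
    intr = F.binary_cross_entropy_with_logits(         # intrinsic term
        torch.cat([bilinear_score(h, x, W_int), bilinear_score(h_neg, x, W_int)]),
        torch.cat([torch.ones(len(h)), torch.zeros(len(h_neg))]))
    return ext + intr

n, d = 6, 16
h = torch.randn(n, d); h_neg = h[torch.randperm(n)]
summary = h.mean(dim=0, keepdim=True)
x = torch.randn(n, d)                      # attributes already projected to d
W_ext, W_int = torch.randn(d, d), torch.randn(d, d)
print(infomax_loss(h, h_neg, summary, x, W_ext, W_int).item())
```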