Kim, Donghyun
Deep learning for precipitation nowcasting: A survey from the perspective of time series forecasting
An, Sojung, Oh, Tae-Jin, Sohn, Eunha, Kim, Donghyun
Deep learning-based time series forecasting has come to dominate short-term precipitation forecasting, owing to its ability to estimate motion flow in high-resolution datasets. The growing interest in precipitation nowcasting offers substantial opportunities for advancing current forecasting technologies. Nevertheless, in-depth surveys of deep learning-based time series precipitation forecasting remain scarce. This paper therefore systematically reviews recent progress in time series precipitation forecasting models. Specifically, we first examine the key background components: i) preprocessing, ii) objective functions, and iii) evaluation metrics. We then categorize forecasting models into recursive and multiple strategies according to how they predict future frames, analyze the impact of each strategy, and assess model performance. Finally, we evaluate current deep learning-based precipitation forecasting models on a public benchmark, discuss their limitations and challenges, and present promising research directions. Our contribution lies in providing insights for a better understanding of time series precipitation forecasting and in aiding the development of robust AI solutions for the future.
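To make the two strategies concrete, here is a minimal PyTorch sketch (not taken from the survey) contrasting a recursive rollout, which feeds each predicted frame back as input, with a multiple (direct) strategy that emits all future frames in one pass. The toy `FramePredictor` is a hypothetical stand-in for any encoder-forecaster backbone:

```python
# Hypothetical encoder-forecaster stub; a real nowcasting backbone
# (e.g., a ConvLSTM) would take its place.
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    """Maps 4 past frames to `out_frames` future frames in one conv."""
    def __init__(self, out_frames: int = 1):
        super().__init__()
        # Kernel depth 4 collapses the time axis of the 4-frame input window.
        self.net = nn.Conv3d(1, out_frames, kernel_size=(4, 3, 3), padding=(0, 1, 1))

    def forward(self, x):                      # x: (B, 1, 4, H, W)
        return self.net(x).squeeze(2)          # (B, out_frames, H, W)

def recursive_forecast(model, frames, horizon):
    """Recursive strategy: predict one frame, feed it back, slide the window."""
    window, preds = frames, []
    for _ in range(horizon):
        nxt = model(window)                    # (B, 1, H, W)
        preds.append(nxt)
        window = torch.cat([window[:, :, 1:], nxt.unsqueeze(2)], dim=2)
    return torch.cat(preds, dim=1)             # (B, horizon, H, W)

x = torch.randn(2, 1, 4, 64, 64)               # 4 past radar frames per sample
rec = recursive_forecast(FramePredictor(out_frames=1), x, horizon=6)
mul = FramePredictor(out_frames=6)(x)          # multiple strategy: one shot
print(rec.shape, mul.shape)                    # both are (2, 6, 64, 64)
```

The recursive rollout accumulates its own prediction errors over the horizon, whereas the multiple strategy avoids that feedback at the cost of committing to a fixed output length; this is the trade-off underlying the categorization above.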
Large Language Models meet Collaborative Filtering: An Efficient All-round LLM-based Recommender System
Kim, Sein, Kang, Hongseok, Choi, Seungyoon, Kim, Donghyun, Yang, Minchul, Park, Chanyoung
Collaborative filtering recommender systems (CF-RecSys) have shown consistent success in enhancing the user experience on social media and e-commerce platforms. However, as CF-RecSys struggles in cold scenarios with sparse user-item interactions, recent strategies have focused on leveraging modality information of users/items (e.g., text or images) through pre-trained modality encoders and Large Language Models (LLMs). Despite their effectiveness in cold scenarios, we observe that they underperform simple traditional collaborative filtering models in warm scenarios due to their lack of collaborative knowledge. In this work, we propose an efficient All-round LLM-based Recommender system, called A-LLMRec, that excels not only in the cold scenario but also in the warm scenario. Our main idea is to enable an LLM to directly leverage the collaborative knowledge contained in a pre-trained state-of-the-art CF-RecSys, so that the emergent abilities of the LLM and the high-quality user/item embeddings already trained by the CF-RecSys can be jointly exploited. This approach yields two advantages: (1) it is model-agnostic, allowing integration with various existing CF-RecSys, and (2) it is efficient, eliminating the extensive fine-tuning typically required by LLM-based recommenders. Our extensive experiments on various real-world datasets demonstrate the superiority of A-LLMRec across diverse settings, including cold/warm, few-shot, cold-user, and cross-domain scenarios. Beyond the recommendation task, we also show the potential of A-LLMRec to generate natural language outputs based on its understanding of collaborative knowledge by performing a favorite-genre prediction task. Our code is available at https://github.com/ghdtjr/A-LLMRec .
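As a rough illustration of the main idea (not the paper's exact architecture), the sketch below projects frozen CF item embeddings into an LLM's token-embedding space with a small trainable adapter; the dimensions and the two-layer projector are assumptions for illustration:

```python
# Hedged sketch: dimensions, the two-layer projector, and the "soft token"
# framing are illustrative assumptions, not A-LLMRec's exact architecture.
import torch
import torch.nn as nn

CF_DIM, LLM_DIM = 64, 4096     # e.g., SASRec-style item vectors -> LLM hidden size

class CFToLLMProjector(nn.Module):
    """Trainable adapter from frozen CF embedding space to LLM token space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(CF_DIM, LLM_DIM), nn.GELU(), nn.Linear(LLM_DIM, LLM_DIM)
        )

    def forward(self, cf_emb):                  # (num_items, CF_DIM)
        return self.proj(cf_emb)                # (num_items, LLM_DIM)

cf_item_embs = torch.randn(10, CF_DIM)          # stand-in for pre-trained CF output
soft_tokens = CFToLLMProjector()(cf_item_embs.detach())  # CF model stays frozen
print(soft_tokens.shape)
# These vectors can be interleaved with text-token embeddings in the LLM prompt,
# so only the small projector is trained and no LLM fine-tuning is required.
```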
Learning Generic and Dynamic Locomotion of Humanoids Across Discrete Terrains
Yu, Shangqun, Perera, Nisal, Marew, Daniel, Kim, Donghyun
This paper addresses the challenge of terrain-adaptive dynamic locomotion in humanoid robots, a problem traditionally tackled by optimization-based methods or reinforcement learning (RL). Optimization-based methods, such as model-predictive control, excel at finding optimal reaction forces and achieving agile locomotion, especially in quadrupeds, but struggle with the nonlinear hybrid dynamics of legged systems and the real-time computation of step location, timing, and reaction forces. Conversely, RL-based methods show promise in navigating dynamic and rough terrains but are limited by their extensive data requirements. We introduce a novel locomotion architecture that integrates a neural network policy, trained through RL in simplified environments, with a state-of-the-art motion controller combining model-predictive control (MPC) and whole-body impulse control (WBIC). The policy efficiently learns high-level locomotion strategies, such as gait selection and step positioning, without the need for full dynamics simulations. This control architecture enables humanoid robots to dynamically navigate discrete terrains, making strategic locomotion decisions (e.g., walking, jumping, and leaping) based on ground height maps. Our results demonstrate that this integrated architecture achieves dynamic locomotion with significantly fewer training samples than conventional RL-based methods and can be transferred to different humanoid platforms without additional training. The architecture has been extensively tested in dynamic simulations, accomplishing terrain-height-based dynamic locomotion for three different robots.
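A minimal sketch of how such a hierarchy could be wired: the RL policy picks a gait and step placement from a ground height map, and hypothetical MPC/WBIC stubs consume those decisions. All names, shapes, and the random "policy" are placeholders, not the paper's implementation:

```python
import numpy as np

GAITS = ["walk", "jump", "leap"]

def rl_policy(height_map, base_state):
    """Stand-in for the trained network: gait choice plus bounded step offsets."""
    logits = np.random.randn(len(GAITS))         # a real policy's forward pass
    step_xy = np.tanh(np.random.randn(2)) * 0.3  # step placement in metres
    return int(np.argmax(logits)), step_xy

def mpc_solve(gait, step_xy, base_state):
    return np.zeros(3)                           # placeholder reaction-force solution

def wbic_torques(forces, base_state):
    return np.zeros(12)                          # placeholder joint torques

def control_step(height_map, base_state):
    gait_idx, step_xy = rl_policy(height_map, base_state)
    forces = mpc_solve(GAITS[gait_idx], step_xy, base_state)  # MPC layer
    return wbic_torques(forces, base_state)                   # WBIC layer

tau = control_step(np.zeros((32, 32)), np.zeros(13))
print(tau.shape)
```

The key design point is that the learned component never touches full dynamics: it only emits discrete and low-dimensional decisions, while the model-based layers handle forces and torques.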
CAUS: A Dataset for Question Generation based on Human Cognition Leveraging Large Language Models
Shin, Minjung, Kim, Donghyun, Ryu, Jeh-Kwang
We introduce the Curious About Uncertain Scene (CAUS) dataset, designed to enable Large Language Models (LLMs), specifically GPT-4, to emulate human cognitive processes for resolving uncertainties. Leveraging this dataset, we investigate the potential of LLMs to engage in effective questioning. Our approach involves providing scene descriptions embedded with uncertainties to stimulate the generation of reasoning and queries, which are then classified according to multi-dimensional criteria. All procedures are facilitated by a collaborative system involving both LLMs and human researchers. Our results demonstrate that GPT-4 can effectively generate pertinent questions and grasp their nuances, particularly when given appropriate context and instructions. The study suggests that incorporating human-like questioning into AI models improves their ability to manage uncertainties, paving the way for future advancements in artificial intelligence.
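A minimal sketch of this kind of generation loop, assuming the OpenAI chat API; the prompt wording is illustrative and not the CAUS collection protocol:

```python
# Hedged sketch: scene, system prompt, and instruction format are invented
# for illustration; only the general "uncertainty -> reasoning -> questions"
# flow follows the abstract.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

scene = ("A man stands at a bus stop at midnight, repeatedly checking "
         "his phone while the display board shows no arrivals.")

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You resolve uncertainty in scenes by asking questions."},
        {"role": "user",
         "content": f"Scene: {scene}\n"
                    "1) State what is uncertain and why.\n"
                    "2) Ask three questions that would resolve it."},
    ],
)
print(resp.choices[0].message.content)
```

The returned reasoning and questions would then be classified along the multi-dimensional criteria, with human researchers verifying the labels.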
Impedance Matching: Enabling an RL-Based Running Jump in a Quadruped Robot
Guan, Neil, Yu, Shangqun, Zhu, Shifan, Kim, Donghyun
Replicating the remarkable athleticism seen in animals has long been a challenge in robotics control. Although Reinforcement Learning (RL) has demonstrated significant progress in dynamic legged locomotion control, the substantial sim-to-real gap often hinders the real-world demonstration of truly dynamic movements. We propose a new framework to mitigate this gap through impedance matching between simulated and real robots based on frequency-domain analysis. Our framework offers a structured guideline for selecting parameters and dynamics-randomization ranges in simulation, thus facilitating safe sim-to-real transfer. A policy learned with our framework enabled jumps spanning a distance of 55 cm and a height of 38 cm. To the best of our knowledge, these are among the highest and longest running jumps demonstrated by an RL-based control policy on a real quadruped robot. Note that the achieved jumping height is approximately 85% of that obtained from a state-of-the-art trajectory optimization method, which can be regarded as the physical limit of the given robot hardware. In addition, our control policy accomplished stable walking at speeds up to 2 m/s forward and backward, and 1 m/s sideways.
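A small sketch of one plausible identification step under this idea: estimate the torque-to-position frequency response of a single joint from chirp-excitation logs in "sim" and "real", then compare magnitudes. Synthetic second-order joint data stands in for logged data; the paper's actual procedure and parameters are not reproduced:

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
torque = signal.chirp(t, f0=0.5, t1=10, f1=50)             # excitation input

def joint_response(u, stiffness, damping, inertia=0.05):
    """Naive Euler integration of q'' = (u - k*q - b*q') / m."""
    q = qd = 0.0
    out = np.empty_like(u)
    for i, ui in enumerate(u):
        qdd = (ui - stiffness * q - damping * qd) / inertia
        qd += qdd / fs
        q += qd / fs
        out[i] = q
    return out

for label, (k, b) in {"sim": (20.0, 0.5), "real": (14.0, 0.9)}.items():
    q = joint_response(torque, k, b)
    f, Puu = signal.welch(torque, fs, nperseg=2048)        # input auto-spectrum
    _, Puq = signal.csd(torque, q, fs, nperseg=2048)       # cross-spectrum
    H = Puq / Puu                                          # H1 frequency-response estimate
    print(label, "|H| near 5 Hz:", abs(H[np.argmin(np.abs(f - 5.0))]))
# Gaps between the sim and real |H| curves indicate where simulation parameters
# (and dynamics-randomization ranges) should be tuned before policy transfer.
```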
Visual Delta Generator with Large Multi-modal Models for Semi-supervised Composed Image Retrieval
Jang, Young Kyun, Kim, Donghyun, Meng, Zihang, Huynh, Dat, Lim, Ser-Nam
Composed Image Retrieval (CIR) is a task that retrieves images similar to a query based on a provided textual modification. Current techniques rely on supervised learning with labeled (reference image, text, target image) triplets. Such triplets are far less commonly available than simple image-text pairs, limiting the widespread use of CIR and its scalability. Zero-shot CIR, on the other hand, can be trained relatively easily on image-caption pairs without considering image-to-image relations, but this approach tends to yield lower accuracy. We propose a new semi-supervised CIR approach in which we search auxiliary data for a reference image and related target images, and train our large language model-based Visual Delta Generator (VDG) to generate text describing the visual difference (i.e., the visual delta) between the two. Equipped with fluent language knowledge and being model-agnostic, VDG can generate pseudo-triplets that boost the performance of CIR models. Our approach significantly improves on existing supervised learning approaches and achieves state-of-the-art results on CIR benchmarks.
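The pseudo-triplet mining step might look like the following sketch, where random vectors stand in for real image embeddings (e.g., CLIP features) and the VDG captioning call is left hypothetical:

```python
# Hedged sketch of reference/target pair mining; the paper's mining rules
# and thresholds may differ.
import torch
import torch.nn.functional as F

img_embs = F.normalize(torch.randn(100, 512), dim=-1)   # 100 auxiliary images
sim = img_embs @ img_embs.T                             # cosine similarities
sim.fill_diagonal_(-1.0)                                # exclude self-matches
target_idx = sim.argmax(dim=1)                          # nearest neighbor per image

ref, tgt = 0, int(target_idx[0])
# delta_text = vdg_generate(images[ref], images[tgt])   # hypothetical VDG call
# (images[ref], delta_text, images[tgt]) then serves as a pseudo-triplet
# for training the downstream CIR model.
print(f"reference {ref} paired with target {tgt}, cos-sim {sim[ref, tgt].item():.3f}")
```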
LLM4SGG: Large Language Models for Weakly Supervised Scene Graph Generation
Kim, Kibum, Yoon, Kanghoon, Jeon, Jaehyeong, In, Yeonjun, Moon, Jinyoung, Kim, Donghyun, Park, Chanyoung
Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach, which relies heavily on costly annotations. In this regard, WSSGG studies have utilized image captions to obtain unlocalized triplets, focusing primarily on grounding those triplets over image regions. However, they have overlooked two issues in the triplet-formation process: 1) the semantic over-simplification issue, which arises when extracting triplets from captions: fine-grained predicates are undesirably converted into coarse-grained ones, resulting in a long-tailed predicate distribution; and 2) the low-density scene graph issue, which arises when aligning caption triplets with the entity/predicate classes of interest: many triplets are discarded and never used in training, leading to insufficient supervision. To tackle these two issues, we propose a new approach, Large Language Model for weakly-supervised SGG (LLM4SGG), which mitigates both by leveraging the LLM's in-depth understanding of language and its reasoning ability during the extraction of triplets from captions and the alignment of entity/predicate classes with target data. To further engage the LLM in these processes, we adopt chain-of-thought prompting and an in-context few-shot learning strategy. To validate the effectiveness of LLM4SGG, we conduct extensive experiments on the Visual Genome and GQA datasets, showing significant improvements in both Recall@K and mean Recall@K over state-of-the-art WSSGG methods. A further appeal is that LLM4SGG is data-efficient, enabling effective model training with a small number of training images.
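A hedged sketch of the caption-to-triplet step: a few-shot, chain-of-thought-style prompt asks an LLM to extract triplets while aligning predicates to the target classes. The template, predicate list, and examples below are illustrative, not the paper's prompts:

```python
# Illustrative prompt construction only; an actual run would send this
# prompt to an LLM and parse the returned triplets.
TARGET_PREDICATES = ["riding", "holding", "sitting on", "standing on"]

FEW_SHOT = (
    "Caption: a girl is riding a brown horse\n"
    "Reasoning: 'riding' is already a target predicate.\n"
    "Triplet: <girl, riding, horse>\n"
)

def build_prompt(caption: str) -> str:
    return (
        f"Valid predicates: {', '.join(TARGET_PREDICATES)}\n\n"
        f"{FEW_SHOT}\n"
        f"Caption: {caption}\n"
        "Reasoning:"
    )

print(build_prompt("a man perched on top of an old wooden fence"))
# A well-aligned LLM reply keeps the fine-grained sense, e.g.
# <man, sitting on, fence>, instead of collapsing it to a generic 'on',
# which is exactly the over-simplification the approach targets.
```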
HyperCLOVA X Technical Report
Yoo, Kang Min, Han, Jaegeun, In, Sookyo, Jeon, Heewon, Jeong, Jisu, Kang, Jaewook, Kim, Hyunwook, Kim, Kyung-Min, Kim, Munhyong, Kim, Sungju, Kwak, Donghyun, Kwak, Hanock, Kwon, Se Jung, Lee, Bado, Lee, Dongsoo, Lee, Gichang, Lee, Jooho, Park, Baeseong, Shin, Seongjin, Yu, Joonsang, Baek, Seolki, Byeon, Sumin, Cho, Eungsup, Choe, Dooseok, Han, Jeesung, Jin, Youngkyun, Jun, Hyein, Jung, Jaeseung, Kim, Chanwoong, Kim, Jinhong, Kim, Jinuk, Lee, Dokyeong, Park, Dongwook, Sohn, Jeong Min, Han, Sujung, Heo, Jiae, Hong, Sungju, Jeon, Mina, Jung, Hyunhoon, Jung, Jungeun, Jung, Wangkyo, Kim, Chungjoon, Kim, Hyeri, Kim, Jonghyun, Kim, Min Young, Lee, Soeun, Park, Joonhee, Shin, Jieun, Yang, Sojin, Yoon, Jungsoon, Lee, Hwaran, Bae, Sanghwan, Cha, Jeehwan, Gylleus, Karl, Ham, Donghoon, Hong, Mihak, Hong, Youngki, Hong, Yunki, Jang, Dahyun, Jeon, Hyojun, Jeon, Yujin, Jeong, Yeji, Ji, Myunggeun, Jin, Yeguk, Jo, Chansong, Joo, Shinyoung, Jung, Seunghwan, Kim, Adrian Jungmyung, Kim, Byoung Hoon, Kim, Hyomin, Kim, Jungwhan, Kim, Minkyoung, Kim, Minseung, Kim, Sungdong, Kim, Yonghee, Kim, Youngjun, Kim, Youngkwan, Ko, Donghyeon, Lee, Dughyun, Lee, Ha Young, Lee, Jaehong, Lee, Jieun, Lee, Jonghyun, Lee, Jongjin, Lee, Min Young, Lee, Yehbin, Min, Taehong, Min, Yuri, Moon, Kiyoon, Oh, Hyangnam, Park, Jaesun, Park, Kyuyon, Park, Younghun, Seo, Hanbae, Seo, Seunghyun, Sim, Mihyun, Son, Gyubin, Yeo, Matt, Yeom, Kyung Hoon, Yoo, Wonjoon, You, Myungin, Ahn, Doheon, Ahn, Homin, Ahn, Joohee, Ahn, Seongmin, An, Chanwoo, An, Hyeryun, An, Junho, An, Sang-Min, Byun, Boram, Byun, Eunbin, Cha, Jongho, Chang, Minji, Chang, Seunggyu, Cho, Haesong, Cho, Youngdo, Choi, Dalnim, Choi, Daseul, Choi, Hyoseok, Choi, Minseong, Choi, Sangho, Choi, Seongjae, Choi, Wooyong, Chun, Sewhan, Go, Dong Young, Ham, Chiheon, Han, Danbi, Han, Jaemin, Hong, Moonyoung, Hong, Sung Bum, Hwang, Dong-Hyun, Hwang, Seongchan, Im, Jinbae, Jang, Hyuk Jin, Jang, Jaehyung, Jang, Jaeni, Jang, Sihyeon, Jang, Sungwon, Jeon, Joonha, Jeong, Daun, Jeong, Joonhyun, Jeong, Kyeongseok, Jeong, Mini, Jin, Sol, Jo, Hanbyeol, Jo, Hanju, Jo, Minjung, Jung, Chaeyoon, Jung, Hyungsik, Jung, Jaeuk, Jung, Ju Hwan, Jung, Kwangsun, Jung, Seungjae, Ka, Soonwon, Kang, Donghan, Kang, Soyoung, Kil, Taeho, Kim, Areum, Kim, Beomyoung, Kim, Byeongwook, Kim, Daehee, Kim, Dong-Gyun, Kim, Donggook, Kim, Donghyun, Kim, Euna, Kim, Eunchul, Kim, Geewook, Kim, Gyu Ri, Kim, Hanbyul, Kim, Heesu, Kim, Isaac, Kim, Jeonghoon, Kim, Jihye, Kim, Joonghoon, Kim, Minjae, Kim, Minsub, Kim, Pil Hwan, Kim, Sammy, Kim, Seokhun, Kim, Seonghyeon, Kim, Soojin, Kim, Soong, Kim, Soyoon, Kim, Sunyoung, Kim, Taeho, Kim, Wonho, Kim, Yoonsik, Kim, You Jin, Kim, Yuri, Kwon, Beomseok, Kwon, Ohsung, Kwon, Yoo-Hwan, Lee, Anna, Lee, Byungwook, Lee, Changho, Lee, Daun, Lee, Dongjae, Lee, Ha-Ram, Lee, Hodong, Lee, Hwiyeong, Lee, Hyunmi, Lee, Injae, Lee, Jaeung, Lee, Jeongsang, Lee, Jisoo, Lee, Jongsoo, Lee, Joongjae, Lee, Juhan, Lee, Jung Hyun, Lee, Junghoon, Lee, Junwoo, Lee, Se Yun, Lee, Sujin, Lee, Sungjae, Lee, Sungwoo, Lee, Wonjae, Lee, Zoo Hyun, Lim, Jong Kun, Lim, Kun, Lim, Taemin, Na, Nuri, Nam, Jeongyeon, Nam, Kyeong-Min, Noh, Yeonseog, Oh, Biro, Oh, Jung-Sik, Oh, Solgil, Oh, Yeontaek, Park, Boyoun, Park, Cheonbok, Park, Dongju, Park, Hyeonjin, Park, Hyun Tae, Park, Hyunjung, Park, Jihye, Park, Jooseok, Park, Junghwan, Park, Jungsoo, Park, Miru, Park, Sang Hee, Park, Seunghyun, Park, Soyoung, Park, Taerim, Park, Wonkyeong, Ryu, Hyunjoon, Ryu, Jeonghun, Ryu, Nahyeon, Seo, Soonshin, Seo, Suk Min, Shim, Yoonjeong, 
Shin, Kyuyong, Shin, Wonkwang, Sim, Hyun, Sim, Woongseob, Soh, Hyejin, Son, Bokyong, Son, Hyunjun, Son, Seulah, Song, Chi-Yun, Song, Chiyoung, Song, Ka Yeon, Song, Minchul, Song, Seungmin, Wang, Jisung, Yeo, Yonggoo, Yi, Myeong Yeon, Yim, Moon Bin, Yoo, Taehwan, Yoo, Youngjoon, Yoon, Sungmin, Yoon, Young Jin, Yu, Hangyeol, Yu, Ui Seon, Zuo, Xingdong, Bae, Jeongin, Bae, Joungeun, Cho, Hyunsoo, Cho, Seonghyun, Cho, Yongjin, Choi, Taekyoon, Choi, Yera, Chung, Jiwan, Han, Zhenghui, Heo, Byeongho, Hong, Euisuk, Hwang, Taebaek, Im, Seonyeol, Jegal, Sumin, Jeon, Sumin, Jeong, Yelim, Jeong, Yonghyun, Jiang, Can, Jiang, Juyong, Jin, Jiho, Jo, Ara, Jo, Younghyun, Jung, Hoyoun, Jung, Juyoung, Kang, Seunghyeong, Kim, Dae Hee, Kim, Ginam, Kim, Hangyeol, Kim, Heeseung, Kim, Hyojin, Kim, Hyojun, Kim, Hyun-Ah, Kim, Jeehye, Kim, Jin-Hwa, Kim, Jiseon, Kim, Jonghak, Kim, Jung Yoon, Kim, Rak Yeong, Kim, Seongjin, Kim, Seoyoon, Kim, Sewon, Kim, Sooyoung, Kim, Sukyoung, Kim, Taeyong, Ko, Naeun, Koo, Bonseung, Kwak, Heeyoung, Kwon, Haena, Kwon, Youngjin, Lee, Boram, Lee, Bruce W., Lee, Dagyeong, Lee, Erin, Lee, Euijin, Lee, Ha Gyeong, Lee, Hyojin, Lee, Hyunjeong, Lee, Jeeyoon, Lee, Jeonghyun, Lee, Jongheok, Lee, Joonhyung, Lee, Junhyuk, Lee, Mingu, Lee, Nayeon, Lee, Sangkyu, Lee, Se Young, Lee, Seulgi, Lee, Seung Jin, Lee, Suhyeon, Lee, Yeonjae, Lee, Yesol, Lee, Youngbeom, Lee, Yujin, Li, Shaodong, Liu, Tianyu, Moon, Seong-Eun, Moon, Taehong, Nihlenramstroem, Max-Lasse, Oh, Wonseok, Oh, Yuri, Park, Hongbeen, Park, Hyekyung, Park, Jaeho, Park, Nohil, Park, Sangjin, Ryu, Jiwon, Ryu, Miru, Ryu, Simo, Seo, Ahreum, Seo, Hee, Seo, Kangdeok, Shin, Jamin, Shin, Seungyoun, Sin, Heetae, Wang, Jiangping, Wang, Lei, Xiang, Ning, Xiao, Longxiang, Xu, Jing, Yi, Seonyeong, Yoo, Haanju, Yoo, Haneul, Yoo, Hwanhee, Yu, Liang, Yu, Youngjae, Yuan, Weijie, Zeng, Bo, Zhou, Qian, Cho, Kyunghyun, Ha, Jung-Woo, Park, Joonsuk, Hwang, Jihyun, Kwon, Hyoung Jo, Kwon, Soonyong, Lee, Jungyeon, Lee, Seungho, Lim, Seonghyeon, Noh, Hyunkyung, Choi, Seungho, Lee, Sang-Woo, Lim, Jung Hwa, Sung, Nako
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture that also offer competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, followed by instruction tuning with high-quality human-annotated datasets, while abiding by strict safety guidelines that reflect our commitment to responsible AI. The model is evaluated across various benchmarks, including comprehensive reasoning, knowledge, commonsense, factuality, coding, math, chatting, instruction-following, and harmlessness, in both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in Korean, backed by a deep understanding of the language and its cultural nuances. Further analysis of its inherent bilingual nature and its extension to multilingualism highlights the model's cross-lingual proficiency and strong generalization to untargeted languages, including machine translation between several language pairs and cross-lingual inference tasks. We believe that HyperCLOVA X can provide helpful guidance for regions or countries developing their own sovereign LLMs.
StaccaToe: A Single-Leg Robot that Mimics the Human Leg and Toe
Perera, Nisal, Yu, Shangqun, Marew, Daniel, Tang, Mack, Suzuki, Ken, McCormack, Aidan, Zhu, Shifan, Kim, Yong-Jae, Kim, Donghyun
We introduce StaccaToe, a human-scale, electric-motor-powered single-leg robot designed to rival the agility of human locomotion through two distinctive attributes: an actuated toe and a co-actuation configuration inspired by the human leg. Leveraging the foundational design of HyperLeg's lower-leg mechanism, we develop a stand-alone robot by incorporating new link designs, custom-designed power electronics, and a refined control system. Unlike previous jumping robots that rely on either special mechanisms (e.g., springs and clutches) or hydraulic/pneumatic actuators, StaccaToe employs electric motors without energy-storage mechanisms. This choice underscores our ultimate goal of developing a practical, high-performance humanoid robot capable of human-like, stable walking as well as explosive dynamic movements. In this paper, we empirically evaluate the balance capability and the exertion of explosive ground reaction forces enabled by our toe and co-actuation mechanisms. Through extensive hardware and controller development, StaccaToe demonstrates its control fidelity with a balanced tip-toe stance and a dynamic jump. This study is significant for three key reasons: 1) StaccaToe is the first human-scale, electric-motor-driven single-leg robot to execute dynamic maneuvers without relying on specialized mechanisms; 2) our research provides empirical evidence of the benefits of replicating critical human leg attributes in robotic design; and 3) we detail the design process for creating agile legged robots, aspects that have been only scantily covered in the academic literature.
WoLF: Wide-scope Large Language Model Framework for CXR Understanding
Kang, Seil, Kim, Donghyun, Kim, Junhyeok, Lee, Hyo Kyung, Hwang, Seong Jae
Significant methodological strides have been made toward Chest X-ray (CXR) understanding via modern vision-language models (VLMs), demonstrating impressive Visual Question Answering (VQA) and CXR report generation abilities. However, existing CXR understanding frameworks still have several procedural caveats. (1) Previous methods use only CXR reports, which are insufficient for comprehensive VQA, especially when additional health-related data such as medication history and prior diagnoses are needed. (2) Previous methods use raw CXR reports, which are often arbitrarily structured; while modern language models can understand various text formats, restructuring reports into clearer, anatomy-organized information could enhance their usefulness. (3) Current evaluation methods for CXR-VQA primarily emphasize linguistic correctness and lack the capability to offer nuanced assessments of generated answers. In this work, to address these caveats, we introduce WoLF, a Wide-scope Large Language Model Framework for CXR understanding. To resolve (1), we capture multi-faceted patient records, which are utilized for accurate diagnoses in real-world clinical scenarios; specifically, we adopt Electronic Health Records (EHR) to generate instruction-following data suited for CXR understanding. Regarding (2), we enhance report generation performance by decoupling knowledge in CXR reports along anatomical structure, even within the attention step, via masked attention. To address (3), we introduce an AI-evaluation protocol optimized for assessing the capabilities of LLMs. Through extensive experimental validation, WoLF demonstrates superior performance over other models on MIMIC-CXR, both on VQA under the AI-evaluation protocol (up to +9.47%p mean score) and on report-generation metrics (+7.3%p BLEU-1).
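One plausible reading of the anatomy-decoupled masked attention is a block-diagonal mask that restricts each report token to attending only within its anatomical section, as in this illustrative sketch (the paper's exact masking scheme may differ):

```python
# Illustrative only: a block mask keyed by per-token section ids.
import torch

# Section id per report token, e.g. 0 = lungs, 1 = heart, 2 = bones.
section_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])
mask = section_ids[:, None] == section_ids[None, :]      # (T, T) boolean block mask

scores = torch.randn(8, 8)                               # raw attention logits
scores = scores.masked_fill(~mask, float("-inf"))        # block cross-section attention
attn = torch.softmax(scores, dim=-1)
print(attn[0])  # row 0 puts all its weight on the three 'lungs' tokens
```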