Shin, Kyuyong
HyperCLOVA X Technical Report
Yoo, Kang Min, Han, Jaegeun, In, Sookyo, Jeon, Heewon, Jeong, Jisu, Kang, Jaewook, Kim, Hyunwook, Kim, Kyung-Min, Kim, Munhyong, Kim, Sungju, Kwak, Donghyun, Kwak, Hanock, Kwon, Se Jung, Lee, Bado, Lee, Dongsoo, Lee, Gichang, Lee, Jooho, Park, Baeseong, Shin, Seongjin, Yu, Joonsang, Baek, Seolki, Byeon, Sumin, Cho, Eungsup, Choe, Dooseok, Han, Jeesung, Jin, Youngkyun, Jun, Hyein, Jung, Jaeseung, Kim, Chanwoong, Kim, Jinhong, Kim, Jinuk, Lee, Dokyeong, Park, Dongwook, Sohn, Jeong Min, Han, Sujung, Heo, Jiae, Hong, Sungju, Jeon, Mina, Jung, Hyunhoon, Jung, Jungeun, Jung, Wangkyo, Kim, Chungjoon, Kim, Hyeri, Kim, Jonghyun, Kim, Min Young, Lee, Soeun, Park, Joonhee, Shin, Jieun, Yang, Sojin, Yoon, Jungsoon, Lee, Hwaran, Bae, Sanghwan, Cha, Jeehwan, Gylleus, Karl, Ham, Donghoon, Hong, Mihak, Hong, Youngki, Hong, Yunki, Jang, Dahyun, Jeon, Hyojun, Jeon, Yujin, Jeong, Yeji, Ji, Myunggeun, Jin, Yeguk, Jo, Chansong, Joo, Shinyoung, Jung, Seunghwan, Kim, Adrian Jungmyung, Kim, Byoung Hoon, Kim, Hyomin, Kim, Jungwhan, Kim, Minkyoung, Kim, Minseung, Kim, Sungdong, Kim, Yonghee, Kim, Youngjun, Kim, Youngkwan, Ko, Donghyeon, Lee, Dughyun, Lee, Ha Young, Lee, Jaehong, Lee, Jieun, Lee, Jonghyun, Lee, Jongjin, Lee, Min Young, Lee, Yehbin, Min, Taehong, Min, Yuri, Moon, Kiyoon, Oh, Hyangnam, Park, Jaesun, Park, Kyuyon, Park, Younghun, Seo, Hanbae, Seo, Seunghyun, Sim, Mihyun, Son, Gyubin, Yeo, Matt, Yeom, Kyung Hoon, Yoo, Wonjoon, You, Myungin, Ahn, Doheon, Ahn, Homin, Ahn, Joohee, Ahn, Seongmin, An, Chanwoo, An, Hyeryun, An, Junho, An, Sang-Min, Byun, Boram, Byun, Eunbin, Cha, Jongho, Chang, Minji, Chang, Seunggyu, Cho, Haesong, Cho, Youngdo, Choi, Dalnim, Choi, Daseul, Choi, Hyoseok, Choi, Minseong, Choi, Sangho, Choi, Seongjae, Choi, Wooyong, Chun, Sewhan, Go, Dong Young, Ham, Chiheon, Han, Danbi, Han, Jaemin, Hong, Moonyoung, Hong, Sung Bum, Hwang, Dong-Hyun, Hwang, Seongchan, Im, Jinbae, Jang, Hyuk Jin, Jang, Jaehyung, Jang, Jaeni, Jang, Sihyeon, Jang, Sungwon, Jeon, Joonha, Jeong, Daun, Jeong, Joonhyun, Jeong, Kyeongseok, Jeong, Mini, Jin, Sol, Jo, Hanbyeol, Jo, Hanju, Jo, Minjung, Jung, Chaeyoon, Jung, Hyungsik, Jung, Jaeuk, Jung, Ju Hwan, Jung, Kwangsun, Jung, Seungjae, Ka, Soonwon, Kang, Donghan, Kang, Soyoung, Kil, Taeho, Kim, Areum, Kim, Beomyoung, Kim, Byeongwook, Kim, Daehee, Kim, Dong-Gyun, Kim, Donggook, Kim, Donghyun, Kim, Euna, Kim, Eunchul, Kim, Geewook, Kim, Gyu Ri, Kim, Hanbyul, Kim, Heesu, Kim, Isaac, Kim, Jeonghoon, Kim, Jihye, Kim, Joonghoon, Kim, Minjae, Kim, Minsub, Kim, Pil Hwan, Kim, Sammy, Kim, Seokhun, Kim, Seonghyeon, Kim, Soojin, Kim, Soong, Kim, Soyoon, Kim, Sunyoung, Kim, Taeho, Kim, Wonho, Kim, Yoonsik, Kim, You Jin, Kim, Yuri, Kwon, Beomseok, Kwon, Ohsung, Kwon, Yoo-Hwan, Lee, Anna, Lee, Byungwook, Lee, Changho, Lee, Daun, Lee, Dongjae, Lee, Ha-Ram, Lee, Hodong, Lee, Hwiyeong, Lee, Hyunmi, Lee, Injae, Lee, Jaeung, Lee, Jeongsang, Lee, Jisoo, Lee, Jongsoo, Lee, Joongjae, Lee, Juhan, Lee, Jung Hyun, Lee, Junghoon, Lee, Junwoo, Lee, Se Yun, Lee, Sujin, Lee, Sungjae, Lee, Sungwoo, Lee, Wonjae, Lee, Zoo Hyun, Lim, Jong Kun, Lim, Kun, Lim, Taemin, Na, Nuri, Nam, Jeongyeon, Nam, Kyeong-Min, Noh, Yeonseog, Oh, Biro, Oh, Jung-Sik, Oh, Solgil, Oh, Yeontaek, Park, Boyoun, Park, Cheonbok, Park, Dongju, Park, Hyeonjin, Park, Hyun Tae, Park, Hyunjung, Park, Jihye, Park, Jooseok, Park, Junghwan, Park, Jungsoo, Park, Miru, Park, Sang Hee, Park, Seunghyun, Park, Soyoung, Park, Taerim, Park, Wonkyeong, Ryu, Hyunjoon, Ryu, Jeonghun, Ryu, Nahyeon, Seo, Soonshin, Seo, Suk Min, Shim, Yoonjeong, 
Shin, Kyuyong, Shin, Wonkwang, Sim, Hyun, Sim, Woongseob, Soh, Hyejin, Son, Bokyong, Son, Hyunjun, Son, Seulah, Song, Chi-Yun, Song, Chiyoung, Song, Ka Yeon, Song, Minchul, Song, Seungmin, Wang, Jisung, Yeo, Yonggoo, Yi, Myeong Yeon, Yim, Moon Bin, Yoo, Taehwan, Yoo, Youngjoon, Yoon, Sungmin, Yoon, Young Jin, Yu, Hangyeol, Yu, Ui Seon, Zuo, Xingdong, Bae, Jeongin, Bae, Joungeun, Cho, Hyunsoo, Cho, Seonghyun, Cho, Yongjin, Choi, Taekyoon, Choi, Yera, Chung, Jiwan, Han, Zhenghui, Heo, Byeongho, Hong, Euisuk, Hwang, Taebaek, Im, Seonyeol, Jegal, Sumin, Jeon, Sumin, Jeong, Yelim, Jeong, Yonghyun, Jiang, Can, Jiang, Juyong, Jin, Jiho, Jo, Ara, Jo, Younghyun, Jung, Hoyoun, Jung, Juyoung, Kang, Seunghyeong, Kim, Dae Hee, Kim, Ginam, Kim, Hangyeol, Kim, Heeseung, Kim, Hyojin, Kim, Hyojun, Kim, Hyun-Ah, Kim, Jeehye, Kim, Jin-Hwa, Kim, Jiseon, Kim, Jonghak, Kim, Jung Yoon, Kim, Rak Yeong, Kim, Seongjin, Kim, Seoyoon, Kim, Sewon, Kim, Sooyoung, Kim, Sukyoung, Kim, Taeyong, Ko, Naeun, Koo, Bonseung, Kwak, Heeyoung, Kwon, Haena, Kwon, Youngjin, Lee, Boram, Lee, Bruce W., Lee, Dagyeong, Lee, Erin, Lee, Euijin, Lee, Ha Gyeong, Lee, Hyojin, Lee, Hyunjeong, Lee, Jeeyoon, Lee, Jeonghyun, Lee, Jongheok, Lee, Joonhyung, Lee, Junhyuk, Lee, Mingu, Lee, Nayeon, Lee, Sangkyu, Lee, Se Young, Lee, Seulgi, Lee, Seung Jin, Lee, Suhyeon, Lee, Yeonjae, Lee, Yesol, Lee, Youngbeom, Lee, Yujin, Li, Shaodong, Liu, Tianyu, Moon, Seong-Eun, Moon, Taehong, Nihlenramstroem, Max-Lasse, Oh, Wonseok, Oh, Yuri, Park, Hongbeen, Park, Hyekyung, Park, Jaeho, Park, Nohil, Park, Sangjin, Ryu, Jiwon, Ryu, Miru, Ryu, Simo, Seo, Ahreum, Seo, Hee, Seo, Kangdeok, Shin, Jamin, Shin, Seungyoun, Sin, Heetae, Wang, Jiangping, Wang, Lei, Xiang, Ning, Xiao, Longxiang, Xu, Jing, Yi, Seonyeong, Yoo, Haanju, Yoo, Haneul, Yoo, Hwanhee, Yu, Liang, Yu, Youngjae, Yuan, Weijie, Zeng, Bo, Zhou, Qian, Cho, Kyunghyun, Ha, Jung-Woo, Park, Joonsuk, Hwang, Jihyun, Kwon, Hyoung Jo, Kwon, Soonyong, Lee, Jungyeon, Lee, Seungho, Lim, Seonghyeon, Noh, Hyunkyung, Choi, Seungho, Lee, Sang-Woo, Lim, Jung Hwa, Sung, Nako
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture, with competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, followed by instruction-tuning with high-quality human-annotated datasets while abiding by strict safety guidelines reflecting our commitment to responsible AI. The model is evaluated across various benchmarks, including comprehensive reasoning, knowledge, commonsense, factuality, coding, math, chatting, instruction-following, and harmlessness, in both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in Korean, backed by a deep understanding of the language and cultural nuances. Further analysis of its inherent bilingual nature and its extension to multilingualism highlights the model's cross-lingual proficiency and strong generalization to untargeted languages, including machine translation between several language pairs and cross-lingual inference tasks. We believe that HyperCLOVA X can provide helpful guidance for regions or countries developing their own sovereign LLMs.
Pivotal Role of Language Modeling in Recommender Systems: Enriching Task-specific and Task-agnostic Representation Learning
Shin, Kyuyong, Kwak, Hanock, Kim, Wonjae, Jeong, Jisu, Jung, Seungjae, Kim, Kyung-Min, Ha, Jung-Woo, Lee, Sang-Woo
Recent studies have proposed unified user modeling frameworks that leverage user behavior data from various applications. Many of them benefit from treating users' behavior sequences as plain text, which can represent rich information from any domain or system without losing generality. Hence, a question arises: can language modeling on a user history corpus help improve recommender systems? While the versatility of language modeling has been widely investigated in many domains, its application to recommender systems remains underexplored. We show that language modeling applied directly to task-specific user histories achieves excellent results on diverse recommendation tasks. Also, leveraging additional task-agnostic user histories delivers significant performance benefits. We further demonstrate that our approach can provide promising transfer learning capabilities for a broad spectrum of real-world recommender systems, even on unseen domains and services.
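The core idea above, serializing a user's behavior history as plain text and letting a language model judge candidate items, can be illustrated with a minimal sketch. The prompt format, the item strings, and the use of an off-the-shelf GPT-2 checkpoint are illustrative assumptions only, not the paper's data or model.

```python
# Minimal sketch (not the paper's implementation): score candidate items by the
# language-model loss of a textified user history plus the candidate.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

history = "bought running shoes; searched marathon training plan; bought energy gels"
candidates = ["hydration vest", "office chair"]

def candidate_loss(history: str, item: str) -> float:
    """Average negative log-likelihood of the serialized history plus candidate."""
    text = f"user history: {history}. next item: {item}"
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # standard causal LM loss over the sequence
    return out.loss.item()

# Lower loss = the language model finds the candidate continuation more plausible.
for item in candidates:
    print(item, round(candidate_loss(history, item), 3))
```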
Scaling Law for Recommendation Models: Towards General-purpose User Representations
Shin, Kyuyong, Kwak, Hanock, Kim, Su Young, Ramstrom, Max Nihlen, Jeong, Jisu, Ha, Jung-Woo, Kim, Kyung-Min
Recent advancements in large-scale pretrained models such as BERT, GPT-3, CLIP, and Gopher have shown astonishing achievements across various task domains. Unlike vision recognition and language models, studies on general-purpose user representation at scale remain underexplored. Here we explore the possibility of general-purpose user representation learning by training a universal user encoder at large scale. We demonstrate that the scaling law holds in user representation learning, where the training error scales as a power law with the amount of computation. Our Contrastive Learning User Encoder (CLUE) optimizes task-agnostic objectives, and the resulting user embeddings stretch our expectation of what is possible in various downstream tasks. CLUE also shows great transferability to other domains and companies, as performance in an online experiment shows significant improvement in Click-Through Rate (CTR). Furthermore, we investigate how model performance is influenced by scale factors such as training data size, model capacity, sequence length, and batch size. Finally, we discuss the broader impacts of CLUE.
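A task-agnostic contrastive objective of the kind CLUE optimizes can be sketched as an InfoNCE loss over two views of the same users' histories; the encoder, the way views are formed, and the hyperparameters below are illustrative assumptions, not the paper's implementation. (The reported power-law behavior corresponds to training error roughly following L(C) ≈ a·C^(−b) in compute C.)

```python
# Minimal sketch of a task-agnostic contrastive (InfoNCE-style) objective for a
# user encoder. Random tensors stand in for encoder outputs of two views of the
# same users; the temperature and dimensions are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1):
    """z_a, z_b: (batch, dim) embeddings of two views of the same users."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random "user embeddings".
z_a, z_b = torch.randn(8, 64), torch.randn(8, 64)
print(info_nce(z_a, z_b).item())
```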
Hop Sampling: A Simple Regularized Graph Learning for Non-Stationary Environments
Park, Young-Jin, Shin, Kyuyong, Kim, Kyung-Min
Graph representation learning is gaining popularity in a wide range of applications, such as social network analysis, computational biology, and recommender systems. However, unlike the positive results reported in many academic studies, applying graph neural networks (GNNs) in real-world applications is still challenging due to non-stationary environments. The underlying distribution of streaming data changes unexpectedly, resulting in different graph structures (a.k.a. concept drift). Therefore, it is essential to devise a robust graph learning technique so that the model does not overfit to the training graphs. In this work, we present Hop Sampling, a straightforward regularization method that can effectively prevent GNNs from overfitting. Hop sampling randomly selects the number of propagation steps rather than fixing it; by doing so, it encourages the model to learn meaningful node representations at all intermediate propagation layers and to experience a variety of plausible graphs that are not in the training set. In particular, we describe the use case of our method in recommender systems, a representative example of real-world non-stationarity. We evaluated hop sampling on a large-scale real-world LINE dataset and conducted an online A/B/n test in the LINE Coupon recommender system of LINE Wallet Tab. Experimental results demonstrate that the proposed scheme improves the prediction accuracy of GNNs. We observed that hop sampling provides 7.97% and 16.93% improvements in NDCG and MAP, respectively, compared to non-regularized GNN models in our online service. Furthermore, models using hop sampling alleviate the oversmoothing issue in GNNs, enabling deeper models as well as more diversified representations.
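The hop-sampling idea, drawing the number of propagation steps at random during training instead of fixing it, can be sketched as below; the plain normalized-adjacency propagation, the layer sizes, and the class name are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of hop sampling: sample the propagation depth per forward pass
# at training time so the model learns useful representations at every depth.
import random
import torch
import torch.nn as nn

class HopSampledGCN(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, max_hops: int = 4):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.max_hops = max_hops

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Draw the number of hops uniformly at random during training;
        # use the maximum depth at evaluation time.
        k = random.randint(1, self.max_hops) if self.training else self.max_hops
        h = self.lin(x)
        for _ in range(k):
            h = adj_norm @ h  # one propagation step over the graph
        return h

# Toy usage with a random row-stochastic matrix standing in for D^-1 A.
n, d = 5, 8
adj_norm = torch.softmax(torch.randn(n, n), dim=-1)
model = HopSampledGCN(d, 16)
print(model(torch.randn(n, d), adj_norm).shape)
```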
Multi-Manifold Learning for Large-scale Targeted Advertising System
Shin, Kyuyong, Park, Young-Jin, Kim, Kyung-Min, Kwon, Sunyoung
Messenger advertisements (ads) provide a direct and personal user experience, yielding high conversion rates and sales. However, people are skeptical about ads and sometimes perceive them as spam, which eventually leads to a decrease in user satisfaction. Targeted advertising, which serves ads to individuals who may exhibit interest in a particular advertising message, is therefore strongly required. The key to precise user targeting lies in learning accurate user and ad representations in the embedding space. Most previous studies have limited representation learning to the Euclidean space, but recent studies have suggested hyperbolic manifold learning for the distinct projection of complex network properties emerging from real-world datasets such as social networks, recommender systems, and advertising. We propose a framework that can effectively learn the hierarchical structure of users and ads in hyperbolic space, and extend it to multi-manifold learning. Our method constructs multiple hyperbolic manifolds with learnable curvatures and maps the representations of users and ads onto each manifold. The origin of each manifold is set to the centroid of each user cluster. The user preference for each ad is estimated using the distance between the two entities in hyperbolic space, and the final prediction is determined by aggregating the values calculated from the learned manifolds. We evaluate our method on public benchmark datasets and on LINE, a large-scale commercial messenger system, and demonstrate its effectiveness through improved performance.
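A rough sketch of the scoring step described above: distances on several Poincaré balls with learnable curvatures are aggregated into a preference score. For simplicity the sketch reuses one Euclidean embedding for all manifolds, aggregates by a plain mean, and omits the cluster-centroid origins; these simplifications, and all names and constants, are assumptions rather than the paper's implementation.

```python
# Minimal sketch: user-ad preference from distances on multiple Poincaré balls
# with learnable (positive) curvatures, aggregated into a single score.
import torch
import torch.nn as nn

def poincare_distance(x, y, c):
    """Distance on a Poincaré ball of curvature -c (c > 0)."""
    sq = torch.sum((x - y) ** 2, dim=-1)
    x2 = torch.sum(x ** 2, dim=-1)
    y2 = torch.sum(y ** 2, dim=-1)
    denom = (1 - c * x2).clamp(min=1e-5) * (1 - c * y2).clamp(min=1e-5)
    arg = 1 + 2 * c * sq / denom
    return torch.acosh(arg.clamp(min=1 + 1e-7)) / torch.sqrt(c)

class MultiManifoldScorer(nn.Module):
    def __init__(self, num_manifolds: int = 3):
        super().__init__()
        # One learnable curvature per manifold, kept positive via softplus.
        self.raw_c = nn.Parameter(torch.zeros(num_manifolds))

    def forward(self, user: torch.Tensor, ad: torch.Tensor) -> torch.Tensor:
        c = nn.functional.softplus(self.raw_c) + 1e-4
        # Smaller distance = stronger predicted preference; aggregate by mean.
        dists = torch.stack([poincare_distance(user, ad, c[i]) for i in range(len(c))])
        return -dists.mean(dim=0)

# Toy usage: embeddings scaled to lie well inside the unit ball.
user, ad = 0.1 * torch.randn(4, 16), 0.1 * torch.randn(4, 16)
print(MultiManifoldScorer()(user, ad))
```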
Graphs, Entities, and Step Mixture
Shin, Kyuyong, Shin, Wonyoung, Ha, Jung-Woo, Kwon, Sunyoung
Existing approaches for graph neural networks commonly suffer from the oversmoothing issue, regardless of how neighborhoods are aggregated. Most methods also focus on transductive scenarios with fixed graphs, leading to poor generalization to unseen graphs. To address these issues, we propose a new graph neural network that considers both edge-based neighborhood relationships and node-based entity features, i.e., Graph Entities with Step Mixture via random walk (GESM). GESM employs a mixture of various propagation steps through random walk to alleviate the oversmoothing problem, attention to dynamically reflect interrelations depending on node information, and structure-based regularization to enhance embedding representations. Through extensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performance on eight benchmark graph datasets comprising transductive and inductive learning tasks. Furthermore, we empirically demonstrate the significance of considering global information.
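The step-mixture component can be sketched as propagating node features for several random-walk steps and mixing the intermediate representations with learned attention weights; the row-normalized propagation, the attention form, and the class name below are illustrative assumptions rather than the exact GESM architecture.

```python
# Minimal sketch of a step mixture: keep the representation after every
# random-walk propagation step and combine them with per-node attention,
# rather than using only the final step.
import torch
import torch.nn as nn

class StepMixture(nn.Module):
    def __init__(self, dim: int, steps: int = 4):
        super().__init__()
        self.steps = steps
        self.attn = nn.Linear(dim, 1)  # scores each step's representation

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        reps, h = [x], x
        for _ in range(self.steps):
            h = adj_norm @ h            # one random-walk propagation step
            reps.append(h)
        stack = torch.stack(reps, dim=1)                  # (N, steps+1, dim)
        weights = torch.softmax(self.attn(stack), dim=1)  # per-node mixture weights
        return (weights * stack).sum(dim=1)               # mix of all step depths

# Toy usage with a random row-stochastic adjacency.
n, d = 6, 8
adj_norm = torch.softmax(torch.randn(n, n), dim=-1)
print(StepMixture(d)(torch.randn(n, d), adj_norm).shape)
```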