Kim, Sungwoong
VisEscape: A Benchmark for Evaluating Exploration-driven Decision-making in Virtual Escape Rooms
Lim, Seungwon, Kim, Sungwoong, Yu, Jihwan, Lee, Sungjae, Chung, Jiwan, Yu, Youngjae
Escape rooms present a unique cognitive challenge that demands exploration-driven planning: players must actively search their environment, continuously update their knowledge based on new discoveries, and connect disparate clues to determine which elements are relevant to their objectives. Motivated by this, we introduce VisEscape, a benchmark of 20 virtual escape rooms specifically designed to evaluate AI models under these challenging conditions, where success depends not only on solving isolated puzzles but also on iteratively constructing and refining spatial-temporal knowledge of a dynamically changing environment. On VisEscape, we observed that even state-of-the-art multimodal models generally fail to escape the rooms, showing considerable variation in their levels of progress and trajectories. To address this, we propose VisEscaper, which effectively integrates Memory, Feedback, and ReAct modules and delivers significant improvements, performing on average 3.7 times more effectively and 5.0 times more efficiently.
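As a rough illustration of how such an agent might be organized, the sketch below wires a memory store, environment feedback, and ReAct-style reasoning into a single exploration loop; the environment interface, the query_vlm function, and the memory format are hypothetical assumptions, not the VisEscaper implementation.

```python
# Hypothetical sketch of a Memory + Feedback + ReAct exploration loop.
# All interfaces (env, query_vlm) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Memory:
    discoveries: list = field(default_factory=list)  # clues, items, solved locks

    def summarize(self) -> str:
        return "\n".join(self.discoveries[-20:])      # recent spatial-temporal knowledge

def run_episode(env, query_vlm, max_steps=200):
    memory = Memory()
    for _ in range(max_steps):
        observation = env.render()                    # current room view (image + description)
        # ReAct: produce a reasoning trace and an executable action together
        thought, action = query_vlm(observation=observation, memory=memory.summarize())
        state, feedback = env.step(action)            # feedback: did the action change the room?
        memory.discoveries.append(f"{thought} | {action} -> {feedback}")
        if state == "escaped":
            return True
    return False
```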
EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild
Kim, Junhyeok, Kim, Min Soo, Chung, Jiwan, Cho, Jungbin, Kim, Jisoo, Kim, Sungwoong, Sim, Gyeongbo, Yu, Youngjae
Predicting when to initiate speech in real-world environments remains a fundamental challenge for conversational agents. We introduce EgoSpeak, a novel framework for real-time speech initiation prediction in egocentric streaming video. By modeling the conversation from the speaker's first-person viewpoint, EgoSpeak is tailored for human-like interactions in which a conversational agent must continuously observe its environment and dynamically decide when to talk. Our approach bridges the gap between simplified experimental setups and complex natural conversations by integrating four key capabilities: (1) first-person perspective, (2) RGB processing, (3) online processing, and (4) untrimmed video processing. We also present YT-Conversation, a diverse collection of in-the-wild conversational videos from YouTube, as a resource for large-scale pretraining. Experiments on EasyCom and Ego4D demonstrate that EgoSpeak outperforms random and silence-based baselines in real time. Our results also highlight the importance of multimodal input and context length in effectively deciding when to speak.
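A minimal sketch of the online decision loop implied by this setup might look like the following; the frame source, model interface, and decision threshold are placeholders for illustration, not the released EgoSpeak code.

```python
# Hypothetical online speech-initiation loop over an untrimmed egocentric stream.
import collections

def stream_decisions(frames, model, context_len=64, threshold=0.5):
    """Yield (frame_index, should_speak) decisions in real time."""
    context = collections.deque(maxlen=context_len)   # sliding window of recent RGB frames
    for idx, frame in enumerate(frames):
        context.append(frame)                         # only past/present frames, never future ones
        prob = model.predict_speak_probability(list(context))
        yield idx, prob > threshold
```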
Mol-LLM: Generalist Molecular LLM with Improved Graph Utilization
Lee, Chanhui, Song, Yuheon, Jeong, YongJun, Ko, Hanbum, Hormazabal, Rodrigo, Han, Sehui, Bae, Kyunghoon, Lim, Sungbin, Kim, Sungwoong
Recent advances in Large Language Models (LLMs) have motivated the development of general LLMs for molecular tasks. While several studies have demonstrated that fine-tuned LLMs can achieve impressive benchmark performance, they are far from genuine generalist molecular LLMs because they lack a fundamental understanding of molecular structure. Specifically, when given molecular task instructions, LLMs trained with naive next-token prediction assign similar likelihoods to original and negatively corrupted molecules, revealing a lack of the structural understanding that is crucial for reliable, general molecular LLMs. To overcome this limitation and obtain a true generalist molecular LLM, we introduce a novel multi-modal training method that combines thorough multi-modal instruction tuning with molecular structure preference optimization between chosen and rejected graphs. On various molecular benchmarks, the resulting generalist molecular LLM, Mol-LLM, achieves state-of-the-art performance among generalist LLMs on most tasks while surpassing or matching state-of-the-art specialist LLMs. Moreover, Mol-LLM shows superior generalization in reaction prediction tasks, demonstrating the benefit of molecular structure understanding for generalization.
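One way to read the "preference optimization between chosen and rejected graphs" is as a DPO-style objective over the log-likelihoods of the original molecule versus a corrupted one; the sketch below is that interpretation only, not Mol-LLM's released training code.

```python
# DPO-style sketch of molecular structure preference optimization (an assumed
# reading of the abstract, not the authors' exact loss): prefer the original
# graph over a negatively corrupted one, relative to a reference model.
import torch.nn.functional as F

def structure_preference_loss(policy_chosen_logp, policy_rejected_logp,
                              ref_chosen_logp, ref_rejected_logp, beta=0.1):
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Push the policy to assign relatively higher likelihood to the true molecule
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```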
CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction
Choi, Suhwan, Cho, Yongjun, Kim, Minchan, Jung, Jaeyoon, Joe, Myunchul, Park, Yubeen, Kim, Minseo, Kim, Sungwoong, Lee, Sungjae, Park, Hwiseong, Chung, Jiwan, Yu, Youngjae
Real-life robot navigation involves more than just reaching a destination; it requires optimizing movements while addressing scenario-specific goals. An intuitive way for humans to express these goals is through abstract cues like verbal commands or rough sketches. Such human guidance may lack details or be noisy. Nonetheless, we expect robots to navigate as intended. For robots to interpret and execute these abstract instructions in line with human expectations, they must share a common understanding of basic navigation concepts with humans. To this end, we introduce CANVAS, a novel framework that combines visual and linguistic instructions for commonsense-aware navigation. Its success is driven by imitation learning, enabling the robot to learn from human navigation behavior. We present COMMAND, a comprehensive dataset with human-annotated navigation results, spanning over 48 hours and 219 km, designed to train commonsense-aware navigation systems in simulated environments. Our experiments show that CANVAS outperforms the strong rule-based system ROS NavStack across all environments, demonstrating superior performance with noisy instructions. Notably, in the orchard environment, where ROS NavStack records a 0% total success rate, CANVAS achieves a total success rate of 67%. CANVAS also closely aligns with human demonstrations and commonsense constraints, even in unseen environments. Furthermore, real-world deployment of CANVAS showcases impressive Sim2Real transfer with a total success rate of 69%, highlighting the potential of learning from human demonstrations in simulated environments for real-world applications.
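The imitation-learning core described here can be pictured as instruction-conditioned behavior cloning; the sketch below uses a placeholder policy interface and hypothetical batch fields, not the actual CANVAS model.

```python
# Hypothetical behavior-cloning step for instruction-conditioned navigation.
import torch.nn.functional as F

def imitation_step(policy, batch, optimizer):
    # batch: camera images, a rough sketch or verbal command, and the human's chosen action
    logits = policy(batch["images"], batch["instruction"])
    loss = F.cross_entropy(logits, batch["human_action"])  # imitate the human demonstration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```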
Scalable Multi-Task Transfer Learning for Molecular Property Prediction
Lee, Chanhui, Jeong, Dae-Woong, Ko, Sung Moon, Lee, Sumin, Kim, Hyunseung, Yim, Soorin, Han, Sehui, Kim, Sungwoong, Lim, Sungbin
Molecules have many distinct properties whose importance and applications vary. In practice, labels for some properties are hard to obtain despite their practical importance. A common remedy for such data scarcity is transfer learning with models that generalize well, which relies on domain experts to design source and target tasks that share features. However, this approach has limitations: (i) accurately designing source-target task pairs is difficult given the large number of tasks, (ii) verifying many trial-and-error transfer designs incurs a heavy computational burden, and (iii) both of these constrain the potential of foundation modeling for multi-task molecular property prediction. We address the limitations of manually designed transfer learning via data-driven bi-level optimization. The proposed method enables scalable multi-task transfer learning for molecular property prediction by automatically obtaining optimal transfer ratios. Empirically, the proposed method improved the prediction performance of 40 molecular properties and accelerated training convergence.
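The bi-level optimization can be sketched as an inner model update on a ratio-weighted source loss followed by an outer update of the transfer ratios on the target task; the functional task_loss interface and single-step inner loop below are simplifying assumptions, not the paper's exact algorithm.

```python
# Simplified bi-level step for learning per-source-task transfer ratios.
import torch

def bilevel_step(model, transfer_logits, source_batches, target_batch,
                 inner_lr=1e-3, outer_lr=1e-2):
    ratios = torch.softmax(transfer_logits, dim=0)          # learnable transfer ratios
    # Inner step: virtual model update on the ratio-weighted source-task loss
    source_loss = sum(r * model.task_loss(b) for r, b in zip(ratios, source_batches))
    grads = torch.autograd.grad(source_loss, list(model.parameters()), create_graph=True)
    fast_weights = [w - inner_lr * g for w, g in zip(model.parameters(), grads)]
    # Outer step: evaluate the target task with the virtual weights and
    # backpropagate through the inner update into the transfer ratios
    target_loss = model.task_loss(target_batch, params=fast_weights)  # hypothetical functional call
    ratio_grad, = torch.autograd.grad(target_loss, transfer_logits)
    transfer_logits.data.add_(ratio_grad, alpha=-outer_lr)
    return target_loss.item()
```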
Hexa: Self-Improving for Knowledge-Grounded Dialogue System
Jo, Daejin, Nam, Daniel Wontae, Han, Gunsoo, On, Kyoung-Woon, Kwon, Taehwan, Rho, Seungeun, Kim, Sungwoong
A common practice in knowledge-grounded dialogue generation is to explicitly utilize intermediate steps (e.g., web-search, memory retrieval) with modular approaches. However, data for such steps are often inaccessible compared to those of dialogue responses as they are unobservable in an ordinary dialogue. To fill in the absence of these data, we develop a self-improving method to improve the generative performances of intermediate steps without the ground truth data. In particular, we propose a novel bootstrapping scheme with a guided prompt and a modified loss function to enhance the diversity of appropriate self-generated responses. Through experiments on various benchmark datasets, we empirically demonstrate that our method successfully leverages a self-improving mechanism in generating intermediate and final responses and improves the performances on the task of knowledge-grounded dialogue generation.
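The bootstrapping scheme can be pictured roughly as follows: sample intermediate steps with a guided prompt that hints at the gold response, keep self-generated responses that are close enough to the gold, and fine-tune on the collected data. The model interface and the token-overlap filter below are illustrative assumptions, not Hexa's exact procedure or loss.

```python
# Hypothetical sketch of one self-improving (bootstrapping) round.
def token_f1(pred, gold):
    p_tok, g_tok = pred.split(), gold.split()
    common = len(set(p_tok) & set(g_tok))
    if common == 0:
        return 0.0
    precision, recall = common / len(p_tok), common / len(g_tok)
    return 2 * precision * recall / (precision + recall)

def bootstrap_round(model, dialogues, num_samples=4, f1_threshold=0.4):
    new_data = []
    for d in dialogues:
        for _ in range(num_samples):
            # Guided prompt: include the gold response as a hint to steer sampling
            # of appropriate intermediate steps (e.g., search queries, memory lookups)
            steps = model.sample_intermediate(d.context, hint=d.gold_response)
            response = model.respond(d.context, steps)
            if token_f1(response, d.gold_response) >= f1_threshold:
                new_data.append((d.context, steps, response))
    model.finetune(new_data)   # a modified loss would reward diverse yet appropriate steps
    return model
```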
Effortless Integration of Memory Management into Open-Domain Conversation Systems
Choi, Eunbi, On, Kyoung-Woon, Han, Gunsoo, Kim, Sungwoong, Nam, Daniel Wontae, Jo, Daejin, Rho, Seung Eun, Kwon, Taehwan, Seo, Minjoon
Open-domain conversation systems integrate multiple conversation skills into a single system through a modular approach. One limitation of such systems, however, is the lack of an ability to manage external memory. In this paper, we propose a simple method to improve BlenderBot3 by integrating memory management into it. Since no training data exists for this purpose, we propose an automated dataset-creation method for memory management. Our method 1) incurs little data-construction cost, 2) does not affect performance on other tasks, and 3) reduces external memory usage. We show that our proposed model, BlenderBot3-M^3, which is multi-task trained with memory management, outperforms BlenderBot3 with a relative 4% gain in F1 score.
MAGVLT: Masked Generative Vision-and-Language Transformer
Kim, Sungwoong, Jo, Daejin, Lee, Donghoon, Kim, Jongmin
While generative modeling on multimodal image-text data has been actively developed with large-scale paired datasets, there have been limited attempts to generate both image and text data with a single model, rather than generating one fixed modality conditioned on the other. In this paper, we explore a unified generative vision-and-language (VL) model that can produce both images and text sequences. In particular, we propose a generative VL transformer based on non-autoregressive mask prediction, named MAGVLT, and compare it with an autoregressive generative VL transformer (ARGVLT). In comparison to ARGVLT, the proposed MAGVLT enables bidirectional context encoding, fast decoding via parallel token predictions with iterative refinement, and extended editing capabilities such as image and text infilling. For rigorous training of our MAGVLT with image-text pairs from scratch, we combine the image-to-text, text-to-image, and joint image-and-text mask prediction tasks. Moreover, we devise two additional tasks based on step-unrolled mask prediction and selective prediction on the mixture of two image-text pairs. Experimental results on various downstream generation tasks of VL benchmarks show that our MAGVLT outperforms ARGVLT by a large margin while also providing a significant inference speedup. Notably, MAGVLT achieves competitive results on both zero-shot image-to-text and text-to-image generation on MS-COCO with a single moderate-sized model (fewer than 500M parameters), even without using monomodal data or networks.
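The non-autoregressive decoding described here can be sketched as iterative parallel mask prediction with confidence-based re-masking (in the spirit of MaskGIT-style refinement); the transformer interface and the linear unmasking schedule below are illustrative assumptions, not the MAGVLT release.

```python
# Hypothetical iterative mask-prediction decoding loop.
import torch

def iterative_mask_decode(transformer, cond_tokens, seq_len, mask_id, steps=8):
    tokens = torch.full((1, seq_len), mask_id)                  # start fully masked
    for t in range(steps):
        logits = transformer(cond_tokens, tokens)               # predict all positions in parallel
        probs, preds = logits.softmax(-1).max(-1)
        still_masked = tokens.eq(mask_id)
        tokens = torch.where(still_masked, preds, tokens)       # fill in masked positions
        # Re-mask the least confident predictions, keeping more tokens each iteration
        num_keep = int(seq_len * (t + 1) / steps)
        conf = torch.where(still_masked, probs, torch.ones_like(probs))
        remask = conf.argsort(dim=-1)[:, : seq_len - num_keep]
        tokens.scatter_(1, remask, mask_id)
    return tokens
```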
Contrastive Regularization for Semi-Supervised Learning
Lee, Doyup, Kim, Sungwoong, Kim, Ildoo, Cheon, Yeongjae, Cho, Minsu, Han, Wook-Shin
Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations for high performance. In this study, we show that consistency regularization restricts the propagation of labeling information because samples with unconfident pseudo-labels are excluded from model updates. We then propose contrastive regularization, which improves both the efficiency and the accuracy of consistency regularization via well-clustered features of unlabeled data. Specifically, after strongly augmented samples are assigned to clusters by their pseudo-labels, our contrastive regularization updates the model so that features with confident pseudo-labels pull together features in the same cluster while pushing away features in different clusters. As a result, the information in confident pseudo-labels is effectively propagated to more unlabeled samples during training through the well-clustered features. On semi-supervised learning benchmarks, our contrastive regularization improves upon previous consistency-based methods and achieves state-of-the-art results, especially with fewer training iterations. Our method also shows robust performance on open-set semi-supervised learning, where unlabeled data include out-of-distribution samples.
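A simplified reading of this loss can be sketched as a supervised-contrastive objective over pseudo-label clusters, with only confident samples acting as anchors; the function below is an illustrative approximation, not the authors' exact formulation.

```python
# Sketch of cluster-based contrastive regularization over unlabeled features.
import torch
import torch.nn.functional as F

def contrastive_regularization(features, pseudo_labels, confidence,
                               threshold=0.95, temperature=0.1):
    z = F.normalize(features, dim=1)                   # (N, D) features of strongly augmented samples
    sim = z @ z.t() / temperature                      # pairwise similarities
    sim.fill_diagonal_(-1e9)                           # exclude self-pairs from the softmax
    same_cluster = pseudo_labels[:, None].eq(pseudo_labels[None, :]).float()
    same_cluster.fill_diagonal_(0)                     # a sample is not its own positive
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = same_cluster.sum(1).clamp(min=1)
    loss_per_anchor = -(same_cluster * log_prob).sum(1) / positives
    confident = (confidence > threshold).float()       # only confident pseudo-labels act as anchors
    return (confident * loss_per_anchor).sum() / confident.sum().clamp(min=1)
```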
Hybrid Generative-Contrastive Representation Learning
Kim, Saehoon, Kim, Sungwoong, Lee, Juho
Unsupervised representation learning has recently received considerable interest due to its strong generalizability, achieved by effectively leveraging large-scale unlabeled data. There are two prevalent approaches: contrastive learning, which learns representations from instance-wise discrimination tasks, and generative pre-training, which learns them by estimating the likelihood. These seemingly orthogonal approaches have their own strengths and weaknesses. Contrastive learning tends to extract semantic information and discard details irrelevant for classifying objects, making the representations effective for discriminative tasks while degrading robustness to out-of-distribution data. On the other hand, generative pre-training directly estimates the data distribution, so the representations tend to be robust but not optimal for discriminative tasks. In this paper, we show that we can achieve the best of both worlds with a hybrid training scheme. Specifically, we demonstrate that a transformer-based encoder-decoder architecture trained with both contrastive and generative losses can learn highly discriminative and robust representations without hurting the generative performance. We extensively validate our approach on various tasks.
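A minimal sketch of such a hybrid objective is shown below, combining an InfoNCE-style contrastive term on encoder features with a token-likelihood generative term from the decoder; the encoder-decoder interface and discrete-token inputs are assumptions for illustration, not the paper's exact architecture.

```python
# Hypothetical hybrid objective: contrastive loss on encoder features plus a
# generative (token-likelihood) loss from the decoder.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # positives lie on the diagonal
    targets = torch.arange(len(z1), device=z1.device)
    return F.cross_entropy(logits, targets)

def hybrid_loss(model, tokens_view1, tokens_view2, lam=1.0):
    z1, z2 = model.encode(tokens_view1), model.encode(tokens_view2)
    contrastive = info_nce(z1, z2)                     # instance-wise discrimination
    recon_logits = model.decode(z1, tokens_view1)      # per-token likelihood of the input
    generative = F.cross_entropy(recon_logits.flatten(0, 1), tokens_view1.flatten())
    return contrastive + lam * generative
```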