Jeon, Sungho
REP: Resource-Efficient Prompting for On-device Continual Learning
Jeon, Sungho, Ma, Xinyue, Kim, Kwang In, Jeon, Myeongjae
On-device continual learning (CL) requires co-optimizing model accuracy and resource efficiency to be practical. This is extremely challenging: the method must preserve accuracy while learning new tasks from continuously drifting data, yet remain energy- and memory-efficient enough to deploy on real-world devices. Typically, a CL method uses one of two backbone networks: a CNN or a ViT. It is commonly believed that CNN-based CL excels in resource efficiency whereas ViT-based CL delivers superior model performance, making each option attractive for only one aspect. In this paper, we revisit this comparison using powerful pre-trained ViT models of various sizes, including ViT-Ti (5.8M parameters). Our detailed analysis reveals that many practical options exist today for making ViT-based methods more suitable for on-device CL, even when accuracy, energy, and memory are all considered. To further expand this impact, we introduce REP, which improves resource efficiency specifically for prompt-based rehearsal-free methods. Our key focus is avoiding catastrophic accuracy trade-offs while trimming computational and memory costs throughout training. We achieve this through swift prompt selection, which enhances input data using a carefully provisioned model, and through two novel algorithms, adaptive token merging (AToM) and adaptive layer dropping (ALD), that optimize the prompt-updating stage. In particular, AToM and ALD perform selective skipping along the data and model-layer dimensions without compromising task-specific features in vision transformer models. Extensive experiments on three image-classification datasets validate REP's superior resource efficiency over current state-of-the-art methods.
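To make the flavor of token merging and layer dropping concrete, the following is a minimal PyTorch sketch of the two generic ideas the abstract describes: merging near-duplicate tokens to shorten the sequence, and skipping later encoder layers once the representation stops changing. The function names, thresholds, and heuristics are illustrative assumptions, not the paper's actual AToM and ALD algorithms, which additionally protect task-specific features.

# Illustrative sketch only -- not the paper's AToM/ALD implementation.
import torch
import torch.nn.functional as F

def merge_similar_tokens(x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """x: (seq, dim). Average each adjacent token pair whose cosine
    similarity exceeds `threshold`, shortening the sequence and thus
    reducing attention cost in every later layer."""
    normed = F.normalize(x, dim=-1)
    out, i = [], 0
    while i < x.size(0):
        if i + 1 < x.size(0) and (normed[i] @ normed[i + 1]) > threshold:
            out.append((x[i] + x[i + 1]) / 2)  # merge near-duplicate tokens
            i += 2
        else:
            out.append(x[i])
            i += 1
    return torch.stack(out)

def encode_with_layer_dropping(layers, x: torch.Tensor, eps: float = 1e-2) -> torch.Tensor:
    """Apply `layers` (callables: tensor -> tensor), skipping a layer once
    the previous layer barely changed the representation -- a cheap proxy
    for deciding that deeper refinement adds little."""
    prev_change = float("inf")
    for layer in layers:
        if prev_change < eps:
            continue  # representation has converged; skip remaining work
        y = layer(x)
        prev_change = ((y - x).norm() / (x.norm() + 1e-8)).item()
        x = y
    return x

# Toy usage: 16 tokens of width 32 through four (trivial) layers.
tokens = merge_similar_tokens(torch.randn(16, 32))
layers = [torch.nn.Identity() for _ in range(4)]
print(encode_with_layer_dropping(layers, tokens).shape)

The sketch skips work along exactly the two dimensions the abstract names: the data dimension (fewer tokens) and the model-layer dimension (fewer layer evaluations).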
Attention or Convolution: Transformer Encoders in Audio Language Models for Inference Efficiency
Jeon, Sungho, Yeh, Ching-Feng, Inan, Hakan, Hsu, Wei-Ning, Rungta, Rashi, Mehdad, Yashar, Bikel, Daniel
In this paper, we show that a simple self-supervised pre-trained audio model can achieve inference efficiency comparable to that of more complicated pre-trained models built on speech transformer encoders. These speech transformers mix convolutional modules with self-attention modules and achieve state-of-the-art ASR performance with top efficiency. We first show that employing these speech transformers as an encoder also significantly improves the efficiency of pre-trained audio models. However, our study shows that comparable efficiency can be achieved with advanced self-attention alone. We demonstrate that this simpler approach is particularly beneficial when combined with low-bit neural-network weight quantization. We hypothesize that it avoids propagating errors between different quantized modules, unlike recent speech transformers that mix quantized convolution with quantized self-attention modules.
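The low-bit weight quantization the abstract refers to can be illustrated with a generic symmetric per-tensor fake-quantization routine. This is a hedged sketch of a standard technique, not the paper's pipeline; the layer, the 4-bit setting, and the function name are assumptions.

# Illustrative sketch only -- generic symmetric weight quantization,
# not the paper's quantization pipeline.
import torch

def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric per-tensor fake quantization: scale weights to a
    `bits`-bit signed integer grid, round, clamp, then rescale."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = w.abs().max() / qmax
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

# Quantize the weights of one (hypothetical) attention projection in place.
proj = torch.nn.Linear(64, 64)
with torch.no_grad():
    proj.weight.copy_(fake_quantize(proj.weight, bits=4))

x = torch.randn(1, 64)
print(proj(x).shape)  # per-weight rounding error is bounded by scale / 2

Each quantized module contributes a bounded rounding error of this kind; the abstract's hypothesis is that chaining heterogeneous quantized modules (convolution followed by self-attention) compounds these errors, whereas a homogeneous self-attention stack does not.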
Don’t Be Spoiled by Your Friends: Spoiler Detection in TV Program Tweets
Jeon, Sungho (The Attached Institute of Electronics and Telecommunications Research Institute); Kim, Sungchul (POSTECH); Yu, Hwanjo (POSTECH)
By providing a convenient means of accessing the Internet, smartphones have driven the rapid growth of Social Networking Services (SNSs) such as Twitter and have become a major platform for them. People can now conveniently check the SNS messages posted by their friends and followers on their smartphones. As a consequence, they are exposed to spoilers for the TV programs they follow. Two previous works have explored spoiler detection in text, though not in SNS messages: (1) a keyword matching method and (2) a machine-learning method based on Latent Dirichlet Allocation (LDA). The keyword matching method classifies most tweets as spoilers, hence its poor precision. The LDA-based method, although successful on long documents, works poorly on short texts such as tweets and classifies most tweets as non-spoilers. This paper presents four features that are significant for classifying spoiler tweets. Using these features, we classified spoiler tweets pertaining to a reality TV show (“Dancing with the Stars”). We experimentally compared our method with the previous ones, achieving substantially higher precision than the keyword-matching and LDA-based methods while maintaining comparable recall.
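Since the abstract does not enumerate the four features, the sketch below only illustrates the general setup of feature-based spoiler classification. The stand-in features (word count, URL presence, a crude past-tense cue, exclamation count), the tiny training set, and the logistic-regression classifier are all hypothetical choices, not the paper's.

# Illustrative sketch only -- hypothetical features, not the paper's four.
import re
from sklearn.linear_model import LogisticRegression

def tweet_features(text: str) -> list:
    """Map a tweet to a small hand-crafted feature vector."""
    return [
        len(text.split()),                          # tweet length in words
        int(bool(re.search(r"https?://", text))),   # contains a URL
        int(bool(re.search(r"\b\w+ed\b", text))),   # crude past-tense cue
        text.count("!"),                            # exclamation marks
    ]

tweets = ["Shawn was eliminated tonight!", "Can't wait for the show http://t.co/x"]
labels = [1, 0]                                     # 1 = spoiler, 0 = non-spoiler
clf = LogisticRegression().fit([tweet_features(t) for t in tweets], labels)
print(clf.predict([tweet_features("She got voted off!")]))

Because each tweet is only a short segment of text, compact hand-crafted features of this kind can separate spoilers where topic models such as LDA, which need longer documents to estimate topic mixtures, struggle.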