LAMPER: LanguAge Model and Prompt EngineeRing for zero-shot time series classification
Du, Zhicheng; Xie, Zhaotian; Tong, Yan; Qin, Peiwu
This study constructs the LanguAge Model with Prompt EngineeRing (LAMPER) framework, designed to systematically evaluate the adaptability of pre-trained language models (PLMs) to diverse prompts and their integration in zero-shot time series (TS) classification. Our findings indicate that the feature representation capacity of LAMPER is influenced by the maximum input token threshold imposed by PLMs.

The exploration of TS-based tasks constitutes a research-intensive domain with wide-ranging implications in diverse professional fields, including healthcare, finance, and energy (Zhang et al., 2022; Zheng et al., 2023; Santoro et al., 2023). Within the realm of natural language processing (NLP), the landscape is marked by the rapid evolution of PLMs and prompt engineering (Min et al., 2023; Wei et al., 2022). These advancements underscore the capacity of such models to adeptly execute an extensive array of tasks, particularly under few-shot or even zero-shot conditions (Brown et al., 2020; Webson & Pavlick, 2022).
arXiv.org Artificial Intelligence
Mar-23-2024
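
The abstract's point about the maximum input token threshold can be made concrete with a minimal sketch of prompt-based zero-shot TS classification. The sketch below assumes a BERT-style encoder loaded through Hugging Face Transformers, a naive numeric-serialization prompt, [CLS] pooling, and cosine similarity against textual label descriptions; none of these choices are taken from the LAMPER paper, which combines several prompt designs, so treat this purely as an illustration of the general setup.

# Hedged sketch: zero-shot time-series classification with a frozen PLM.
# NOTE: the prompt template, pooling choice, and label names below are
# illustrative assumptions, not the prompt designs used by LAMPER itself.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"          # assumption: any encoder-style PLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()


def embed(text: str) -> torch.Tensor:
    """Encode text with the frozen PLM; input beyond the model's maximum
    token threshold (512 for BERT) is truncated, which is the bottleneck
    the abstract refers to for long serialized series."""
    inputs = tokenizer(text, return_tensors="pt",
                       truncation=True, max_length=tokenizer.model_max_length)
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[:, 0]     # [CLS] pooling (one common choice)


def classify(series: list[float], label_names: list[str]) -> str:
    """Serialize the series into a textual prompt, embed it, and pick the
    label whose textual description is closest in embedding space."""
    # Assumption: a simple numeric-serialization prompt; a long series will
    # exceed the token limit and lose information to truncation.
    prompt = "Time series values: " + ", ".join(f"{v:.2f}" for v in series)
    series_emb = embed(prompt)
    label_embs = torch.cat([embed(f"A time series of class {name}.")
                            for name in label_names])
    sims = torch.nn.functional.cosine_similarity(series_emb, label_embs)
    return label_names[int(sims.argmax())]


# Example usage with made-up labels and data.
print(classify([0.1, 0.5, 0.9, 0.4], ["normal", "anomalous"]))

Because the serialized values are truncated at the model's maximum input length, only the opening portion of a long series reaches the encoder, which is one concrete way the token threshold can cap the feature representation available for classification.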