LLMs Meet Cross-Modal Time Series Analytics: Overview and Directions
Chenxi Liu, Hao Miao, Cheng Long, Yan Zhao, Ziyue Li, Panos Kalnis
–arXiv.org Artificial Intelligence
Large Language Models (LLMs) have emerged as a promising paradigm for time series analytics, leveraging their large parameter counts and the shared sequential nature of textual and time series data. However, a cross-modality gap exists between time series and textual data: LLMs are pre-trained on textual corpora and are not inherently optimized for time series. In this tutorial, we provide an up-to-date overview of LLM-based cross-modal time series analytics. We introduce a taxonomy that classifies existing approaches into three groups based on their cross-modal modeling strategy, i.e., conversion, alignment, and fusion, and then discuss their applications across a range of downstream tasks. In addition, we summarize several open challenges. This tutorial aims to expand the practical application of LLMs to real-world problems in cross-modal time series analytics while balancing effectiveness and efficiency. Participants will gain a thorough understanding of current advancements, methodologies, and future research directions in cross-modal time series analytics.
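Of the three strategies in the taxonomy, conversion is the simplest to illustrate: a numeric time series is serialized into plain text so that a text-only LLM can consume it in a prompt, and the model's textual continuation is parsed back into numbers. The sketch below shows one such round trip; the formatting scheme and function names are illustrative assumptions, not a method prescribed by the tutorial.

```python
# Minimal sketch of the "conversion" strategy: serialize a numeric
# time series into a text prompt for a text-only LLM, then parse the
# model's textual continuation back into floats. The digit precision
# and comma-separated format are illustrative choices (assumptions).

def series_to_prompt(values, decimals=2):
    """Render a time series as a comma-separated string of numbers."""
    return ", ".join(f"{v:.{decimals}f}" for v in values)

def prompt_to_series(text):
    """Parse a comma-separated textual continuation back into floats."""
    return [float(tok) for tok in text.split(",") if tok.strip()]

# Example: build a forecasting prompt from a short history window.
history = [0.10, 0.12, 0.15, 0.11]
prompt = "Continue the following series:\n" + series_to_prompt(history)
```

In practice, alignment and fusion approaches instead map the raw series into the LLM's embedding space, avoiding the token-efficiency and precision limits of textual serialization.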
Jul-16-2025