To See or To Read: User Behavior Reasoning in Multimodal LLMs
Tianning Dong, Luyi Ma, Varun Vasudevan, Jason Cho, Sushant Kumar, Kannan Achan
Multimodal Large Language Models (MLLMs) are reshaping how modern agentic systems reason over sequential user-behavior data. However, whether textual or image representations of user-behavior data are more effective at maximizing MLLM performance remains underexplored. We present BehaviorLens, a systematic benchmarking framework for assessing modality trade-offs in user-behavior reasoning across six MLLMs, representing transaction data as (1) a text paragraph, (2) a scatter plot, and (3) a flowchart. Using a real-world purchase-sequence dataset, we find that representing the data as images improves MLLMs' next-purchase prediction accuracy by 87.5% over an equivalent textual representation, at no additional computational cost.
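The abstract does not include the framework's code; as a rough illustration only, a minimal Python sketch of the three input encodings it compares might look like the following. The purchase items, dates, prompt wording, and output file names are all invented for the example, and the plotting choices are assumptions rather than BehaviorLens's actual rendering.

```python
# A minimal sketch (not the authors' code) of the three encodings the abstract
# describes: the same purchase sequence rendered as (1) a text paragraph,
# (2) a scatter plot, and (3) a flowchart-style image.
from datetime import date
import matplotlib.pyplot as plt

# Hypothetical purchase sequence: (timestamp, item) pairs.
purchases = [
    (date(2025, 1, 3), "running shoes"),
    (date(2025, 1, 18), "water bottle"),
    (date(2025, 2, 2), "fitness tracker"),
    (date(2025, 2, 20), "protein powder"),
]
items = [item for _, item in purchases]

# (1) Text paragraph: serialize the sequence into one prompt-ready string.
paragraph = "The user purchased " + ", then ".join(
    f"{item} on {ts.isoformat()}" for ts, item in purchases
) + ". What is the user most likely to purchase next?"

# (2) Scatter plot: purchase dates on the x-axis, items on the y-axis.
fig, ax = plt.subplots(figsize=(6, 3))
ax.scatter([ts for ts, _ in purchases], range(len(purchases)))
ax.set_yticks(range(len(purchases)), labels=items)
ax.set_xlabel("purchase date")
ax.set_title("User purchase sequence")
fig.savefig("sequence_scatter.png", bbox_inches="tight")

# (3) Flowchart: item boxes connected by arrows in purchase order, drawn
# with plain matplotlib annotations rather than a dedicated graph library.
fig, ax = plt.subplots(figsize=(8, 2))
ax.axis("off")
for i, item in enumerate(items):
    ax.annotate(item, xy=(i, 0), ha="center", va="center",
                bbox=dict(boxstyle="round", fc="white"))
    if i:  # arrow from the previous item to this one
        ax.annotate("", xy=(i - 0.35, 0), xytext=(i - 0.65, 0),
                    arrowprops=dict(arrowstyle="->"))
ax.set_xlim(-0.5, len(items) - 0.5)
ax.set_ylim(-1, 1)
fig.savefig("sequence_flowchart.png", bbox_inches="tight")

print(paragraph)  # the textual variant; the two PNGs are the image variants
```

Each variant would then be sent to the MLLMs under test, as text in the prompt or as an attached image, with prediction accuracy compared across the three representations.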
arXiv.org Artificial Intelligence
Nov-7-2025