Unhackable Temporal Rewarding for Scalable Video MLLMs
En Yu, Kangheng Lin, Liang Zhao, Yana Wei, Zining Zhu, Haoran Wei, Jianjian Sun, Zheng Ge, Xiangyu Zhang, Jingyu Wang, Wenbing Tao
arXiv.org Artificial Intelligence
In the pursuit of superior video-processing MLLMs, we have encountered a perplexing paradox: the "anti-scaling law", where more data and larger models lead to worse performance. This study unmasks the culprit: "temporal hacking", a phenomenon where models take a shortcut by fixating on a few select frames, missing the full video narrative. In this work, we systematically establish a theory of temporal hacking: defining it from a reinforcement learning perspective, introducing the Temporal Perplexity (TPL) score to assess this misalignment, and proposing the Unhackable Temporal Rewarding (UTR) framework to mitigate temporal hacking. Both theoretically and empirically, TPL proves to be a reliable indicator of temporal modeling quality, correlating strongly with frame activation patterns. Extensive experiments show that UTR not only counters temporal hacking but significantly elevates video comprehension capabilities. This work advances video-AI systems and also illuminates the critical importance of aligning proxy rewards with true objectives in MLLM development.
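The abstract does not give the TPL formula, but since it ties the score to frame activation patterns, one plausible reading is the perplexity of a model's attention mass over frames: broad coverage yields high perplexity, while fixating on a few frames (temporal hacking) drives it toward 1. The sketch below is an illustration under that assumption; the function name `temporal_perplexity` and the attention-based formulation are ours, not the paper's definition.

```python
import numpy as np

def temporal_perplexity(frame_attention: np.ndarray) -> float:
    """Illustrative TPL-style score (assumed formulation, not the paper's):
    perplexity of the model's attention distribution over video frames.
    Uniform attention over N frames gives a score near N; fixating on a
    single frame gives a score near 1.
    """
    # Normalize raw attention mass into a probability distribution over frames.
    p = frame_attention / frame_attention.sum()
    # Shannon entropy in nats; clip to avoid log(0).
    entropy = -np.sum(p * np.log(np.clip(p, 1e-12, None)))
    return float(np.exp(entropy))

# A model that fixates on one frame vs. one that attends across the video.
hacked = np.array([0.95, 0.01, 0.01, 0.01, 0.01, 0.01])
healthy = np.ones(6) / 6
print(temporal_perplexity(hacked))   # ~1.3: temporal hacking
print(temporal_perplexity(healthy))  # 6.0: full-video coverage
```

Under this reading, a low score flags the misalignment the paper describes: the proxy reward is satisfied by a handful of frames even though the true objective requires the whole narrative.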
Feb-17-2025
- Country:
  - Europe > Switzerland (0.28)
- Genre:
  - Research Report (0.82)
- Industry:
  - Education (0.46)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks > Deep Learning (0.94)
      - Reinforcement Learning (0.66)
    - Natural Language
      - Chatbot (0.94)
      - Large Language Model (1.00)
    - Representation & Reasoning (1.00)
    - Vision (1.00)