Zero-Shot Temporal Interaction Localization for Egocentric Videos
Erhang Zhang, Junyi Ma, Yin-Dong Zheng, Yixuan Zhou, Hesheng Wang
Abstract-- Locating human-object interaction (HOI) actions within video serves as the foundation for multiple downstream tasks, such as human behavior analysis and human-robot skill transfer. Current temporal action localization methods typically rely on annotated action and object categories of interactions for optimization, which leads to domain bias and low deployment efficiency. Although some recent works have achieved zero-shot temporal action localization (ZS-TAL) with large vision-language models (VLMs), their coarse-grained estimations and open-loop pipelines hinder further performance improvements for temporal interaction localization (TIL). To address these issues, we propose a novel zero-shot TIL approach dubbed EgoLoc to locate the timings of grasp actions for human-object interaction in egocentric videos. EgoLoc introduces a self-adaptive sampling strategy to generate reasonable visual prompts for VLM reasoning. In addition, EgoLoc generates closed-loop feedback from visual and dynamic cues to further refine the localization results. Comprehensive experiments on a publicly available dataset and our newly proposed benchmark demonstrate that EgoLoc achieves better temporal interaction localization for egocentric videos compared to state-of-the-art baselines. We will release our code and relevant data as open-source at https://github.com/IRMVLab/EgoLoc.
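The abstract only names the components of the pipeline (self-adaptive sampling, VLM-based visual prompting, and closed-loop refinement from visual and dynamic cues). As a rough illustration of that idea, the following is a minimal Python sketch of an adaptive-sampling plus closed-loop VLM localization loop. All helper names (adaptive_sample, query_vlm, dynamic_score), the Gaussian sampling scheme, and the stopping criterion are hypothetical assumptions, not the EgoLoc implementation.

```python
# Hypothetical sketch of a zero-shot temporal interaction localization loop.
# None of these helpers come from the EgoLoc codebase; they only illustrate
# the pipeline described in the abstract: self-adaptive sampling -> VLM
# reasoning on visual prompts -> closed-loop refinement from dynamic cues.

from typing import List
import numpy as np


def adaptive_sample(num_frames: int, center: float, spread: float, k: int = 8) -> List[int]:
    """Sample k frame indices, concentrated around the current grasp estimate."""
    idx = np.clip(np.random.normal(center, spread, size=k), 0, num_frames - 1)
    return sorted(set(int(i) for i in idx))


def localize_grasp(video_frames: np.ndarray,
                   query_vlm,        # callable: (frames, prompt) -> candidate frame index (float)
                   dynamic_score,    # callable: frame index -> hand/object motion cue in [0, 1]
                   max_rounds: int = 5) -> float:
    """Iteratively refine the estimated grasp timestamp with VLM + dynamic feedback."""
    n = len(video_frames)
    center, spread = n / 2.0, n / 4.0          # start from an uninformed estimate

    for _ in range(max_rounds):
        frame_ids = adaptive_sample(n, center, spread)
        # Visual prompt: a handful of sampled frames plus a textual question for the VLM.
        candidate = query_vlm([video_frames[i] for i in frame_ids],
                              "At which of these frames does the hand grasp the object?")
        # Closed-loop feedback: accept the VLM answer only if a dynamic cue
        # (e.g., a hand-velocity minimum) agrees; otherwise resample more tightly.
        if dynamic_score(int(candidate)) > 0.8:
            return candidate
        center, spread = candidate, spread * 0.5   # shrink the search window

    return center
```

The design choice illustrated here is the closed loop: instead of a single open-loop VLM query, the estimate is repeatedly checked against a dynamic cue and the sampling window is narrowed, which is the behavior the abstract attributes to EgoLoc (under the hypothetical interfaces assumed above).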
arXiv.org Artificial Intelligence
Nov-17-2025