


MomentDiff: Generative Video Moment Retrieval from Random to Real

Neural Information Processing Systems

To achieve this goal, we provide a generative diffusion-based framework called MomentDiff, which simulates a typical human retrieval process from random browsing to gradual localization. Specifically, we first diffuse the real span to random noise, and learn to denoise the random noise to the original span with the guidance of similarity between text and video.
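The "random to real" idea can be sketched as a standard forward diffusion over a normalized (center, width) span: the real span is progressively corrupted toward Gaussian noise, and a trained denoiser would invert this process guided by text-video similarity. The schedule, step count, and numbers below are illustrative assumptions, not MomentDiff's exact configuration:

```python
import numpy as np

# Hedged sketch: a ground-truth temporal span (center, width), normalized
# to [0, 1], is diffused toward Gaussian noise. A denoiser (not shown)
# would learn to invert this. Schedule and constants are assumptions.

T = 50                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.2, T)         # linear noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def diffuse(span, t, rng):
    """Forward process: x_t = sqrt(a_bar_t)*x_0 + sqrt(1 - a_bar_t)*eps."""
    eps = rng.standard_normal(span.shape)
    return np.sqrt(alphas_bar[t]) * span + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

rng = np.random.default_rng(0)
x0 = np.array([0.45, 0.20])               # (center, width) of the real moment
x_early, _ = diffuse(x0, 1, rng)          # barely corrupted, near the real span
x_late, _ = diffuse(x0, T - 1, rng)       # nearly pure noise
```

Early timesteps stay close to the annotated span while late timesteps are dominated by noise, which is what lets inference start from a purely random span and localize gradually.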




SMART: Shot-Aware Multimodal Video Moment Retrieval with Audio-Enhanced MLLM

Yu, An, Lu, Weiheng, Li, Jian, Zhang, Zhenfei, Shen, Yunhang, Ye, Felix X.-F., Chang, Ming-Ching

arXiv.org Artificial Intelligence

Abstract--Video Moment Retrieval is a task in video understanding that aims to localize a specific temporal segment in an untrimmed video based on a natural language query. Despite recent progress in moment retrieval from videos using both traditional techniques and Multimodal Large Language Models (MLLM), most existing methods still rely on coarse temporal understanding and a single visual modality, limiting performance on complex videos. To address this, we introduce Shot-aware Multimodal Audio-enhanced Retrieval of Temporal Segments (SMART), an MLLM-based framework that integrates audio cues and leverages shot-level temporal structure. SMART enriches multimodal representations by combining audio and visual features while applying Shot-aware Token Compression, which selectively retains high-information tokens within each shot to reduce redundancy and preserve fine-grained temporal details. We also refine prompt design to better utilize audio-visual cues. Evaluations on Charades-STA and QVHighlights show that SMART achieves significant improvements over state-of-the-art methods, including a 1.61% increase in R1@0.5 and a 2.59% gain in R1@0.7 on Charades-STA.

Index Terms--Video Moment Retrieval, Temporal Localization, Audio-Visual Representation Learning, Shot-aware Token Compression, Shot Boundary Detection, Temporal Reasoning, Multimodal Large Language Models (MLLM), Video Understanding.

With the rapid growth of video content shared and created on the internet and social media, the ability to efficiently analyze such content has become increasingly important. One key task in this domain is moment retrieval: the process of identifying the specific temporal segment within a video that best corresponds to a given natural language query.
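The per-shot selection behind Shot-aware Token Compression can be illustrated with a small sketch: tokens are scored within each shot and the top-k per shot are kept, so every shot stays represented even when it is visually quiet. The scoring function (feature norm) and k are stand-in assumptions, not SMART's actual design:

```python
import numpy as np

def compress_per_shot(tokens, shot_ids, k):
    """Keep the k highest-scoring tokens inside each shot.

    tokens:   (N, D) frame/patch features
    shot_ids: (N,) shot index per token, from a shot-boundary detector
    k:        tokens retained per shot
    Returns the sorted indices of the kept tokens.
    """
    scores = np.linalg.norm(tokens, axis=1)       # proxy for "information"
    keep = []
    for s in np.unique(shot_ids):
        idx = np.where(shot_ids == s)[0]
        top = idx[np.argsort(scores[idx])[-k:]]   # top-k within this shot only
        keep.extend(top.tolist())
    return sorted(keep)

rng = np.random.default_rng(1)
feats = rng.standard_normal((12, 4))              # 12 tokens across 3 shots
shots = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
kept = compress_per_shot(feats, shots, k=2)       # 6 of 12 tokens survive
```

The contrast with global top-k selection is that a globally uninteresting shot can still contribute tokens, which is what preserves fine-grained temporal structure.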


MomentDiff: Generative Video Moment Retrieval from Random to Real (Supplementary Material)

Neural Information Processing Systems

The dataset contains a total of 10,310 queries with 18,367 annotated moments; each video is annotated with an average of 2.4 moments. We then design the Charades-STA-Mom dataset based on the span's end time. Algorithm 1 provides the pseudo-code of MomentDiff training in a PyTorch-like style. Inference efficiency is critical for machine learning models; we report R1@0.5, R1@0.7, and mAP. Figure 1 shows the performance fluctuation of the model on the Charades-STA dataset. We organize experiments across different feature settings (GloVe; SF+C, C). We adopt DDIM as the default sampling technique.
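Since DDIM is the default sampler, a deterministic (eta = 0) DDIM update for span denoising can be sketched as follows. The toy "denoiser" always predicts a fixed target span; a real model would condition on the noisy span, the timestep, and text-video cues. Schedule, step sub-sequence, and denoiser are assumptions for illustration only:

```python
import numpy as np

T = 50
betas = np.linspace(1e-4, 0.2, T)          # assumed linear schedule
abar = np.cumprod(1.0 - betas)

def toy_denoiser(x_t, t):
    # Stand-in: a trained model would predict x0 from (x_t, t, video, text).
    return np.array([0.45, 0.20])

def ddim_step(x_t, t, t_prev):
    """One deterministic (eta = 0) DDIM update from step t down to t_prev."""
    x0_hat = toy_denoiser(x_t, t)
    eps_hat = (x_t - np.sqrt(abar[t]) * x0_hat) / np.sqrt(1.0 - abar[t])
    return np.sqrt(abar[t_prev]) * x0_hat + np.sqrt(1.0 - abar[t_prev]) * eps_hat

rng = np.random.default_rng(0)
x = rng.standard_normal(2)                 # start from a purely random span
steps = list(range(T - 1, 0, -10))         # short DDIM sub-sequence: 5 updates
for t, t_prev in zip(steps, steps[1:] + [0]):
    x = ddim_step(x, t, t_prev)
# With a perfect x0 predictor, x lands on the target span.
```

The appeal of DDIM here is exactly this step-skipping: a handful of deterministic updates replaces the full step-by-step reverse chain, which matters when inference efficiency is a reported concern.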





ResidualViT for Efficient Temporally Dense Video Encoding

Soldan, Mattia, Heilbron, Fabian Caba, Ghanem, Bernard, Sivic, Josef, Russell, Bryan

arXiv.org Artificial Intelligence

Several video understanding tasks, such as natural language temporal video grounding, temporal activity localization, and audio description generation, require "temporally dense" reasoning over frames sampled at high temporal resolution. However, computing frame-level features for these tasks is computationally expensive given the temporal resolution requirements. In this paper, we make three contributions to reduce the cost of computing features for temporally dense tasks. First, we introduce a vision transformer (ViT) architecture, dubbed ResidualViT, that leverages the large temporal redundancy in videos to efficiently compute temporally dense frame-level features. Our architecture incorporates (i) learnable residual connections that ensure temporal consistency across consecutive frames and (ii) a token reduction module that enhances processing speed by selectively discarding temporally redundant information while reusing weights of a pretrained foundation model. Second, we propose a lightweight distillation strategy to approximate the frame-level features of the original foundation model. Finally, we evaluate our approach across four tasks and five datasets, in both zero-shot and fully supervised settings, demonstrating significant reductions in computational cost (up to 60%) and improvements in inference speed (up to 2.5x faster), all while closely approximating the accuracy of the original foundation model.
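The temporal-redundancy argument can be made concrete with a toy caching scheme: encode a keyframe in full, then for the next frame re-encode only the patch tokens whose content changed noticeably, reusing the keyframe's features elsewhere. This is a simplified stand-in for ResidualViT's learnable residual connections and token reduction module, not the actual architecture:

```python
import numpy as np

def encode(patches):
    # Stand-in "ViT": any per-token feature map works for the illustration.
    return np.tanh(patches)

def residual_encode(prev_patches, prev_feats, patches, tau=0.1):
    """Re-encode only tokens whose patch content moved more than tau."""
    delta = np.linalg.norm(patches - prev_patches, axis=1)
    changed = delta > tau                      # tokens worth recomputing
    feats = prev_feats.copy()                  # reuse cached features
    feats[changed] = encode(patches[changed])  # recompute only changed tokens
    return feats, int(changed.sum())

rng = np.random.default_rng(0)
frame0 = rng.standard_normal((16, 8))          # 16 patch tokens, dim 8
frame1 = frame0.copy()
frame1[:4] += 1.0                              # only 4 patches actually change
feats0 = encode(frame0)
feats1, recomputed = residual_encode(frame0, feats0, frame1)
# Only the 4 changed tokens are re-encoded; the other 12 are reused.
```

In a static scene most tokens fall below the threshold, so the per-frame cost scales with how much the video changes rather than with its raw frame rate, which is the intuition behind the reported compute savings.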