


MomentDiff: Generative Video Moment Retrieval from Random to Real

Neural Information Processing Systems

To achieve this goal, we provide a generative diffusion-based framework called MomentDiff, which simulates a typical human retrieval process from random browsing to gradual localization. Specifically, we first diffuse the real span to random noise, and learn to denoise the random noise to the original span with the guidance of similarity between text and video.
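The diffuse-then-denoise process described above can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: the noise schedule is an assumed cosine schedule, spans are simplified to a (center, width) pair, and `predict_x0` is a hypothetical placeholder for the learned denoiser that the paper conditions on text–video similarity.

```python
import math
import random


def forward_diffuse(span, t, num_steps=50, seed=None):
    """Diffuse a clean span (center, width) toward Gaussian noise at step t.

    Uses an assumed cosine schedule: alpha_bar goes from 1 (t=0, clean span)
    to 0 (t=num_steps, pure noise).
    """
    rng = random.Random(seed)
    a_bar = math.cos((t / num_steps) * math.pi / 2) ** 2
    return tuple(
        math.sqrt(a_bar) * s + math.sqrt(1 - a_bar) * rng.gauss(0, 1)
        for s in span
    )


def denoise(noisy_span, predict_x0, num_steps=50):
    """Iteratively refine a random span back toward a real span.

    `predict_x0` stands in for the trained, text/video-conditioned model;
    the update rule is a deterministic DDIM-style step.
    """
    x = list(noisy_span)
    for t in range(num_steps, 0, -1):
        a_t = math.cos((t / num_steps) * math.pi / 2) ** 2
        a_prev = math.cos(((t - 1) / num_steps) * math.pi / 2) ** 2
        x0_hat = predict_x0(x, t)  # model's guess at the clean span
        # Recover the implied noise, then step to the previous noise level.
        eps_hat = [
            (xi - math.sqrt(a_t) * x0i) / math.sqrt(1 - a_t)
            for xi, x0i in zip(x, x0_hat)
        ]
        x = [
            math.sqrt(a_prev) * x0i + math.sqrt(1 - a_prev) * ei
            for x0i, ei in zip(x0_hat, eps_hat)
        ]
    return x
```

With an oracle denoiser that always returns the ground-truth span, `denoise` recovers that span exactly from any random starting point, which mirrors the "random browsing to gradual localization" intuition.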


MomentDiff: Generative Video Moment Retrieval from Random to Real (Supplementary Material)

Neural Information Processing Systems

Each video is annotated with an average of 2.4 moments; in total, the dataset contains 10,310 queries with 18,367 annotated moments. We then design the Charades-STA-Mom dataset based on the span's end time. Algorithm 1 provides the pseudo-code of MomentDiff training in a PyTorch-like style. Inference efficiency is critical for machine learning models. We report R1@0.5, R1@0.7, and mAP. Figure 1 shows the performance fluctuation of the model on the Charades-STA dataset. We organize experiments over different feature settings (GloVe; SF+C; C). We therefore adopt DDIM as the default sampling technique.
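The reported R1@0.5 and R1@0.7 metrics measure the fraction of queries whose top-1 predicted moment overlaps the ground truth with temporal IoU at or above the threshold. A minimal sketch of that computation (the function names are illustrative, not from the paper's code):

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) moments on the same time axis."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0


def recall_at_1(top1_preds, gts, iou_thresh):
    """R1@m: fraction of queries whose top-1 prediction reaches IoU >= m."""
    hits = sum(
        temporal_iou(p, g) >= iou_thresh for p, g in zip(top1_preds, gts)
    )
    return hits / len(top1_preds)
```

For example, a prediction of (0, 10) against a ground truth of (5, 15) has IoU 5/15 ≈ 0.33, so it counts toward R1@0.3 but not R1@0.5.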

