Active Video Perception: Iterative Evidence Seeking for Agentic Long Video Understanding

Wang, Ziyang, Zhou, Honglu, Wang, Shijie, Li, Junnan, Xiong, Caiming, Savarese, Silvio, Bansal, Mohit, Ryoo, Michael S., Niebles, Juan Carlos

arXiv.org Artificial Intelligence

Long video understanding (LVU) is challenging because answering real-world queries often depends on sparse, temporally dispersed cues buried in hours of mostly redundant and irrelevant content. While agentic pipelines improve video reasoning capabilities, prevailing frameworks rely on a query-agnostic captioner to perceive video information, which wastes computation on irrelevant content and blurs fine-grained temporal and spatial information. Motivated by active perception theory, we argue that LVU agents should actively decide what, when, and where to observe, and continuously assess whether the current observation is sufficient to answer the query. We present Active Video Perception (AVP), an evidence-seeking framework that treats the video as an interactive environment and acquires compact, query-relevant evidence directly from pixels. Concretely, AVP runs an iterative plan-observe-reflect process with MLLM agents. In each round, a planner proposes targeted video interactions, an observer executes them to extract time-stamped evidence, and a reflector evaluates the sufficiency of the evidence for the query, either halting with an answer or triggering further observation. Across five LVU benchmarks, AVP achieves the highest performance with significant improvements. Notably, AVP outperforms the best agentic method by 5.7% in average accuracy while requiring only 18.4% of the inference time and 12.4% of the input tokens.
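The plan-observe-reflect loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not AVP's implementation: the `plan`, `observe`, and `reflect` functions below are hypothetical stand-ins for the MLLM-backed planner, observer, and reflector agents, and the simple stopping rule is an assumption made so the sketch runs end to end.

```python
def plan(query, evidence):
    # Hypothetical planner: propose one targeted interaction,
    # here a 10-second clip starting after the last observed window.
    start = len(evidence) * 10
    return [("clip", start, start + 10)]

def observe(video, actions):
    # Hypothetical observer: execute each interaction and return
    # time-stamped textual evidence (placeholder strings here).
    return [(a[1], f"frames near t={a[1]}s") for a in actions]

def reflect(query, evidence):
    # Hypothetical reflector: in AVP this is an MLLM judging sufficiency;
    # here we simply halt once three pieces of evidence have accumulated.
    done = len(evidence) >= 3
    answer = f"answer after {len(evidence)} observations" if done else None
    return done, answer

def active_video_perception(video, query, max_rounds=5):
    # Iterative evidence seeking: plan -> observe -> reflect until
    # the reflector deems the evidence sufficient or the budget runs out.
    evidence = []
    for _ in range(max_rounds):
        actions = plan(query, evidence)
        evidence.extend(observe(video, actions))
        done, answer = reflect(query, evidence)
        if done:
            return answer
    return reflect(query, evidence)[1]  # best-effort answer after budget
```

Because the loop stops as soon as the reflector is satisfied, compute scales with the difficulty of the query rather than the length of the video, which is the intuition behind the reported inference-time savings.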



0c72cb7ee1512f800abe27823a792d03-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all the reviewers for their comments about the novelty and significance of the work. Below we address the reviewers' two common comments. Reviewer 1: We really appreciate your thoughtful and detailed comments; please find our responses below. These results will be appended to Table 2 of the paper.


Collaborative Policy Learning for Open Knowledge Graph Reasoning

Fu, Cong, Chen, Tong, Qu, Meng, Jin, Woojeong, Ren, Xiang

arXiv.org Artificial Intelligence

In recent years, there has been a surge of interest in interpretable graph reasoning methods. However, these models often suffer from limited performance when working on sparse and incomplete graphs, due to the lack of evidential paths that can reach target entities. Here we study open knowledge graph reasoning---a task that aims to reason about missing facts over a graph augmented by a background text corpus. A key challenge of the task is to filter out "irrelevant" facts extracted from the corpus, in order to maintain an effective search space during path inference. We propose a novel reinforcement learning framework to train two collaborative agents jointly, i.e., a multi-hop graph reasoner and a fact extractor. The fact extraction agent generates fact triples from the corpus to enrich the graph on the fly, while the reasoning agent provides feedback to the fact extractor and guides it toward promoting facts that are helpful for interpretable reasoning. Experiments on two public datasets demonstrate the effectiveness of the proposed approach. Source code and datasets used in this paper can be downloaded at https://github.com/shanzhenren/CPL
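The collaboration between the two agents can be illustrated with a toy loop. This is a schematic sketch, not CPL's RL training: the `extract_facts` and `reason` functions below are hypothetical stand-ins (the real agents are learned policies trained with reinforcement learning), and the fixed usefulness scores are an assumption made to keep the example deterministic.

```python
def reason(graph, query):
    # Hypothetical reasoner feedback: score each triple's usefulness for
    # multi-hop path inference. Here, "noise" relations get zero reward;
    # in CPL this signal comes from the learned graph-reasoning agent.
    return {t: (0.0 if t[1] == "noise" else 1.0) for t in graph}

def extract_facts(corpus, feedback):
    # Hypothetical extractor: promote only triples the reasoner found
    # useful, keeping the search space compact during path inference.
    return [t for t in corpus if feedback.get(t, 0.0) >= 0.5]

# Toy corpus of extracted fact triples, including one irrelevant fact.
corpus = [("A", "knows", "B"), ("B", "born_in", "C"), ("X", "noise", "Y")]

# Initially trust everything, then alternate extraction and reasoning.
feedback = {t: 1.0 for t in corpus}
graph = list(corpus)
for _ in range(2):
    graph = extract_facts(corpus, feedback)
    feedback = reason(graph, query=("A", "?", "C"))
```

After the reasoner's feedback is applied, the irrelevant triple is filtered out of the graph, mirroring how the extractor is guided toward facts that support interpretable reasoning paths.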