Liu, Junzhang
ENTER: Event Based Interpretable Reasoning for VideoQA
Ayyubi, Hammad, Liu, Junzhang, Asgarov, Ali, Hakim, Zaber Ibn Abdul, Sarker, Najibul Haque, Wang, Zhecan, Tang, Chia-Wei, Alomari, Hani, Atabuzzaman, Md., Lin, Xudong, Dyava, Naveen Reddy, Chang, Shih-Fu, Thomas, Chris
In this paper, we present ENTER, an interpretable Video Question Answering (VideoQA) system based on event graphs. Event graphs convert videos into graphical representations in which video events form the nodes and event-event relationships (temporal/causal/hierarchical) form the edges. This structured representation offers several benefits: 1) Interpretable VideoQA via generated code that parses the event graph; 2) Incorporation of contextual visual information into the reasoning process (code generation) via event graphs; 3) Robust VideoQA via Hierarchical Iterative Update of the event graphs. Existing interpretable VideoQA systems are often top-down: they disregard low-level visual information when generating the reasoning plan and are brittle as a result. Bottom-up approaches, by contrast, produce responses directly from visual data but lack interpretability. Experimental results on NExT-QA, IntentQA, and EgoSchema demonstrate that our method not only outperforms existing top-down approaches and remains competitive with bottom-up approaches, but, more importantly, offers superior interpretability and explainability in the reasoning process.
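To make the event-graph idea concrete, the sketch below shows one way such a representation and an executable reasoning step could look. It is an illustrative assumption only: the class, field, and function names (Event, EventGraph, answer_why, the relation strings) are hypothetical and are not the schema or generated code used in the ENTER paper.

from dataclasses import dataclass, field

@dataclass
class Event:
    event_id: str
    description: str   # e.g. "the child drops the toy"
    start: float       # start time in seconds
    end: float         # end time in seconds

@dataclass
class EventGraph:
    events: dict = field(default_factory=dict)   # event_id -> Event
    edges: list = field(default_factory=list)    # (src_id, relation, dst_id)

    def add_event(self, event):
        self.events[event.event_id] = event

    def add_relation(self, src, relation, dst):
        # relation is one of "temporal", "causal", "hierarchical"
        self.edges.append((src, relation, dst))

    def neighbors(self, event_id, relation):
        # Events reachable from event_id via the given relation type.
        return [self.events[dst] for s, r, dst in self.edges
                if s == event_id and r == relation]

# A toy stand-in for the kind of executable code such a system might
# generate for "Why does the child cry?": follow incoming causal edges.
def answer_why(graph, effect_id):
    for src, rel, dst in graph.edges:
        if rel == "causal" and dst == effect_id:
            return graph.events[src].description
    return "unknown"

g = EventGraph()
g.add_event(Event("e1", "the child drops the toy", 1.0, 2.0))
g.add_event(Event("e2", "the child cries", 2.5, 5.0))
g.add_relation("e1", "causal", "e2")
print(answer_why(g, "e2"))   # -> "the child drops the toy"

Because the answer is produced by code that walks explicit graph edges, the reasoning trace (which events and relations were consulted) is inspectable, which is the sense of interpretability the abstract refers to.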
PuzzleGPT: Emulating Human Puzzle-Solving Ability for Time and Location Prediction
Ayyubi, Hammad, Feng, Xuande, Liu, Junzhang, Lin, Xudong, Wang, Zhecan, Chang, Shih-Fu
The task of predicting time and location from images is challenging and requires complex, human-like puzzle-solving ability over different clues. In this work, we formalize this ability into core skills and implement them as separate modules in an expert pipeline called PuzzleGPT. PuzzleGPT consists of a perceiver to identify visual clues, a reasoner to deduce prediction candidates, a combiner to combinatorially combine information from different clues, a web retriever to fetch external knowledge when the task cannot be solved locally, and a noise filter for robustness. The result is a zero-shot, interpretable, and robust approach that achieves state-of-the-art performance on two datasets -- TARA and WikiTilo. PuzzleGPT outperforms large VLMs such as BLIP-2, InstructBLIP, LLaVA, and even GPT-4V, as well as automatically generated reasoning pipelines like VisProg, by at least 32% and 38%, respectively. It even rivals or surpasses finetuned models.
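As a rough illustration of the expert-pipeline structure described in the abstract, the sketch below stubs out the five modules (perceiver, reasoner, combiner, web retriever, noise filter) and wires them together. All function names, the clue format, the example clues, and the thresholds are hypothetical placeholders invented for this listing, not PuzzleGPT's actual interfaces or behavior.

from itertools import product

def perceive(image_path):
    # Identify visual clues (stub). Each clue carries candidate
    # hypotheses with confidence scores.
    return [
        {"clue": "signage language", "candidates": [("France", 0.7), ("Belgium", 0.3)]},
        {"clue": "car models", "candidates": [("1990s", 0.6), ("1980s", 0.4)]},
    ]

def reason(clue):
    # Deduce prediction candidates from a single clue (stub: pass-through).
    return clue["candidates"]

def combine(per_clue_candidates):
    # Combinatorially combine candidates across clues, scoring every joint
    # assignment with the product of the individual confidences.
    joint = []
    for combo in product(*per_clue_candidates):
        labels = tuple(label for label, _ in combo)
        score = 1.0
        for _, conf in combo:
            score *= conf
        joint.append((labels, score))
    return sorted(joint, key=lambda item: item[1], reverse=True)

def needs_web_retrieval(best_score, threshold=0.5):
    # Fall back to external (web) knowledge when local evidence is weak.
    return best_score < threshold

def filter_noise(joint, min_score=0.1):
    # Drop low-confidence joint hypotheses for robustness.
    return [(labels, score) for labels, score in joint if score >= min_score]

clues = perceive("street_photo.jpg")
ranked = filter_noise(combine([reason(c) for c in clues]))
print(ranked[0])                          # best (location, decade) hypothesis
print(needs_web_retrieval(ranked[0][1]))  # whether to consult the web

The design choice this sketch tries to convey is that each skill lives in its own module with an explicit interface, so the pipeline stays interpretable (every intermediate hypothesis is visible) and robust (weak or noisy hypotheses are filtered or handed off to retrieval).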