object token
Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens
Recent action recognition models have achieved impressive results by integrating objects, their locations, and their interactions. However, obtaining dense structured annotations for each frame is tedious and time-consuming, making these methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, either within or outside the domain of interest, how could we leverage it for a downstream video task? We propose a learning framework, StructureViT (SViT for short), which demonstrates how utilizing the structure of a small number of images available only during training can improve a video model. SViT relies on two key insights.
VideoOrion: Tokenizing Object Dynamics in Videos
Feng, Yicheng, Li, Yijiang, Zhang, Wanpeng, Zheng, Sipeng, Lu, Zongqing
We present VideoOrion, a Video Large Language Model (Video-LLM) that explicitly captures the key semantic information in videos--the spatial-temporal dynamics of objects throughout the videos. VideoOrion employs expert vision models to extract object dynamics through a detect-segment-track pipeline, encoding them into a set of object tokens by aggregating spatial-temporal object features. Our method addresses the persistent challenge in Video-LLMs of efficiently compressing high-dimensional video data into semantic tokens that are comprehensible to LLMs. VideoOrion not only offers a more natural and efficient way to derive compact, disentangled semantic representations but also enables explicit object modeling of video content with minimal computational cost. Moreover, the introduced object tokens naturally allow VideoOrion to accomplish video-based referring tasks. Experimental results show that VideoOrion can learn to make good use of the object tokens, and achieves competitive results on both general video question answering and video-based referring benchmarks.
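The abstract says object tokens are formed by "aggregating spatial-temporal object features" from a detect-segment-track pipeline. As a rough illustration only, the sketch below mask-averages dense frame features over each object's track; the function name, the boolean-mask representation, and the averaging scheme are all assumptions standing in for the paper's unspecified aggregation.

```python
import numpy as np

def aggregate_object_tokens(frame_features, track_masks):
    """Toy aggregation of tracked-object features into one token each.

    frame_features: (T, H, W, D) dense visual features per frame.
    track_masks:    (N, T, H, W) boolean masks, one track per object
                    (a stand-in for detect-segment-track output).
    Returns (N, D): per-object tokens, the mask-weighted average of the
    features each object occupies across the clip.
    """
    tokens = []
    for masks in track_masks:                       # (T, H, W) for one object
        w = masks.astype(float)[..., None]          # (T, H, W, 1) weights
        denom = max(w.sum(), 1.0)                   # guard against empty tracks
        tokens.append((frame_features * w).sum(axis=(0, 1, 2)) / denom)
    return np.stack(tokens)
```

Whatever the real aggregation is, the payoff the abstract claims is the same: a whole clip collapses to N object tokens instead of T x H x W feature vectors.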
Diagnosing Vision-and-Language Navigation: What Really Matters
Zhu, Wanrong, Qi, Yuankai, Narayana, Pradyumna, Sone, Kazoo, Basu, Sugato, Wang, Xin Eric, Wu, Qi, Eckstein, Miguel, Wang, William Yang
Vision-and-language navigation (VLN) is a multimodal task where an agent follows natural language instructions and navigates in visual environments. Multiple setups have been proposed, and researchers apply new model architectures or training techniques to boost navigation performance. However, recent studies have observed a slow-down in performance improvements on both indoor and outdoor VLN tasks, and the agents' inner mechanisms for making navigation decisions remain unclear. To the best of our knowledge, the way agents perceive the multimodal input is under-studied and clearly needs investigation. In this work, we conduct a series of diagnostic experiments to unveil agents' focus during navigation. Results show that indoor navigation agents refer to both object tokens and direction tokens in the instruction when making decisions. In contrast, outdoor navigation agents rely heavily on direction tokens and have a poor understanding of the object tokens. Furthermore, instead of merely staring at surrounding objects, indoor navigation agents can set their sights on objects further from the current viewpoint. When it comes to vision-and-language alignments, many models claim that they are able to align object tokens with certain visual targets, but we cast doubt on the reliability of such alignments.
- North America > United States (1.00)
- Europe (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Object-Oriented Architecture (0.34)