Supplementary material

Neural Information Processing Systems

This section contains supplementary material to support the main paper text. The contents include: videos illustrating our approach; additional walkthrough collection details to supplement Sec. 4 (simulators); alternate local state task formulations compared to ours in Sec.; and additional attention visualization results to supplement Figure 1.




EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild

Kim, Junhyeok, Kim, Min Soo, Chung, Jiwan, Cho, Jungbin, Kim, Jisoo, Kim, Sungwoong, Sim, Gyeongbo, Yu, Youngjae

arXiv.org Artificial Intelligence

Predicting when to initiate speech in real-world environments remains a fundamental challenge for conversational agents. We introduce EgoSpeak, a novel framework for real-time speech initiation prediction in egocentric streaming video. By modeling the conversation from the speaker's first-person viewpoint, EgoSpeak is tailored for human-like interactions in which a conversational agent must continuously observe its environment and dynamically decide when to talk. Our approach bridges the gap between simplified experimental setups and complex natural conversations by integrating four key capabilities: (1) first-person perspective, (2) RGB processing, (3) online processing, and (4) untrimmed video processing. We also present YT-Conversation, a diverse collection of in-the-wild conversational videos from YouTube, as a resource for large-scale pretraining. Experiments on EasyCom and Ego4D demonstrate that EgoSpeak outperforms random and silence-based baselines in real time. Our results also highlight the importance of multimodal input and context length in effectively deciding when to speak.
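The core decision loop described above — continuously observe a stream, keep a bounded context window, and decide at each frame whether to start speaking — can be sketched in miniature. This is an illustrative stand-in, not EgoSpeak's actual model: the class name, the linear/sigmoid scorer, and the feature format are all assumptions; the real system uses a learned video backbone over egocentric frames.

```python
from collections import deque
import math

class SpeechInitiationPredictor:
    """Toy sketch of online speech-initiation prediction on a frame stream.

    The scorer below (mean feature activation through a sigmoid) is a
    hypothetical placeholder for a trained multimodal model.
    """

    def __init__(self, context_len=8, threshold=0.5):
        # Sliding window over the most recent frame features (online setting:
        # old frames fall off automatically once the window is full).
        self.context = deque(maxlen=context_len)
        self.threshold = threshold

    def score(self):
        # Stand-in scorer: average activation over the context, squashed to (0, 1).
        mean = sum(sum(f) / len(f) for f in self.context) / len(self.context)
        return 1.0 / (1.0 + math.exp(-mean))

    def step(self, frame_features):
        """Consume one frame; return (speak_probability, speak_now)."""
        self.context.append(frame_features)
        p = self.score()
        return p, p >= self.threshold
```

The window length corresponds to the context-length factor the abstract highlights: a longer `context_len` lets the decision depend on more of the recent conversation, at the cost of reacting more slowly to sudden changes.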


An Unbiased Look at Datasets for Visuo-Motor Pre-Training

Dasari, Sudeep, Srirama, Mohan Kumar, Jain, Unnat, Gupta, Abhinav

arXiv.org Artificial Intelligence

Visual representation learning holds great promise for robotics, but is severely hampered by the scarcity and homogeneity of robotics datasets. Recent works address this problem by pre-training visual representations on large-scale but out-of-domain data (e.g., videos of egocentric interactions) and then transferring them to target robotics tasks. While the field is heavily focused on developing better pre-training algorithms, we find that dataset choice is just as important to this paradigm's success. After all, the representation can only learn the structures or priors present in the pre-training dataset. To this end, we flip the focus away from algorithms and instead conduct a dataset-centric analysis of robotic pre-training. Our findings call into question some common wisdom in the field. We observe that traditional vision datasets (like ImageNet, Kinetics and 100 Days of Hands) are surprisingly competitive options for visuo-motor representation learning, and that the pre-training dataset's image distribution matters more than its size. Finally, we show that common simulation benchmarks are not a reliable proxy for real world performance and that simple regularization strategies can dramatically improve real world policy learning. https://data4robotics.github.io


Summarize the Past to Predict the Future: Natural Language Descriptions of Context Boost Multimodal Object Interaction

Pasca, Razvan-George, Gavryushin, Alexey, Kuo, Yen-Ling, Van Gool, Luc, Hilliges, Otmar, Wang, Xi

arXiv.org Artificial Intelligence

We study object interaction anticipation in egocentric videos. This task requires an understanding of the spatiotemporal context formed by past actions on objects, coined action context. We propose TransFusion, a multimodal transformer-based architecture. It exploits the representational power of language by summarising the action context. TransFusion leverages pre-trained image captioning and vision-language models to extract the action context from past video frames. This action context together with the next video frame is processed by the multimodal fusion module to forecast the next object interaction. Our model enables more efficient end-to-end learning. The large pre-trained language models add common sense and a generalisation capability. Experiments on Ego4D and EPIC-KITCHENS-100 show the effectiveness of our multimodal fusion model. They also highlight the benefits of using language-based context summaries in a task where vision seems to suffice. Our method outperforms state-of-the-art approaches by 40.4% in relative terms in overall mAP on the Ego4D test set. We validate the effectiveness of TransFusion via experiments on EPIC-KITCHENS-100. Video and code are available at https://eth-ait.github.io/transfusion-proj/.
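The fusion idea described above — summarize past frames as language, then combine that summary with visual features of the current frame — can be sketched with trivial stand-ins. Everything here is an assumption for illustration: the bag-of-words embedding stands in for the pretrained captioner/language model, and plain concatenation stands in for TransFusion's multimodal fusion transformer.

```python
def embed_caption(caption, vocab):
    """Bag-of-words text embedding: a hypothetical stand-in for the
    pretrained language model that encodes the action-context summary."""
    words = caption.lower().split()
    return [float(words.count(w)) for w in vocab]

def fuse(caption, frame_features, vocab):
    """Concatenate text and visual features: a crude stand-in for a
    learned multimodal fusion module over both modalities."""
    return embed_caption(caption, vocab) + list(frame_features)
```

A downstream anticipation head would then consume the fused vector; the point of the sketch is only the data flow, past-context-as-text joined with the next frame's visual features.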


Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?

Majumdar, Arjun, Yadav, Karmesh, Arnaud, Sergio, Ma, Yecheng Jason, Chen, Claire, Silwal, Sneha, Jain, Aryan, Berges, Vincent-Pierre, Abbeel, Pieter, Malik, Jitendra, Batra, Dhruv, Lin, Yixin, Maksymets, Oleksandr, Rajeswaran, Aravind, Meier, Franziska

arXiv.org Artificial Intelligence

We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs) or visual 'foundation models' for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous manipulation, and mobile manipulation. Next, we systematically evaluate existing PVRs and find that none are universally dominant. To study the effect of pre-training data scale and diversity, we combine over 4,000 hours of egocentric videos from 7 different sources (over 5.6M images) and ImageNet to train different-sized vision transformers using Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from prior work, we find that scaling dataset size and diversity does not improve performance universally (but does so on average). Our largest model, named VC-1, outperforms all prior PVRs on average but does not universally dominate either. Finally, we show that task- or domain-specific adaptation of VC-1 leads to substantial gains, with VC-1 (adapted) achieving performance competitive with or superior to the best known results on all of the benchmarks in CortexBench. These models required over 10,000 GPU-hours to train and can be found on our website for the benefit of the research community.
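The Masked Auto-Encoding objective used to train VC-1 can be illustrated in miniature: hide a large fraction of the input patches, reconstruct only the hidden ones, and score the reconstruction. This sketch replaces the learned ViT encoder/decoder with a trivial mean-of-visible predictor; the masking procedure and the loss-on-masked-patches structure are the only parts faithful to MAE.

```python
import random

def mae_step(patches, mask_ratio=0.75, seed=0):
    """One masked-autoencoding step in miniature.

    `patches` is a list of scalar patch values (a stand-in for patch
    embeddings). A 75% mask ratio mirrors the commonly used MAE setting;
    the mean predictor is an illustrative placeholder for the decoder.
    Returns (reconstruction loss on masked patches, masked indices).
    """
    rng = random.Random(seed)
    n_mask = int(len(patches) * mask_ratio)
    masked = set(rng.sample(range(len(patches)), n_mask))
    # Only the visible patches are available to the "encoder".
    visible = [p for i, p in enumerate(patches) if i not in masked]
    pred = sum(visible) / len(visible)
    # Loss is measured only on the hidden patches, as in MAE.
    loss = sum((patches[i] - pred) ** 2 for i in masked) / n_mask
    return loss, masked
```

The high mask ratio is what makes the pre-training signal cheap and strong: the model must infer most of the image from a small visible subset, which is why dataset content (what structure there is to infer) can matter more than raw dataset size.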