trajectory attention
Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers
In video transformers, the time dimension is often treated in the same way as the two spatial dimensions. However, in a scene where objects or the camera may move, a physical point imaged at one location in frame t may be entirely unrelated to what is found at that location in frame t + k. These temporal correspondences should be modeled to facilitate learning about dynamic scenes. To this end, we propose a new drop-in block for video transformers - trajectory attention - that aggregates information along implicitly determined motion paths. We additionally propose a new method to address the quadratic dependence of computation and memory on the input size, which is particularly important for high-resolution or long videos. While these ideas are useful in a range of settings, we apply them to the specific task of video action recognition with a transformer model and obtain state-of-the-art results on the Kinetics, Something-Something V2, and Epic-Kitchens datasets.
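The abstract's two-step aggregation (pool spatially within each frame along an implicit motion path, then attend over time along that path) can be sketched concretely. Below is a minimal single-head PyTorch illustration, assuming tokens are laid out frame-major as T*S space-time patches; the function name `trajectory_attention` and the reuse of `q` in the temporal step are simplifications for brevity (the paper uses separate trajectory projections and an approximation to tame the quadratic cost), not the authors' implementation.

```python
import torch

def trajectory_attention(q, k, v, T, S):
    """Simplified single-head sketch of trajectory attention.

    q, k, v: (B, T*S, D) space-time tokens for one head.
    Step 1: each query attends spatially within every frame, pooling
    the values into one trajectory token per frame.
    Step 2: the query attends over time along its pooled trajectory.
    """
    B, N, D = q.shape
    scale = D ** -0.5

    # Spatial step: full attention logits, but softmax taken per frame
    # (over the S spatial positions), so every frame contributes one token.
    attn = (q @ k.transpose(-2, -1)) * scale          # (B, T*S, T*S)
    attn = attn.view(B, N, T, S).softmax(dim=-1)      # per-frame softmax
    traj = torch.einsum('bnts,btsd->bntd', attn, v.view(B, T, S, D))
    # traj: (B, T*S, T, D) -- one pooled token per frame per query.

    # Temporal step: 1D attention along the trajectory's time axis.
    t_attn = torch.einsum('bnd,bntd->bnt', q, traj) * scale
    t_attn = t_attn.softmax(dim=-1)
    return torch.einsum('bnt,bntd->bnd', t_attn, traj)  # (B, T*S, D)

# Example: 4 frames of 16 patches, 64-dim head.
B, T, S, D = 2, 4, 16, 64
q = k = v = torch.randn(B, T * S, D)
out = trajectory_attention(q, k, v, T, S)  # (2, 64, 64)
```

Note the spatial step still materializes a (T*S, T*S) attention matrix, which is exactly the quadratic cost the abstract's second contribution addresses.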
Actra: Optimized Transformer Architecture for Vision-Language-Action Models in Robot Learning
Yueen Ma, Dafeng Chi, Shiguang Wu, Yuecheng Liu, Yuzheng Zhuang, Jianye Hao, Irwin King
Vision-language-action models have gained significant attention for their ability to model trajectories in robot learning. However, most existing models rely on Transformer models with vanilla causal attention, which we find suboptimal for processing segmented multi-modal sequences. Additionally, the autoregressive generation approach falls short in generating multi-dimensional actions. In this paper, we introduce Actra, an optimized Transformer architecture featuring trajectory attention and learnable action queries, designed for effective encoding and decoding of segmented vision-language-action trajectories in robot imitation learning. Furthermore, we devise a multi-modal contrastive learning objective to explicitly align different modalities, complementing the primary behavior cloning objective. Through extensive experiments conducted across various environments, Actra exhibits substantial performance improvement when compared to state-of-the-art models in terms of generalizability, dexterity, and precision.
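The abstract does not spell out how its attention differs from vanilla causal masking, but one plausible reading of "segmented multi-modal sequences" is a block-wise mask: tokens attend bidirectionally within their own segment (e.g. one observation or action step) and causally to earlier segments. The helper below, `segment_causal_mask`, is a hypothetical sketch of that idea under this assumption, not Actra's actual mechanism.

```python
import torch

def segment_causal_mask(segment_ids: torch.Tensor) -> torch.Tensor:
    """Hypothetical segment-level causal mask (assumed reading of the paper).

    segment_ids: (N,) integer segment id per token, non-decreasing along
    the sequence. Returns an (N, N) bool mask where True = attend allowed.
    Tokens see their whole segment bidirectionally plus all earlier
    segments, instead of token-level causal masking.
    """
    same = segment_ids[:, None] == segment_ids[None, :]    # within segment
    earlier = segment_ids[:, None] > segment_ids[None, :]  # past segments
    return same | earlier

# Example: segments of lengths 2, 3, 2 (e.g. language, vision, action).
ids = torch.tensor([0, 0, 1, 1, 1, 2, 2])
mask = segment_causal_mask(ids)
# Apply before softmax: scores.masked_fill(~mask, float('-inf'))
```

Learnable action queries would then decode all action dimensions in parallel against this masked trajectory encoding, sidestepping the autoregressive generation the abstract criticizes.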