MambaTAD: When State-Space Models Meet Long-Range Temporal Action Detection

Lu, Hui, Yu, Yi, Lu, Shijian, Rajan, Deepu, Ng, Boon Poh, Kot, Alex C., Jiang, Xudong

arXiv.org Artificial Intelligence 

Abstract--Temporal Action Detection (TAD) aims to identify and localize actions by determining their starting and ending frames within untrimmed videos. Recent structured state-space models such as Mamba have demonstrated potential in TAD due to their long-range modeling capability and linear computational complexity. However, structured state-space models often face two key challenges in TAD, namely, decay of temporal context due to recursive processing and self-element conflict during global visual context modeling, both of which become more severe when handling long-span action instances. This paper presents MambaTAD, a new state-space TAD model that introduces long-range modeling and global feature detection capabilities for accurate temporal action detection. MambaTAD comprises two novel designs that complement each other for superior TAD performance. First, it introduces a Diagonal-Masked Bidirectional State-Space (DMBSS) module which effectively facilitates global feature fusion and temporal action detection. Second, it introduces a global feature fusion head that refines the detection progressively with multi-granularity features and global awareness. In addition, MambaTAD tackles TAD in an end-to-end one-stage manner using a new state-space temporal adapter (SSTA) which reduces network parameters and computation cost with linear complexity. Extensive experiments show that MambaTAD achieves superior TAD performance consistently across multiple public benchmarks.

Temporal action detection (TAD) aims to detect specific action categories and extract corresponding temporal spans in untrimmed videos. It is a long-standing and challenging problem in video understanding with extensive real-world applications such as sports analysis, surveillance, and security. The development of deep neural networks such as CNNs [1], [2] and Transformers [3], [4] has led to continuous advancements in TAD performance over the past few years.
However, CNNs have limited capabilities in capturing long-range dependencies, while Transformers face challenges with computational complexity and feature discrimination [1].

Hui Lu and Yi Yu are with the Rapid-Rich Object Search Lab, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore (e-mail: {hui007, yuyi0010}@e.ntu.edu.sg).
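The self-element conflict mentioned in the abstract arises when each frame's global context aggregation includes its own feature, which can dominate the fused representation. The paper's remedy, diagonal masking, can be illustrated with a minimal sketch. This is not the paper's DMBSS implementation; the dot-product affinity scores, softmax normalization, and function name below are assumptions made purely for illustration of the masking idea.

```python
import numpy as np

def diagonal_masked_context(scores: np.ndarray) -> np.ndarray:
    """Given a (T, T) matrix of pairwise frame affinities, zero out the
    diagonal so each frame aggregates context only from *other* frames,
    avoiding the self-element conflict. (Illustrative sketch, not the
    paper's DMBSS module.)"""
    T = scores.shape[0]
    masked = scores.astype(np.float64).copy()
    # Exclude each frame's own score before normalizing.
    masked[np.arange(T), np.arange(T)] = -np.inf
    # Row-wise softmax over the remaining T-1 frames.
    masked -= masked.max(axis=1, keepdims=True)
    weights = np.exp(masked)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights

# Toy usage: fuse each frame's feature from the other frames' features.
rng = np.random.default_rng(0)
T, D = 6, 4
feats = rng.standard_normal((T, D))
scores = feats @ feats.T            # hypothetical dot-product affinities
w = diagonal_masked_context(scores)  # (T, T), zero diagonal, rows sum to 1
fused = w @ feats                    # each row fused from the other T-1 frames
```

In a bidirectional state-space formulation, the analogous effect is obtained by excluding each position's own contribution when the forward and backward scans are combined into a global context; the attention-style matrix above is only a stand-in for that computation.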
