Supplementary Material: Learning Representations from Audio-Visual Spatial Alignment

Neural Information Processing Systems

These are transformer networks of base dimension 512 and expansion ratio 4. In other words, the output dimensionality of the linear transformations with parameters W_key, W_qry, W_val, W_0 and W_2 is 512, and that of W_1 is 2048. Models are pre-trained to optimize loss (7) for the AVC task or loss (9) for the AVTS and AVSA tasks. As originally proposed, lateral connections are implemented with a 1x1 convolution that maps all feature maps into a 128-dimensional space, followed by a 3x3 convolution for additional smoothing. Thus, all pixels for which the state-of-the-art model was less than 75% confident were left unlabeled. These low-confidence regions were also ignored while computing evaluation metrics.
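The stated dimensions can be made concrete with a minimal numpy sketch of one such block: attention projections W_qry, W_key, W_val, W_0 keep the base dimension 512, while the feed-forward pair W_1, W_2 expands to 2048 and back (ratio 4). This is an illustrative single-head sketch, not the authors' implementation; layer norm and multi-head splitting are omitted.

```python
import numpy as np

# Sketch of the block dimensions described above: base dim 512, expansion 4.
d_model, d_ff, seq_len = 512, 4 * 512, 16
rng = np.random.default_rng(0)

# Attention projections: each maps d_model -> d_model (output dim 512).
W_qry, W_key, W_val, W_0 = (rng.standard_normal((d_model, d_model)) * 0.02
                            for _ in range(4))
# Feed-forward expansion: W_1 maps 512 -> 2048, W_2 maps 2048 -> 512.
W_1 = rng.standard_normal((d_model, d_ff)) * 0.02
W_2 = rng.standard_normal((d_ff, d_model)) * 0.02

def block(x):
    # Single-head self-attention followed by the expansion MLP (ReLU).
    q, k, v = x @ W_qry, x @ W_key, x @ W_val
    scores = q @ k.T / np.sqrt(d_model)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    x = x + (attn @ v) @ W_0                    # residual connection
    return x + np.maximum(x @ W_1, 0.0) @ W_2   # residual connection

y = block(rng.standard_normal((seq_len, d_model)))
assert y.shape == (seq_len, d_model)            # dimension is preserved
```

Note that the block's output dimensionality equals its input dimensionality, which is what allows such blocks to be stacked.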




FactorizePhys: Matrix Factorization for Multidimensional Attention in Remote Physiological Sensing

Neural Information Processing Systems

Remote photoplethysmography (rPPG) enables non-invasive extraction of blood volume pulse signals through imaging, transforming spatial-temporal data into time series signals. Advances in end-to-end rPPG approaches have focused on this transformation where attention mechanisms are crucial for feature extraction. However, existing methods compute attention disjointly across spatial, temporal, and channel dimensions. Here, we propose the Factorized Self-Attention Module (FSAM), which jointly computes multidimensional attention from voxel embeddings using nonnegative matrix factorization. To demonstrate FSAM's effectiveness, we developed FactorizePhys, an end-to-end 3D-CNN architecture for estimating blood volume pulse signals from raw video frames.
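The core idea of jointly attending over all dimensions via nonnegative matrix factorization can be sketched as follows. This is a toy illustration with classical Lee-Seung multiplicative updates, not the paper's FSAM implementation; the embedding shape and the use of a rank-1 reconstruction as a joint weight map are assumptions made for the sketch.

```python
import numpy as np

def nmf(V, rank=1, iters=100, eps=1e-9):
    # Classical multiplicative-update NMF: V (nonnegative) ~ W @ H.
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy voxel embedding (time, height, width, channels), made nonnegative.
T, Hh, Ww, C = 8, 4, 4, 16
emb = np.abs(np.random.default_rng(1).standard_normal((T, Hh, Ww, C)))

# Flatten so one factorization spans time, space, and channels jointly.
V = emb.reshape(T, -1)
Wf, Hf = nmf(V, rank=1)
attn = Wf @ Hf
attn /= attn.max()                              # scale weights to [0, 1]
refined = (V * attn).reshape(T, Hh, Ww, C)      # jointly re-weighted embedding
```

Because the low-rank reconstruction is computed over the flattened matrix, a single factorization yields weights that couple the temporal, spatial, and channel axes instead of treating them disjointly.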


GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation

Neural Information Processing Systems

We introduce GrounDiT, a novel training-free spatial grounding technique for text-to-image generation using Diffusion Transformers (DiT). Spatial grounding with bounding boxes has gained attention for its simplicity and versatility, allowing for enhanced user control in image generation. However, prior training-free approaches often rely on updating the noisy image during the reverse diffusion process via backpropagation from custom loss functions, which frequently struggle to provide precise control over individual bounding boxes. In this work, we leverage the flexibility of the Transformer architecture, demonstrating that DiT can generate noisy patches corresponding to each bounding box, fully encoding the target object and allowing for fine-grained control over each region. Our approach builds on an intriguing property of DiT, which we refer to as semantic sharing. Due to semantic sharing, when a smaller patch is jointly denoised alongside a generatable-size image, the two become semantic clones. Each patch is denoised in its own branch of the generation process and then transplanted into the corresponding region of the original noisy image at each timestep, resulting in robust spatial grounding for each bounding box. In our experiments on the HRS and DrawBench benchmarks, we achieve state-of-the-art performance compared to previous training-free approaches.
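The per-timestep transplantation loop described above can be sketched in miniature. The one-step `denoise` stand-in and the box layout are illustrative assumptions, not the paper's DiT or its API; the point is only the control flow: each patch is denoised in its own branch, then copied into its bounding-box region of the noisy image at every timestep.

```python
import numpy as np

def denoise(x, t):
    # Placeholder one-step denoiser (a real DiT would go here):
    # shrinks values toward zero as the timestep t decreases.
    return x * (t / (t + 1))

def transplant_step(image, patch, box, t):
    # Denoise the full image and the patch in separate branches...
    image, patch = denoise(image, t), denoise(patch, t)
    y0, x0, h, w = box
    # ...then transplant the patch into its bounding-box region.
    image[y0:y0 + h, x0:x0 + w] = patch
    return image, patch

img = np.random.default_rng(2).standard_normal((32, 32))
patch = np.random.default_rng(3).standard_normal((8, 8))
box = (4, 4, 8, 8)                       # (top, left, height, width)
for t in range(10, 0, -1):               # reverse diffusion timesteps
    img, patch = transplant_step(img, patch, box, t)
```

After each step the grounded region of the image is exactly the separately denoised patch, which is what gives per-box control without any backpropagated loss.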