score map
- Europe > Switzerland (0.04)
- Asia > Middle East > Jordan (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
- North America > United States > Illinois (0.05)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > United States (0.28)
- Europe > Switzerland > Vaud > Lausanne (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (2 more...)
Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild
Tan, Derek Ming Siang, Shailesh, Liu, Boyang, Raj, Alok, Ang, Qi Xuan, Dai, Weiheng, Duhan, Tanishq, Chiun, Jimmy, Cao, Yuhong, Shkurti, Florian, Sartoretti, Guillaume
To perform outdoor visual navigation and search, a robot may leverage satellite imagery to generate visual priors. These priors can inform high-level search strategies even when the imagery lacks sufficient resolution for target recognition. However, many existing informative path planning or search-based approaches either assume no prior information or use priors without accounting for how they were obtained. Recent work instead uses large Vision Language Models (VLMs) for generalizable priors, but their outputs can be inaccurate due to hallucination, leading to inefficient search. To address these challenges, we introduce Search-TTA, a multimodal test-time adaptation framework with a flexible plug-and-play interface compatible with various input modalities (e.g., image, text, sound) and planning methods (e.g., RL-based). First, we pretrain a satellite image encoder, aligned with CLIP's visual encoder, to output probability distributions of target presence used for visual search. Second, our TTA framework dynamically refines CLIP's predictions during search using uncertainty-weighted gradient updates inspired by Spatial Poisson Point Processes. To train and evaluate Search-TTA, we curate AVS-Bench, a visual search dataset based on internet-scale ecological data containing 380k images and taxonomy data. We find that Search-TTA improves planner performance by up to 30.0%, particularly in cases with poor initial CLIP predictions due to domain mismatch and limited training data. It also performs comparably with significantly larger VLMs and achieves zero-shot generalization to unseen modalities via emergent alignment. Finally, we deploy Search-TTA on a real UAV via hardware-in-the-loop testing, in which a large-scale simulation provides onboard sensing while the UAV operates.
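
A minimal sketch of the test-time adaptation step described in the abstract, assuming a simple grid world: the encoder outputs a per-cell target-presence rate map, and each update minimizes an uncertainty-weighted spatial Poisson point process negative log-likelihood over the cells the robot has already observed. The names (SatEncoder, tta_step), the toy encoder, and the exact uncertainty weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SatEncoder(nn.Module):
    """Stand-in satellite-image encoder producing a per-cell presence rate map."""
    def __init__(self, grid_cells=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, grid_cells))

    def forward(self, image):
        # softplus keeps the Poisson rates strictly positive
        return torch.nn.functional.softplus(self.backbone(image)).squeeze(0)

def tta_step(encoder, sat_image, observed_cells, counts, uncertainty, lr=1e-4):
    """One uncertainty-weighted gradient update on the observed cells, using the
    per-cell Poisson point process NLL: lambda - k * log(lambda)."""
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    rates = encoder(sat_image).clamp_min(1e-6)                 # per-cell presence rates
    obs_rates = rates[observed_cells]
    per_cell_nll = obs_rates - counts * obs_rates.log()
    loss = (uncertainty[observed_cells] * per_cell_nll).mean() # weight by uncertainty
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: refine the prior after visiting 5 cells, one of which held a target.
encoder = SatEncoder()
sat_image = torch.randn(1, 3, 32, 32)
observed = torch.tensor([0, 5, 9, 17, 40])
counts = torch.tensor([0., 0., 1., 0., 0.])
uncertainty = torch.rand(64)                                   # higher = less confident cell
tta_step(encoder, sat_image, observed, counts, uncertainty)
```

In this sketch the gradient step nudges the rate map toward agreement with what the robot has actually seen along its path, while down-weighting cells where the prior is already confident.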
RANet: Region Attention Network for Semantic Segmentation - Supplementary Material
Shen, Dingguo
The first two authors contributed equally. Di Lin is the corresponding author of this paper. However, using the intermediate pixels requires extra computation. In Figure 3, we provide segmentation results with and without using the intermediate pixels. In Table 2, we compare different strategies for using the representative scores in the region interaction. We also study the strategy of using only the representative scores in the region interaction.
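
An illustrative sketch of region interaction gated by per-region representative scores, written only to make the comparison above concrete. The pairwise-affinity formulation, the residual update, and all names here are assumptions for exposition, not RANet's actual formulation.

```python
import torch

def region_interaction(region_feats, rep_scores):
    """region_feats: (R, C) one feature vector per region.
    rep_scores: (R,) representative score per region.
    Each region aggregates the others' features, with contributions scaled by
    pairwise affinity and by the contributing region's representative score."""
    affinity = torch.softmax(region_feats @ region_feats.t(), dim=-1)   # (R, R)
    weights = affinity * rep_scores.unsqueeze(0)                        # gate by rep. score
    weights = weights / weights.sum(dim=-1, keepdim=True).clamp_min(1e-6)
    return region_feats + weights @ region_feats                        # residual update

# Toy usage with 6 regions of 16-dim features.
feats = torch.randn(6, 16)
scores = torch.rand(6)
updated = region_interaction(feats, scores)
```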
- Asia > China > Hong Kong (0.06)
- North America > Canada (0.05)
- Asia > China > Tianjin Province > Tianjin (0.05)
- Asia > China > Guangdong Province > Shenzhen (0.05)
- Asia > China > Hong Kong (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Asia > Macao (0.04)
- (2 more...)