You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection
Can Transformer perform $2\mathrm{D}$ object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the $2\mathrm{D}$ spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, and inductive biases of the target task. We find that YOLOS pre-trained only on the mid-sized ImageNet-$1k$ dataset can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base, directly adapted from the BERT-Base architecture, can obtain $42.0$ box AP on COCO val. We also discuss the impacts as well as limitations of current pre-training schemes and model scaling strategies for Transformer in vision through YOLOS. Code and pre-trained models are available at https://github.com/hustvl/YOLOS.
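The sequence layout the abstract describes can be sketched in a few lines: image patch tokens and a fixed set of learnable detection tokens are concatenated into one flat sequence, and after the encoder only the detection-token outputs feed the box and class heads. The dimensions below are illustrative defaults, not the paper's exact configuration.

```python
import numpy as np

num_patches = 196      # e.g. a 224x224 image split into 16x16 patches
num_det_tokens = 100   # learnable [DET] tokens, one candidate object each
dim = 768              # BERT-Base / ViT-Base hidden size

patch_tokens = np.random.randn(num_patches, dim)
det_tokens = np.random.randn(num_det_tokens, dim)

# The Transformer sees one flat sequence; no region priors are injected.
sequence = np.concatenate([patch_tokens, det_tokens], axis=0)

# After the encoder (omitted here), only the [DET] token outputs
# feed the prediction heads.
det_out = sequence[-num_det_tokens:]
boxes = det_out @ np.random.randn(dim, 4)     # (100, 4) box regression
logits = det_out @ np.random.randn(dim, 92)   # (100, 92) class logits
```

The point of the sketch is that detection is recast purely as a sequence problem: the only detection-specific components are the extra tokens and the heads.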
Exploring Fixed Point in Image Editing: Theoretical Support and Convergence Optimization
In image editing, Denoising Diffusion Implicit Models (DDIM) inversion has become a widely adopted method and is extensively used in various image editing approaches. The core concept of DDIM inversion stems from the deterministic sampling technique of DDIM, which allows the DDIM process to be viewed as a reversible Ordinary Differential Equation (ODE) process. This enables the prediction of corresponding noise from a reference image, ensuring that the image restored from this noise remains consistent with the reference image. Image editing exploits this property by modifying the cross-attention between text and images to edit specific objects while preserving the remaining regions. However, in DDIM inversion, using the $t-1$ time step to approximate the noise prediction at time step $t$ introduces errors between the restored image and the reference image.
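The approximation error the abstract points to can be made concrete with a minimal sketch of one inversion step. The noise predictor below is a stand-in (a real model would be a trained U-Net), and the variable names are illustrative; the key line is that the noise used to step from $x_{t-1}$ to $x_t$ is the prediction made at step $t-1$, not $t$.

```python
import numpy as np

def eps_theta(x, t):
    # Hypothetical stand-in for a trained noise-prediction network.
    return 0.1 * x

def ddim_inversion_step(x_prev, t, alpha_bar):
    """One DDIM inversion step x_{t-1} -> x_t (illustrative sketch)."""
    a_prev, a_t = alpha_bar[t - 1], alpha_bar[t]
    # Source of the inversion error: the noise at step t is approximated
    # by the prediction made at step t-1 (a linearization of the ODE).
    eps = eps_theta(x_prev, t - 1)
    x0_pred = (x_prev - np.sqrt(1 - a_prev) * eps) / np.sqrt(a_prev)
    return np.sqrt(a_t) * x0_pred + np.sqrt(1 - a_t) * eps
```

Fixed-point iteration at each step (re-evaluating `eps` at the updated latent) is one way to shrink this error, which is the direction the paper's title suggests.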
How snake bites really work
Vipers can strike within 100 milliseconds of launching at their prey. A venomous snake bite is not something you ever want to encounter on a hiking or camping trip. But for those brave scientists who study snakes (aka herpetologists), the mechanics behind the reptiles' fast fangs are more fascinating than fear-inducing. Snakes must move incredibly quickly to sink their fangs into prey before the victim flinches.
MuteSwap: Visual-informed Silent Video Identity Conversion
Liu, Yifan, Fang, Yu, Lin, Zhouhan
Conventional voice conversion modifies voice characteristics from a source speaker to a target speaker, relying on audio input from both sides. However, this process becomes infeasible when clean audio is unavailable, such as in silent videos or noisy environments. In this work, we focus on the task of Silent Face-based Voice Conversion (SFVC), which performs voice conversion entirely from visual inputs: given images of a target speaker and a silent video of a source speaker containing lip motion, SFVC generates speech matching the identity of the target speaker while preserving the speech content in the source silent video. As this task requires generating intelligible speech and converting identity using only visual cues, it is particularly challenging. To address this, we introduce MuteSwap, a novel framework that employs contrastive learning to align cross-modality identities and minimizes mutual information to separate shared visual features. Experimental results show that MuteSwap achieves impressive performance in both speech synthesis and identity conversion, especially under noisy conditions where methods dependent on audio input fail to produce intelligible results, demonstrating both the effectiveness of our training approach and the feasibility of SFVC.
- North America > United States (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
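The cross-modal contrastive alignment the MuteSwap abstract mentions can be sketched with an InfoNCE-style objective: face and voice identity embeddings of the same speaker are pulled together, mismatched pairs pushed apart. This is a generic sketch, not the paper's exact loss; function names and the temperature value are illustrative.

```python
import numpy as np

def info_nce(face_emb, voice_emb, temperature=0.07):
    """InfoNCE-style cross-modal alignment loss (illustrative sketch).

    Row i of face_emb and row i of voice_emb are assumed to come from
    the same speaker; all other pairings serve as negatives.
    """
    # L2-normalize both modalities, then contrast matched pairs.
    f = face_emb / np.linalg.norm(face_emb, axis=1, keepdims=True)
    v = voice_emb / np.linalg.norm(voice_emb, axis=1, keepdims=True)
    logits = f @ v.T / temperature  # (N, N) similarity matrix
    # Matched face/voice pairs sit on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the two embedding sets agree pair-by-pair, the loss approaches zero; unrelated embeddings produce a large loss, which is the signal that drives the identities of the two modalities into a shared space.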
When blood hits clothes, physics takes over
Breakthroughs, discoveries, and DIY tips sent every weekday. Creating mock crime scene evidence can help forensic scientists better read the stories left behind by gruesome bloodstains. To decode some of these bloody stories, all a team from North Carolina State University needed was a combination of high-speed cameras, cotton fabrics, and a bit of pig's blood. Forensic science is a relatively new concept, historically speaking. There are multiple major moments in its development, but the field of study can largely be traced back 115 years ago to a man named Edmond Locard.
Robust Deep Reinforcement Learning in Robotics via Adaptive Gradient-Masked Adversarial Attacks
Zhang, Zongyuan, Duan, Tianyang, Lin, Zheng, Huang, Dong, Fang, Zihan, Sun, Zekai, Xiong, Ling, Liang, Hongbin, Cui, Heming, Cui, Yong, Gao, Yue
Deep reinforcement learning (DRL) has emerged as a promising approach for robotic control, but its real-world deployment remains challenging due to its vulnerability to environmental perturbations. Existing white-box adversarial attack methods, adapted from supervised learning, fail to effectively target DRL agents as they overlook temporal dynamics and indiscriminately perturb all state dimensions, limiting their impact on long-term rewards. To address these challenges, we propose the Adaptive Gradient-Masked Reinforcement (AGMR) Attack, a white-box attack method that combines DRL with a gradient-based soft masking mechanism to dynamically identify critical state dimensions and optimize adversarial policies. AGMR selectively allocates perturbations to the most impactful state features and incorporates a dynamic adjustment mechanism to balance exploration and exploitation during training. Extensive experiments demonstrate that AGMR outperforms state-of-the-art adversarial attack methods in degrading the performance of the victim agent and enhances the victim agent's robustness through adversarial defense mechanisms.
- Asia > Middle East > Jordan (0.04)
- Asia > China > Sichuan Province > Chengdu (0.04)
- Asia > China > Hong Kong (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
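The gradient-based soft masking idea in the AGMR abstract can be illustrated with a small sketch: dimensions whose gradients are more salient receive proportionally more of the perturbation budget, rather than spreading it uniformly across the state. Function and parameter names here are illustrative, not AGMR's actual API.

```python
import numpy as np

def soft_mask_perturbation(state, grad, epsilon=0.1, temperature=1.0):
    """Allocate a perturbation budget across state dimensions by salience.

    Illustrative sketch of gradient-masked perturbation: a softmax over
    gradient magnitudes concentrates the budget `epsilon` on the most
    impactful dimensions instead of perturbing all of them equally.
    """
    salience = np.abs(grad) / temperature
    mask = np.exp(salience - salience.max())
    mask /= mask.sum()  # soft mask over state dimensions, sums to 1
    # Perturb along the gradient sign, scaled by the soft mask.
    return state + epsilon * mask * np.sign(grad)
```

Lowering `temperature` sharpens the mask toward the single most salient dimension; raising it approaches a uniform perturbation, which is the kind of exploration/exploitation dial the abstract alludes to.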
CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in Collaborative Perception
Hu, Senkang, Tao, Yihang, Fang, Zihan, Xu, Guowen, Deng, Yiqin, Kwong, Sam, Fang, Yuguang
Collaborative perception (CP) is a promising method for safe connected and autonomous driving, which enables multiple vehicles to share sensing information to enhance perception performance. However, compared with single-vehicle perception, the openness of a CP system makes it more vulnerable to malicious attacks that can inject malicious information to mislead the perception of an ego vehicle, resulting in severe risks for safe driving. To mitigate such vulnerability, we first propose a new paradigm for malicious agent detection that effectively identifies malicious agents at the feature level without requiring verification of final perception results, significantly reducing computational overhead. Building on this paradigm, we introduce CP-GuardBench, the first comprehensive dataset for training and evaluating malicious agent detection methods for CP systems. Furthermore, we develop a robust defense method called CP-Guard+, which enlarges the margin between the representations of benign and malicious features through a carefully designed Dual-Centered Contrastive Loss (DCCLoss). Finally, we conduct extensive experiments on both CP-GuardBench and V2X-Sim, and demonstrate the superiority of CP-Guard+.
- North America > United States > Virginia > Arlington County > Arlington (0.04)
- North America > United States > Texas > Dallas County > Dallas (0.04)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Transportation > Ground > Road (0.49)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.94)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Scientific Discovery (0.61)
- Information Technology > Artificial Intelligence > Cognitive Science > Creativity & Intelligence (0.61)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.48)
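A dual-centered contrastive objective of the kind the CP-Guard+ abstract describes can be sketched as follows: each class of features is pulled toward its own center while being pushed at least a margin away from the other class's center. This is a generic sketch under those assumptions; the paper's exact DCCLoss may differ in form and weighting.

```python
import numpy as np

def dual_centered_contrastive_loss(benign, malicious, margin=1.0):
    """Illustrative dual-centered contrastive loss.

    Pull term: each feature's squared distance to its own class center.
    Push term: hinge penalty when a feature sits closer than `margin`
    to the opposite class's center.
    """
    c_b = benign.mean(axis=0)     # benign center
    c_m = malicious.mean(axis=0)  # malicious center
    pull = (np.linalg.norm(benign - c_b, axis=1) ** 2).mean() \
         + (np.linalg.norm(malicious - c_m, axis=1) ** 2).mean()
    push = np.maximum(0.0, margin - np.linalg.norm(benign - c_m, axis=1)).mean() \
         + np.maximum(0.0, margin - np.linalg.norm(malicious - c_b, axis=1)).mean()
    return pull + push
```

Well-separated benign and malicious feature clusters drive the push term to zero, so the loss directly rewards the enlarged benign/malicious margin the defense is built around.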
Secure Resource Allocation via Constrained Deep Reinforcement Learning
Sun, Jianfei, Gao, Qiang, Wu, Cong, Li, Yuxian, Wang, Jiacheng, Niyato, Dusit
The proliferation of Internet of Things (IoT) devices and the advent of 6G technologies have introduced computationally intensive tasks that often surpass the processing capabilities of user devices. Efficient and secure resource allocation in serverless multi-cloud edge computing environments is essential for supporting these demands and advancing distributed computing. However, existing solutions frequently struggle with the complexity of multi-cloud infrastructures, robust security integration, and effective application of traditional deep reinforcement learning (DRL) techniques under system constraints. To address these challenges, we present SARMTO, a novel framework that integrates an action-constrained DRL model. SARMTO dynamically balances resource allocation, task offloading, security, and performance by utilizing a Markov decision process formulation, an adaptive security mechanism, and sophisticated optimization techniques. Extensive simulations across varying scenarios, including different task loads, data sizes, and MEC capacities, show that SARMTO consistently outperforms five baseline approaches, achieving up to a 40% reduction in system costs and a 41.5% improvement in energy efficiency over state-of-the-art methods. These enhancements highlight SARMTO's potential to revolutionize resource management in intricate distributed computing environments, opening the door to more efficient and secure IoT and edge computing applications.
- North America > Canada > Quebec > Montreal (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Singapore > Central Region > Singapore (0.04)
- Asia > China > Sichuan Province > Chengdu (0.04)