A Neurosymbolic Framework for Interpretable Cognitive Attack Detection in Augmented Reality
Rongqian Chen, Allison Andreyev, Yanming Xiu, Joshua Chilukuri, Shunav Sen, Mahdi Imani, Bin Li, Maria Gorlatova, Gang Tan, Tian Lan
Augmented Reality (AR) enriches human perception by overlaying virtual elements onto the physical world. However, this tight coupling between virtual and real content makes AR vulnerable to cognitive attacks: manipulations that distort users' semantic understanding of the environment. Existing detection methods largely focus on visual inconsistencies at the pixel or image level, offering limited semantic reasoning or interpretability. To address these limitations, we introduce CADAR, a neuro-symbolic framework for cognitive attack detection in AR that integrates neural and symbolic reasoning. CADAR fuses multimodal vision-language representations from pre-trained models into a perception graph that captures objects, relations, and temporal contextual salience. Building on this structure, a particle-filter-based statistical reasoning module infers anomalies in semantic dynamics to reveal cognitive attacks. This combination provides both the adaptability of modern vision-language models and the interpretability of probabilistic symbolic reasoning. Preliminary experiments on an AR cognitive-attack dataset demonstrate consistent advantages over existing approaches, highlighting the potential of neuro-symbolic methods for robust and interpretable AR security.
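The particle-filter-based reasoning step described in the abstract can be sketched in miniature. The sketch below tracks a hypothetical scalar "semantic salience" signal with a bootstrap particle filter and flags a time step as anomalous when the observation is improbable under the tracked dynamics; the function name, the random-walk state model, and all parameters are illustrative assumptions, not CADAR's actual implementation.

```python
import numpy as np

def particle_filter_anomaly(observations, n_particles=500, process_std=0.1,
                            obs_std=0.2, threshold=1e-3):
    """Flag time steps whose observations are improbable under a simple
    random-walk state model (illustrative stand-in, not CADAR's module)."""
    rng = np.random.default_rng(0)
    # Initialize particles around the first observation.
    particles = observations[0] + rng.normal(0, obs_std, n_particles)
    flags = []
    for z in observations:
        # Propagate particles under a random-walk dynamics model.
        particles = particles + rng.normal(0, process_std, n_particles)
        # Weight particles by a Gaussian observation likelihood.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        avg_lik = w.mean()
        # A near-zero average likelihood means the observation is
        # inconsistent with the tracked semantic dynamics: flag it.
        flags.append(bool(avg_lik < threshold))
        if avg_lik > 0:
            w = w / w.sum()
            # Multinomial resampling (systematic resampling is the usual
            # refinement; multinomial keeps the sketch short).
            idx = rng.choice(n_particles, size=n_particles, p=w)
            particles = particles[idx]
    return flags
```

A smooth signal yields no flags, while an abrupt jump (a stand-in for a cognitive attack distorting scene semantics) is flagged, which is the interpretable signal the probabilistic reasoning layer provides.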
Council Post: AI Vs. AI: The Battle Against Human-Level Cognitive Threats
Arguments and evidence for the benefits of artificial intelligence (AI) in our daily lives are everywhere, yet so is fear about its future. While debate continues over whether today's AI truly works in our best interest, what many people don't realize is that cybercriminals already use AI to attack them at scale with threats, such as cognitive attacks, that are only possible with AI. One of the greatest dangers AI poses today is its potential for abuse by cybercriminals, specifically its capacity to deceive people and trick them into actions with unwanted or underestimated consequences.