One SPACE to Rule Them All: Jointly Mitigating Factuality and Faithfulness Hallucinations in LLMs
Pengbo Wang, Chaozhuo Li, Chenxu Wang, Liwen Zheng, Litian Zhang, Xi Zhang
LLMs have demonstrated unprecedented capabilities in natural language processing, yet their practical deployment remains hindered by persistent factuality and faithfulness hallucinations. While existing methods address these hallucination types independently, they inadvertently induce performance trade-offs, as interventions targeting one type often exacerbate the other. Through empirical and theoretical analysis of activation space dynamics in LLMs, we reveal that these hallucination categories share overlapping subspaces within neural representations, presenting an opportunity for concurrent mitigation. To harness this insight, we propose SPACE, a unified framework that jointly enhances factuality and faithfulness by editing shared activation subspaces. SPACE establishes a geometric foundation for shared subspace existence through dual-task feature modeling, then identifies and edits these subspaces via a hybrid probe strategy combining spectral clustering and attention head saliency scoring. Experimental results across multiple benchmark datasets demonstrate the superiority of our approach.
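The core idea of editing a shared activation subspace can be illustrated with a toy sketch. This is not the paper's actual probe: the subspace here is estimated with a plain SVD over activation-difference vectors rather than SPACE's spectral clustering and attention-head saliency scoring, and all function names and shapes are hypothetical.

```python
import numpy as np

def shared_subspace(fact_acts, faith_acts, k=1):
    """Estimate a low-dimensional subspace shared by two sets of
    hallucination-related activation-difference vectors.

    Illustrative stand-in: a mean-centered SVD takes the place of the
    paper's hybrid spectral-clustering / saliency probe.
    """
    X = np.vstack([fact_acts, faith_acts])          # (n, d)
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:k]                                   # (k, d) orthonormal rows

def edit_activation(h, basis, alpha=1.0):
    """Damp (alpha < 1) or remove (alpha = 1) the component of an
    activation vector h that lies in the shared subspace."""
    proj = basis.T @ (basis @ h)                    # projection onto subspace
    return h - alpha * proj
```

With `alpha=1.0` an activation lying entirely inside the subspace is projected out completely, while components orthogonal to it are untouched; a joint intervention would apply this edit once to affect both hallucination types.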
SPACE: SPike-Aware Consistency Enhancement for Test-Time Adaptation in Spiking Neural Networks
Xinyu Luo, Kecheng Chen, Pao-Sheng Vincent Sun, Chris Xing Tian, Arindam Basu, Haoliang Li
Spiking Neural Networks (SNNs), as a biologically plausible alternative to Artificial Neural Networks (ANNs), offer advantages in energy efficiency and temporal processing. However, SNNs are highly sensitive to distribution shifts, which can significantly degrade their performance in real-world scenarios. Traditional test-time adaptation (TTA) methods designed for ANNs often fail to address the unique computational dynamics of SNNs, such as sparsity and temporal spiking behavior. To address these challenges, we propose SPike-Aware Consistency Enhancement (SPACE), the first source-free, single-instance TTA method designed specifically for SNNs. SPACE leverages the inherent spike dynamics of SNNs to maximize the consistency of spike-behavior-based local feature maps across augmented versions of a single test sample, enabling robust adaptation without requiring source data. We evaluate SPACE on multiple datasets. SPACE also generalizes across diverse network architectures, consistently improving the performance of CNN-, Transformer-, and ConvLSTM-based SNNs. Experimental results show that SPACE outperforms state-of-the-art ANN methods while maintaining lower computational cost, highlighting its effectiveness and robustness for SNNs in real-world settings. The code will be available at https://github.com/ethanxyluo/SPACE.
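The consistency objective described in the abstract can be sketched with a toy rate-coded integrate-and-fire layer. This is a minimal illustration, not the paper's implementation: the neuron model, augmentation scheme, and function names are all simplified assumptions.

```python
import numpy as np

def spike_rate_map(x, w, threshold=1.0, steps=8):
    """Toy integrate-and-fire layer: accumulate input current over
    `steps` timesteps, emit a spike when the membrane potential
    crosses `threshold`, reset, and return the firing-rate map."""
    v = np.zeros(w.shape[1])                 # membrane potentials
    spikes = np.zeros(w.shape[1])
    for _ in range(steps):
        v += x @ w                           # integrate input current
        fired = v >= threshold
        spikes += fired                      # count spikes per neuron
        v[fired] = 0.0                       # reset fired neurons
    return spikes / steps                    # firing rates in [0, 1]

def consistency_loss(rate_maps):
    """Mean squared deviation of each augmented view's rate map from
    the mean map -- the quantity a SPACE-style objective would drive
    down at test time on a single sample."""
    mean = rate_maps.mean(axis=0)
    return float(np.mean((rate_maps - mean) ** 2))
```

In use, one would generate several augmented views of the single test input, compute a rate map per view, and adapt parameters to reduce `consistency_loss`; identical rate maps across views give a loss of exactly zero.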
SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition
Zhixuan Lin, Yi-Fu Wu, Skand Vishwanath Peri, Weihao Sun, Gautam Singh, Fei Deng, Jindong Jiang, Sungjin Ahn
The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieving higher-level cognition. Previous approaches to unsupervised object-oriented scene representation learning are based on either spatial-attention or scene-mixture approaches and are limited in scalability, which is a major obstacle to modeling real-world scenes. In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework combining the best of the spatial-attention and scene-mixture approaches. SPACE can explicitly provide factorized object representations for foreground objects while also decomposing background segments of complex morphology. Previous models are good at one of these, but not both. SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial attention, and is thus applicable to scenes with a large number of objects without performance degradation. We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS. Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page
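The split between per-cell foreground latents and a mixture background can be illustrated with a minimal compositing sketch. This is only a rendering-step caricature under assumed shapes (N foreground cells of RGBA glimpses, K background components), not SPACE's actual generative model or inference network.

```python
import numpy as np

def composite(fg_rgba_cells, bg_mixture, bg_weights):
    """Illustrative SPACE-style renderer: alpha-composite per-cell
    foreground glimpses (the spatial-attention part) over a weighted
    mixture of background components (the scene-mixture part).

    fg_rgba_cells : (N, H, W, 4) per-cell RGBA foreground maps
    bg_mixture    : (K, H, W, 3) background component images
    bg_weights    : (K,) mixture weights summing to 1
    """
    # Background: pixel-wise weighted mixture of K components
    bg = np.einsum('k,khwc->hwc', bg_weights, bg_mixture)
    # Foreground: sum cell contributions; clip total alpha to [0, 1]
    fg_rgb = fg_rgba_cells[..., :3].sum(axis=0)
    alpha = np.clip(fg_rgba_cells[..., 3:].sum(axis=0), 0.0, 1.0)
    return alpha * fg_rgb + (1.0 - alpha) * bg
```

Because each of the N cells is processed independently, the foreground pass parallelizes over cells, which mirrors how parallel spatial attention avoids the sequential bottleneck of earlier models.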