SALMAN: Stability Analysis of Language Models Through the Maps Between Graph-based Manifolds
Wuxinlin Cheng, Yupeng Cao, Jinwen Wu, Koduvayur Subbalakshmi, Tian Han, Zhuo Feng
arXiv.org Artificial Intelligence
Recent strides in pretrained transformer-based language models have propelled state-of-the-art performance on numerous NLP tasks. Yet, as these models grow in size and deployment, their robustness under input perturbations becomes an increasingly urgent question. Existing robustness methods often diverge between small-parameter models and large language models (LLMs), and they typically rely on labor-intensive, sample-specific adversarial designs. In this paper, we propose SALMAN, a unified, local (sample-level) robustness framework that evaluates model stability without modifying internal parameters or resorting to complex perturbation heuristics. Central to our approach is a novel Distance Mapping Distortion (DMD) measure, which ranks each sample's susceptibility by comparing its input-to-output distance mapping, computed with near-linear complexity. By demonstrating significant gains in attack efficiency and robust training, we position our framework as a practical, model-agnostic tool for advancing the reliability of transformer-based NLP systems.
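The abstract's core idea, ranking samples by how much the model's mapping distorts pairwise distances between the input and output embedding spaces, can be sketched roughly as follows. This is an illustrative guess at the mechanism, not the paper's actual DMD formula: the function name `dmd_scores`, the choice of Euclidean distance, the k-nearest-neighbour restriction, and the expansion-times-contraction distortion score are all assumptions introduced here for illustration.

```python
import numpy as np

def dmd_scores(X_in, X_out, k=5):
    """Rank samples by a distance-mapping-distortion-style score.

    Hypothetical sketch (not the paper's exact DMD definition): for each
    sample, compare distances to its k nearest input-space neighbours with
    the corresponding distances in the output (hidden-state) space, and
    score the sample by how much the mapping both expands and contracts
    that neighbourhood.
    """
    n = X_in.shape[0]
    # Pairwise Euclidean distance matrices in input and output spaces.
    d_in = np.linalg.norm(X_in[:, None] - X_in[None, :], axis=-1)
    d_out = np.linalg.norm(X_out[:, None] - X_out[None, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        # k nearest input-space neighbours of sample i (excluding itself).
        nbrs = np.argsort(d_in[i])[1:k + 1]
        ratio = d_out[i, nbrs] / np.maximum(d_in[i, nbrs], 1e-12)
        # Distortion = worst expansion times worst contraction; an exact
        # isometry gives 1, larger values suggest a less stable sample.
        scores[i] = ratio.max() / np.maximum(ratio.min(), 1e-12)
    return scores
```

Under this reading, an identity mapping yields a score of 1 for every sample, and samples whose neighbourhoods the model warps most receive the highest scores, which is what makes a single ranking pass (rather than per-sample adversarial search) plausible.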
Aug-27-2025