FT-NCFM: An Influence-Aware Data Distillation Framework for Efficient VLA Models
Kewei Chen, Yayu Long, Shuai Li, Mingsheng Shang
–arXiv.org Artificial Intelligence
The powerful generalization of Vision-Language-Action (VLA) models is bottlenecked by their reliance on massive, redundant, and unevenly valuable datasets, which hinders widespread deployment. Existing model-centric optimization paths, such as model compression (which often degrades performance) or policy distillation (whose products are model-dependent and lack generality), fail to address this challenge at the data level. To address it, this paper introduces FT-NCFM, a fundamentally different, data-centric generative data distillation framework. Our framework employs a self-contained Fact-Tracing (FT) engine that combines causal attribution with programmatic contrastive verification to assess the intrinsic value of each sample. Guided by these assessments, an adversarial NCFM process synthesizes a model-agnostic, information-dense, and reusable data asset. Experimental results on several mainstream VLA benchmarks show that models trained on our distilled coreset, comprising just 5% of the original data, achieve 85-90% of the success rate obtained with full-dataset training while reducing training time by over 80%. Our work demonstrates that intelligent data distillation is a highly promising path toward building efficient, high-performance VLA models.
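The listing includes no code, but the core loop the abstract describes, scoring each sample's value and retaining only a small high-value coreset, can be sketched generically. The snippet below is a minimal, hypothetical illustration of influence-aware coreset selection in Python; the scoring proxy (a contrastive loss difference) and all names (`score_samples`, `select_coreset`, `keep_fraction`) are assumptions for illustration, not the authors' FT-NCFM implementation, which additionally synthesizes new data rather than merely selecting existing samples.

```python
# Minimal sketch of influence-aware coreset selection (illustrative only, not FT-NCFM).
# Each sample gets a scalar "value" score; only the top fraction is kept for training.
import numpy as np

def score_samples(loss_clean: np.ndarray, loss_perturbed: np.ndarray) -> np.ndarray:
    """Assign a value score per sample.

    Stand-in for an influence/attribution signal: samples whose loss changes most
    under a programmatic perturbation are treated as more informative.
    """
    return np.abs(loss_perturbed - loss_clean)

def select_coreset(scores: np.ndarray, keep_fraction: float = 0.05) -> np.ndarray:
    """Return indices of the highest-scoring samples (e.g., the top 5%)."""
    k = max(1, int(len(scores) * keep_fraction))
    return np.argsort(scores)[-k:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10_000                                   # synthetic dataset size
    loss_clean = rng.random(n)                   # placeholder per-sample losses
    loss_perturbed = loss_clean + rng.normal(0.0, 0.1, n)
    idx = select_coreset(score_samples(loss_clean, loss_perturbed))
    print(f"kept {len(idx)} of {n} samples ({len(idx) / n:.0%})")
```

In the actual framework, the per-sample scores would come from the Fact-Tracing engine's causal attribution and contrastive verification, and the 5% budget used here simply mirrors the coreset fraction reported in the abstract.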
Nov-21-2025
- Country:
  - Asia > China
    - Chongqing Province > Chongqing (0.04)
  - Europe > Finland
    - Northern Ostrobothnia > Oulu (0.04)
- Genre:
  - Research Report > Experimental Study (0.46)
- Industry:
  - Education (0.46)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (0.68)
    - Natural Language (1.00)
    - Robots (1.00)