FMCE-Net++: Feature Map Convergence Evaluation and Training
Zhibo Zhu, Renyu Huang, Lei He
arXiv.org Artificial Intelligence
Deep Neural Networks (DNNs) face interpretability challenges due to their opaque internal representations. While Feature Map Convergence Evaluation (FMCE) quantifies module-level convergence via Feature Map Convergence Scores (FMCS), it lacks experimental validation and closed-loop integration. To address this limitation, we propose FMCE-Net++, a novel training framework that integrates a pretrained, frozen FMCE-Net as an auxiliary head. This module generates FMCS predictions, which, combined with task labels, jointly supervise backbone optimization through a Representation Auxiliary Loss (RAL). The RAL dynamically balances the primary classification loss against feature convergence optimization via a tunable Representation Abstraction Factor. Extensive experiments on MNIST, CIFAR-10, FashionMNIST, and CIFAR-100 demonstrate that FMCE-Net++ consistently improves model performance without architectural modifications or additional data. Key outcomes include accuracy gains of $+1.16$ pp (ResNet-50/CIFAR-10) and $+1.08$ pp (ShuffleNet v2/CIFAR-100), validating that FMCE-Net++ can effectively raise state-of-the-art performance ceilings.
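The joint objective described above can be sketched as a classification loss plus an auxiliary convergence term weighted by a tunable factor. The following is a minimal, hypothetical illustration only: the abstract does not give the exact RAL formula, so the `representation_auxiliary_loss` form and the factor name `gamma` are assumptions, and the frozen FMCE-Net head is abstracted away into a scalar FMCS prediction.

```python
import numpy as np

def cross_entropy(logits, label):
    """Standard softmax cross-entropy for one sample (numerically stable)."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def representation_auxiliary_loss(fmcs_pred):
    """Assumed form of the RAL: penalize low convergence scores
    (taking FMCS in [0, 1]), pushing the backbone toward
    better-converged feature maps. Illustrative only."""
    return 1.0 - fmcs_pred

def total_loss(logits, label, fmcs_pred, gamma=0.1):
    """gamma stands in for the tunable Representation Abstraction
    Factor, balancing the primary classification loss against
    feature-convergence optimization."""
    return cross_entropy(logits, label) + gamma * representation_auxiliary_loss(fmcs_pred)

# Toy example: backbone logits for one sample, with the frozen
# FMCE-Net head predicting an FMCS of 0.8 for its feature maps.
logits = np.array([2.0, 0.5, -1.0])
loss = total_loss(logits, label=0, fmcs_pred=0.8, gamma=0.1)
```

In practice both terms would be backpropagated through the backbone while the FMCE-Net head stays frozen, so only the main network's parameters are updated.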
Aug-19-2025