GLAI: GreenLightningAI for Accelerated Training through Knowledge Decoupling
Mestre, Jose I., Fernández-Hernández, Alberto, Pérez-Corral, Cristian, Dolz, Manuel F., Duato, Jose, Quintana-Ortí, Enrique S.
arXiv.org Artificial Intelligence
In this work we introduce GreenLightningAI (GLAI), a new architectural block designed as an alternative to conventional Multilayer Perceptrons (MLPs). The central idea is to separate two types of knowledge that are usually entangled during training: (i) structural knowledge, encoded by the stable activation patterns induced by Rectified Linear Unit (ReLU) activations; and (ii) quantitative knowledge, carried by the numerical weights and biases. By fixing the structure once it has stabilized, GLAI reformulates the MLP as a combination of paths, where only the quantitative component is optimized. This reformulation retains the universal approximation capabilities of MLPs, yet achieves a more efficient training process, reducing training time by 40% on average across the cases examined in this study. Crucially, GLAI is not just another classifier, but a generic block that can replace MLPs wherever they are used, from supervised heads with frozen backbones to projection layers in self-supervised learning or few-shot classifiers. Across diverse experimental setups, GLAI consistently matches or exceeds the accuracy of MLPs with an equivalent number of parameters, while converging faster. Overall, GLAI establishes a new design principle that opens a direction for future integration into large-scale architectures such as Transformers, where MLP blocks dominate the computational footprint.
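The idea of decoupling structural from quantitative knowledge can be illustrated with a minimal sketch. This is our own toy example, not the paper's implementation: for a two-layer ReLU MLP, the ReLU activation pattern (which units fire for a given input) is recorded once and then frozen as a binary mask. With the mask fixed, each input follows a fixed "path" through the network, and the forward pass no longer depends on the nonlinearity, so only the numerical weights and biases remain to be optimized.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_forward(x, W1, b1, W2, b2):
    """Standard two-layer ReLU MLP; also returns the activation pattern."""
    h = W1 @ x + b1
    mask = (h > 0).astype(h.dtype)  # structural knowledge: which units fire
    return W2 @ (mask * h) + b2, mask

def frozen_forward(x, W1, b1, W2, b2, mask):
    """Forward pass with the activation pattern fixed in advance.

    The ReLU is replaced by a precomputed binary mask, so the output is
    determined entirely by the quantitative knowledge (weights and biases)
    flowing along the frozen paths."""
    return W2 @ (mask * (W1 @ x + b1)) + b2

# Toy dimensions (hypothetical, for illustration only).
x = rng.normal(size=4)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

y, mask = relu_forward(x, W1, b1, W2, b2)
y_frozen = frozen_forward(x, W1, b1, W2, b2, mask)
assert np.allclose(y, y_frozen)  # identical while the pattern is stable
```

As long as the recorded pattern matches what the ReLUs would produce, the two forward passes agree exactly; the gain the paper reports comes from optimizing only the weights once the pattern has stabilized.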
Oct-2-2025