WISCA: A Lightweight Model Transition Method to Improve LLM Training via Weight Scaling

Li, Jiacheng, Tan, Jianchao, Yang, Zhidong, Sun, Pingwei, Huo, Feiye, Qin, Jiayu, Sun, Yerui, Xie, Yuchen, Cai, Xunliang, Zhang, Xiangyu, He, Maoxin, Tan, Guangming, Jia, Weile, Zhao, Tong

arXiv.org Artificial Intelligence

The Transformer architecture now dominates the LLM field. Recent advances in training optimization for Transformer-based large language models (LLMs) primarily focus on architectural modifications or optimizer adjustments. However, these approaches lack systematic optimization of weight patterns during training, where "weight pattern" refers to the distribution and relative magnitudes of weight parameters in a neural network. To address this issue, we propose a Weight Scaling method called WISCA to enhance training efficiency and model quality by strategically improving neural network weight patterns without changing network structures. By rescaling weights while preserving model outputs, WISCA indirectly optimizes the model's training trajectory. Experiments demonstrate that WISCA significantly improves convergence quality (measured by generalization capability and loss reduction), particularly in LLMs with Grouped Query Attention (GQA) architectures and LoRA fine-tuning tasks. Empirical results show a 5.6% average improvement on zero-shot validation tasks and a 2.12% average reduction in training perplexity across multiple architectures.
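The core idea of rescaling weights while leaving model outputs unchanged can be illustrated with a minimal sketch. This is not the paper's actual WISCA algorithm, only a toy demonstration of output-preserving rescaling in a pair of consecutive linear maps: scaling the first layer by a factor alpha and the second by 1/alpha alters the weight pattern but not the composed function.

```python
import numpy as np

# Toy illustration (an assumption, not the authors' implementation):
# for consecutive linear layers y = W2 @ (W1 @ x), the reparameterization
# W1 -> alpha * W1, W2 -> W2 / alpha leaves y unchanged while changing
# the relative magnitudes of the weights (the "weight pattern").
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # first linear layer
W2 = rng.normal(size=(8, 4))   # second linear layer
x = rng.normal(size=8)

alpha = 2.5                    # hypothetical scaling factor
W1_s = alpha * W1
W2_s = W2 / alpha

y = W2 @ (W1 @ x)
y_s = W2_s @ (W1_s @ x)
assert np.allclose(y, y_s)     # same output, different weight pattern
```

Because the function computed by the network is invariant under such rescalings, a method can pick the representative with the most favorable weight pattern without altering what the model predicts at that step.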


WISCA: A Consensus-Based Approach to Harmonizing Interpretability in Tabular Datasets

Banegas-Luna, Antonio Jesús, Pérez-Sánchez, Horacio, Martínez-Cortés, Carlos

arXiv.org Machine Learning

While predictive accuracy is often prioritized in machine learning (ML) models, interpretability remains essential in scientific and high-stakes domains. However, diverse interpretability algorithms frequently yield conflicting explanations, highlighting the need for consensus to harmonize results. In this study, six ML models were trained on six synthetic datasets with known ground truths, utilizing various model-agnostic interpretability techniques. Consensus explanations were generated using established methods and a novel approach: WISCA (Weighted Scaled Consensus Attributions), which integrates class probability and normalized attributions. WISCA consistently aligned with the most reliable individual method, underscoring the value of robust consensus strategies in improving explanation reliability.