MAST: Model-Agnostic Sparsified Training
Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik
arXiv.org Artificial Intelligence
We introduce a novel optimization problem formulation that departs from the conventional way of minimizing machine learning model loss as a black-box function. Unlike traditional formulations, the proposed approach explicitly incorporates an initially pre-trained model and random sketch operators, allowing for sparsification of both the model and the gradient during training. We establish insightful properties of the proposed objective function and highlight its connections to the standard formulation. Furthermore, we present several variants of the Stochastic Gradient Descent (SGD) method adapted to the new problem formulation, including SGD with general sampling, a distributed version, and SGD with variance reduction techniques. We achieve tighter convergence rates and relax assumptions, bridging the gap between theoretical principles and practical applications and covering several important techniques such as Dropout and Sparse training. This work presents promising opportunities to enhance the theoretical understanding of model training through a sparsification-aware optimization approach.
Nov-27-2023
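To make the sparsification-aware formulation described in the abstract concrete, the following is a minimal illustrative sketch (not code from the paper) of one SGD step on an objective of the form E_S[f(S x)], assuming a diagonal Bernoulli (Dropout-style) sketch with inverted scaling so that E[S] = I. The names bernoulli_sketch and mast_sgd_step, and the toy quadratic used for the demonstration, are our own illustrative choices.

```python
import numpy as np

def bernoulli_sketch(dim, keep_prob, rng):
    """Diagonal of a random sketch S: keep each coordinate with probability
    keep_prob and rescale by 1/keep_prob, so the sketch is unbiased (E[S] = I)."""
    mask = rng.random(dim) < keep_prob
    return mask / keep_prob

def mast_sgd_step(x, grad_fn, keep_prob, lr, rng):
    """One illustrative SGD step on the sparsified objective E_S[f(S x)]:
    sample a sketch S, evaluate the gradient at the sparsified model S x,
    and apply the chain-rule gradient S^T grad f(S x) (S is diagonal here)."""
    s = bernoulli_sketch(x.shape[0], keep_prob, rng)
    sparsified_model = s * x          # model sparsification (Dropout-like)
    g = grad_fn(sparsified_model)     # gradient of f at the sketched point
    return x - lr * (s * g)           # gradient is itself sparsified by S

# Usage on a toy quadratic f(x) = 0.5 * ||A x - b||^2 (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
grad_fn = lambda x: A.T @ (A @ x - b)
x = np.zeros(10)
for _ in range(500):
    x = mast_sgd_step(x, grad_fn, keep_prob=0.8, lr=0.01, rng=rng)
```

With a diagonal sketch, sparsifying the model also sparsifies the update, since coordinates dropped by S receive no gradient signal in that step; other sketch choices from the paper's framework would change only the bernoulli_sketch component in this sketch.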