Global Convergence of Four-Layer Matrix Factorization under Random Initialization

Minrui Luo, Weihang Xu, Xiang Gao, Maryam Fazel, Simon Shaolei Du

(Work done while Minrui Luo was visiting the University of Washington.)

arXiv.org Artificial Intelligence 

Gradient descent dynamics on the deep matrix factorization problem are extensively studied as a simplified theoretical model for deep neural networks. Although the convergence theory for two-layer matrix factorization is well established, no global convergence guarantee for general deep matrix factorization under random initialization has been established to date. To address this gap, we provide a polynomial-time global convergence guarantee for randomly initialized gradient descent on four-layer matrix factorization, given certain conditions on the target matrix and a standard balanced regularization term. Our analysis employs new techniques to show saddle-avoidance properties of gradient descent dynamics, and extends previous theories to characterize the change in eigenvalues of layer weights.

The deep matrix factorization problem of depth $N$ is to minimize

$$f(W_1, \dots, W_N) = \frac{1}{2} \big\| W_N W_{N-1} \cdots W_1 - \Sigma \big\|_F^2, \qquad W_j \in \mathbb{F}^{d_j \times d_{j-1}}, \; j = 1, \dots, N,$$

where $\Sigma \in \mathbb{F}^{d_N \times d_0}$ is a given target matrix. Here $\mathbb{F} \in \{\mathbb{C}, \mathbb{R}\}$, as we consider both real and complex matrices in this paper. Following a long line of works (Arora et al., 2019a; Jiang et al., 2023; Ye & Du, 2021; Chou et al., 2024), we aim to understand the dynamics of gradient descent (GD) on this problem:

$$W_j(t+1) = W_j(t) - \eta \nabla_{W_j} f\big(W_1(t), \dots, W_N(t)\big), \qquad j = 1, \dots, N.$$

While the model's representational power is independent of the depth $N$, the deep matrix factorization problem is naturally motivated by the goal of understanding the benefits of depth in deep learning (see, e.g., Arora et al. (2019b)).
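To make the setup concrete, here is a minimal NumPy sketch (not the authors' code) of randomly initialized GD on this objective for $N = 4$ in the real case $\mathbb{F} = \mathbb{R}$, with a balanced regularizer of the commonly used form $(\lambda/2) \sum_j \|W_{j+1}^\top W_{j+1} - W_j W_j^\top\|_F^2$. The square layer shapes, the symmetric PSD test target, and all hyperparameters (`eta`, `lam`, `scale`, `steps`) are illustrative assumptions rather than the paper's exact conditions.

```python
import numpy as np

def chain(mats, d):
    """Ordered product of a list of matrices; the d x d identity if empty."""
    P = np.eye(d)
    for M in mats:
        P = P @ M
    return P

def gd_deep_factorization(Sigma, N=4, eta=5e-3, lam=0.1, scale=0.1,
                          steps=20000, seed=0):
    """Full-batch GD on f(W) = 0.5 * ||W_N ... W_1 - Sigma||_F^2 plus the
    balanced regularizer (lam/2) * sum_j ||W_{j+1}^T W_{j+1} - W_j W_j^T||_F^2,
    from small random Gaussian initialization (real case, square layers).
    W[j] stores layer W_{j+1}; for complex matrices, .T would become .conj().T.
    """
    d = Sigma.shape[0]
    rng = np.random.default_rng(seed)
    W = [scale * rng.standard_normal((d, d)) for _ in range(N)]

    for _ in range(steps):
        R = chain(W[::-1], d) - Sigma  # residual W_N ... W_1 - Sigma
        grads = []
        for j in range(N):
            A = chain([W[k] for k in range(N - 1, j, -1)], d)   # layers above j
            B = chain([W[k] for k in range(j - 1, -1, -1)], d)  # layers below j
            g = A.T @ R @ B.T  # gradient of the fitting loss w.r.t. W[j]
            if j < N - 1:      # from the ||W[j+1]^T W[j+1] - W[j] W[j]^T||^2 term
                g -= 2 * lam * (W[j + 1].T @ W[j + 1] - W[j] @ W[j].T) @ W[j]
            if j > 0:          # from the ||W[j]^T W[j] - W[j-1] W[j-1]^T||^2 term
                g += 2 * lam * W[j] @ (W[j].T @ W[j] - W[j - 1] @ W[j - 1].T)
            grads.append(g)
        for j in range(N):     # simultaneous update of all layers
            W[j] -= eta * grads[j]
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d = 8
    # Hypothetical well-conditioned symmetric PSD target; the paper's exact
    # conditions on the target matrix are not reproduced here.
    U = np.linalg.qr(rng.standard_normal((d, d)))[0]
    Sigma = U @ np.diag(np.linspace(1.0, 2.0, d)) @ U.T
    W = gd_deep_factorization(Sigma)
    print("final fitting error:",
          np.linalg.norm(chain(W[::-1], d) - Sigma, "fro"))
```

Note that all layer gradients are evaluated at the current iterate before any layer is updated, so the sketch implements the simultaneous GD update above rather than layer-by-layer coordinate descent.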