Dong, Shuyu
Alternating minimization algorithms for graph regularized tensor completion
Guan, Yu, Dong, Shuyu, Gao, Bin, Absil, P.-A., Glineur, François
We consider a Canonical Polyadic (CP) decomposition approach to low-rank tensor completion (LRTC) that incorporates external pairwise similarity relations through graph Laplacian regularization on the CP factor matrices. Graph regularization improves the learning accuracy of LRTC, but at the same time induces coupling graph Laplacian terms that hinder the optimization of the tensor completion model. To solve graph-regularized LRTC, we propose efficient alternating minimization algorithms that leverage the block structure of the underlying CP decomposition-based model. For the subproblems of alternating minimization, a linear conjugate gradient subroutine is specifically adapted to graph-regularized LRTC. Alternatively, we circumvent the coupling effects of the graph Laplacian terms by using an alternating direction method of multipliers (ADMM). Based on the Kurdyka-{\L}ojasiewicz property, we show that the sequence generated by the proposed algorithms converges globally to a critical point of the objective function. The complexity and convergence rate are also derived. Numerical experiments on synthetic and real data show that the graph-regularized tensor completion model yields better recovery results than its counterpart without graph regularization, and that the proposed algorithms achieve gains in time efficiency over existing algorithms.
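Purely as an illustration of the block-wise scheme described in the abstract, the following is a minimal NumPy sketch of alternating minimization for a third-order graph-regularized CP completion model, where each quadratic subproblem is solved inexactly by a matrix-form linear conjugate gradient. All names (`graph_cp_completion`, `alpha`, `Ls`) and default parameters are illustrative assumptions, not the paper's code or exact formulation.

```python
import numpy as np

def cp_reconstruct(U, V, W):
    # [[U, V, W]]_{ijk} = sum_r U[i,r] V[j,r] W[k,r]
    return np.einsum('ir,jr,kr->ijk', U, V, W)

def mttkrp(X, A, B):
    # mode-1 matricized tensor times Khatri-Rao product, without forming it explicitly
    return np.einsum('ijk,jr,kr->ir', X, A, B)

def cg_solve(apply_A, B, X0, n_iter=20, tol=1e-10):
    # linear conjugate gradient in matrix form (Frobenius inner product)
    X = X0.copy()
    R = B - apply_A(X)
    P = R.copy()
    rs = np.sum(R * R)
    for _ in range(n_iter):
        if rs < tol:
            break
        AP = apply_A(P)
        step = rs / np.sum(P * AP)
        X = X + step * P
        R = R - step * AP
        rs_new = np.sum(R * R)
        P = R + (rs_new / rs) * P
        rs = rs_new
    return X

def update_factor(T, mask, U, V, W, L, alpha, cg_iters=15):
    # quadratic subproblem in U (V, W fixed):
    #   min_U 0.5 * ||mask * (T - [[U, V, W]])||_F^2 + (alpha/2) * tr(U^T L U)
    def apply_A(X):
        return mttkrp(mask * cp_reconstruct(X, V, W), V, W) + alpha * (L @ X)
    B = mttkrp(mask * T, V, W)
    return cg_solve(apply_A, B, U, n_iter=cg_iters)

def graph_cp_completion(T, mask, r, Ls, alpha=0.1, n_outer=30, seed=0):
    # T: observed tensor (I x J x K), mask: 0/1 array of observed entries,
    # Ls: list of three graph Laplacians, one per CP factor
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    U = rng.standard_normal((I, r))
    V = rng.standard_normal((J, r))
    W = rng.standard_normal((K, r))
    for _ in range(n_outer):
        U = update_factor(T, mask, U, V, W, Ls[0], alpha)
        # permute modes so the factor being updated plays the role of mode 1
        V = update_factor(T.transpose(1, 0, 2), mask.transpose(1, 0, 2), V, U, W, Ls[1], alpha)
        W = update_factor(T.transpose(2, 0, 1), mask.transpose(2, 0, 1), W, U, V, Ls[2], alpha)
    return U, V, W
```

The graph Laplacian couples the rows of each factor matrix, which is why the sketch solves each block subproblem as a single linear system via conjugate gradient rather than row by row.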
Learning Large Causal Structures from Inverse Covariance Matrix via Matrix Decomposition
Dong, Shuyu, Uemura, Kento, Fujii, Akito, Chang, Shuang, Koyanagi, Yusuke, Maruhashi, Koji, Sebag, Michèle
Learning causal structures from observational data is a fundamental yet highly complex problem when the number of variables is large. In this paper, we start from linear structural equation models (SEMs) and investigate ways of learning causal structures from the inverse covariance matrix. The proposed method, called $\mathcal{O}$-ICID (for {\it Independence-preserving} Decomposition from Oracle Inverse Covariance matrix), is based on continuous optimization of a matrix decomposition that preserves the nonzero pattern of the inverse covariance matrix. We show that $\mathcal{O}$-ICID provides an efficient way to identify the true directed acyclic graph (DAG) when the noise variances are known. With weaker prior information, the proposed method yields directed graph solutions that are useful for more refined causal discovery. The method has low complexity when the true DAG has bounded node degrees, as reflected by its time efficiency in experiments compared with state-of-the-art algorithms.
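For the linear SEM $X = B^\top X + \varepsilon$ with noise covariance $D = \mathrm{diag}(\sigma_i^2)$, the inverse covariance satisfies $\Theta = (I - B)\,D^{-1}(I - B)^\top$. The sketch below is only a plausible illustration of fitting such a decomposition, not the paper's $\mathcal{O}$-ICID algorithm: it runs plain proximal gradient descent while restricting candidate edges to the off-diagonal support of $\Theta$, and all names and step sizes (`fit_B_from_precision`, `lr`, `l1`) are assumptions.

```python
import numpy as np

def fit_B_from_precision(Theta, noise_var, lr=0.05, n_iter=2000, l1=1e-3, tol=1e-8):
    """Fit B so that Theta ~ (I - B) diag(1/noise_var) (I - B)^T,
    keeping candidate edges on the off-diagonal support of Theta."""
    d = Theta.shape[0]
    Dinv = np.diag(1.0 / noise_var)
    support = (np.abs(Theta) > tol) & ~np.eye(d, dtype=bool)
    B = np.zeros((d, d))
    for _ in range(n_iter):
        A = np.eye(d) - B
        R = Theta - A @ Dinv @ A.T              # decomposition residual (symmetric)
        grad = 4.0 * R @ A @ Dinv               # gradient of ||R||_F^2 w.r.t. B
        B = B - lr * grad * support             # gradient step on the allowed support
        B = np.sign(B) * np.maximum(np.abs(B) - lr * l1, 0.0)  # l1 proximal step
    return B * support

# illustrative usage on a 4-node chain 1 -> 2 -> 3 -> 4 with known unit noise variances
d = 4
B_true = np.zeros((d, d))
B_true[0, 1], B_true[1, 2], B_true[2, 3] = 0.8, -0.6, 0.5
noise_var = np.ones(d)
A_true = np.eye(d) - B_true
Theta = A_true @ np.diag(1.0 / noise_var) @ A_true.T   # oracle inverse covariance
B_hat = fit_B_from_precision(Theta, noise_var)
```

This sketch only shows the form of the decomposition being fit; it is not guaranteed to orient all edges correctly in general.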
From graphs to DAGs: a low-complexity model and a scalable algorithm
Dong, Shuyu, Sebag, Michèle
Learning directed acyclic graphs (DAGs) has long been known as a critical challenge at the core of probabilistic and causal modeling. The NoTears approach of Zheng et al. (2018), through a differentiable function involving the matrix exponential trace $\mathrm{tr}(\exp(\cdot))$, opens up a way to learn DAGs via continuous optimization, though with an $O(d^3)$ complexity in the number $d$ of nodes. This paper presents a low-complexity model, called LoRAM (Low-Rank Additive Model), which combines low-rank matrix factorization with a sparsification mechanism for the continuous optimization of DAGs. The main contribution lies in an efficient gradient approximation method that leverages the low-rank property of the model, and in its straightforward application to computing projections from graph matrices onto the DAG matrix space. The proposed method reduces the complexity from cubic to quadratic while handling the same DAG characteristic function as NoTears, and scales easily to thousands of nodes for the projection problem. The experiments show that LoRAM achieves efficiency gains of orders of magnitude over the state of the art at the expense of a very moderate accuracy loss in the considered range of sparse matrices, and with low sensitivity to the choice of rank for the model's low-rank component.
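The DAG characteristic function used by NoTears and retained here is $h(A) = \mathrm{tr}(\exp(A \circ A)) - d$; evaluating it exactly via the matrix exponential costs $O(d^3)$. The sketch below illustrates one way to obtain a quadratic-cost estimate when $A = XY^\top$ is kept in factored form, using Hutchinson trace estimation with a truncated Taylor series of the exponential; this is an illustrative stand-in, not necessarily the approximation scheme used by LoRAM, and the names (`acyclicity_estimate`, `n_probes`, `n_terms`) are assumptions.

```python
import numpy as np

def acyclicity_estimate(X, Y, n_probes=16, n_terms=20, seed=0):
    """Stochastic estimate of h(A) = tr(exp(A * A)) - d for A = X @ Y.T.

    Combines Hutchinson trace estimation with a truncated Taylor series of the
    matrix exponential; M = A * A is only ever touched through matrix-vector
    products, so no O(d^3) matrix exponential is formed.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[0]
    A = X @ Y.T                  # low-rank graph matrix, O(d^2 r) to form
    M = A * A                    # elementwise square, as in the NoTears function
    est = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=d)   # Rademacher probe vector
        term = v.copy()                        # holds M^j v / j!
        acc = v.copy()                         # running sum of the truncated series
        for j in range(1, n_terms):
            term = (M @ term) / j
            acc += term
        est += v @ acc                         # v^T exp(M) v, truncated
    return est / n_probes - d

# illustrative usage: 500 nodes, rank-10 factors with small entries
# X = 0.01 * np.random.randn(500, 10); Y = 0.01 * np.random.randn(500, 10)
# print(acyclicity_estimate(X, Y))   # small values indicate near-acyclicity
```

Every operation on $M = A \circ A$ is a matrix-vector product, so the cost is $O(d^2)$ per probe and series term; differentiating this estimator (for instance by automatic differentiation through the factored products) yields a gradient surrogate at the same order of cost, which is the spirit of the quadratic-complexity claim above.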