Using Multiple Sources of Knowledge to Generate Design Decompositions
David C. Brown & Jingwen Liu

AAAI Conferences

Design expert systems can be characterized by the type of compiled knowledge they use. Compiled knowledge is efficient for problem-solving over a bounded set of design problems, but often leads to failure in even slightly different design problem situations. Design decomposition knowledge is often compiled into design systems, yet changes to the design specification frequently require changes to the problem decomposition, and generating decomposition knowledge for a slightly different design problem is difficult.


SPALS: Fast Alternating Least Squares via Implicit Leverage Scores Sampling

Neural Information Processing Systems

Tensor CANDECOMP/PARAFAC (CP) decomposition is a powerful but computationally challenging tool in modern data analytics. In this paper, we show ways of sampling intermediate steps of alternating minimization algorithms for computing low-rank tensor CP decompositions, leading to the sparse alternating least squares (SPALS) method. Specifically, we sample the Khatri-Rao product, which arises as an intermediate object during the iterations of alternating least squares. This product captures the interactions between different tensor modes and forms the main computational bottleneck for solving many tensor-related tasks. By exploiting the spectral structure of the matrix Khatri-Rao product, we provide efficient access to its statistical leverage scores.
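As an illustration of the sampling idea described above, the following is a minimal numpy sketch (not the authors' code) of one leverage-score-sampled mode-1 ALS update for a 3-way CP decomposition; the helper names, the dense-tensor assumption, and the use of factor-wise leverage-score products as sampling weights are assumptions drawn from the abstract.

import numpy as np

def leverage_scores(M):
    # Leverage scores: row norms squared of an orthonormal basis for col(M).
    Q, _ = np.linalg.qr(M)
    return np.sum(Q**2, axis=1)

def spals_update_A(T, B, C, num_samples):
    """Approximate the mode-1 ALS update A ~= T_(1) (C kr B) (C'C * B'B)^+
    by sampling rows of the Khatri-Rao product (C kr B)."""
    I, J, K = T.shape
    # Sampling weights: products of the factors' leverage scores, which
    # upper-bound the leverage scores of the Khatri-Rao product rows.
    lb, lc = leverage_scores(B), leverage_scores(C)
    probs = np.outer(lc, lb).ravel()             # flattened index (k, j)
    probs /= probs.sum()
    idx = np.random.choice(J * K, size=num_samples, p=probs)
    k_idx, j_idx = np.unravel_index(idx, (K, J))
    # Sampled Khatri-Rao rows and matching unfolded-tensor columns,
    # rescaled by 1/sqrt(num_samples * p) for an unbiased least squares.
    scale = 1.0 / np.sqrt(num_samples * probs[idx])
    Z = (C[k_idx] * B[j_idx]) * scale[:, None]   # sampled (C kr B) rows
    Y = T[:, j_idx, k_idx] * scale[None, :]      # matching T_(1) columns
    A_new, *_ = np.linalg.lstsq(Z, Y.T, rcond=None)
    return A_new.T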


Witteveen

AAAI Conferences

Decomposition is a technique to obtain complete solutions by assembling independently obtained partial solutions. In particular, constraint decomposition plays an important role in distributed databases, distributed scheduling, and violation detection: it enables conflict-free local decision making while avoiding communication overload. One of the main issues in decomposition is the resulting loss of flexibility, where flexibility roughly refers to the freedom in choosing suitable values for the variables in order to satisfy the constraints. In this paper, we concentrate on linear constraint systems and efficient decomposition techniques for them. Using a generalization of a flexibility metric developed for Simple Temporal Networks, we show how an efficient decomposition technique for linear constraint systems can be derived that minimizes the loss of flexibility. As a by-product of this decomposition technique, we propose an intuitively attractive flexibility metric for linear constraint systems under which decomposition does not incur any loss of flexibility.
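As a rough illustration of the flexibility notion (assumptions mine, not necessarily the paper's exact metric): for a Simple Temporal Network with difference constraints x_j - x_i <= b, an interval-based flexibility can be computed by a linear program that assigns each variable an interval [l_i, u_i] such that every choice inside the intervals satisfies all constraints, and maximizes the total interval width.

import numpy as np
from scipy.optimize import linprog

def interval_flexibility(n_vars, constraints):
    """constraints: list of (i, j, b) meaning x_j - x_i <= b.
    LP variables: l_0..l_{n-1}, u_0..u_{n-1}."""
    n = n_vars
    A_ub, b_ub = [], []
    # Worst case inside the box: u_j - l_i <= b guarantees the constraint
    # holds for every assignment with l <= x <= u.
    for i, j, b in constraints:
        row = np.zeros(2 * n)
        row[n + j] = 1.0   # +u_j
        row[i] = -1.0      # -l_i
        A_ub.append(row)
        b_ub.append(b)
    # Require l_i <= u_i.
    for i in range(n):
        row = np.zeros(2 * n)
        row[i] = 1.0
        row[n + i] = -1.0
        A_ub.append(row)
        b_ub.append(0.0)
    # Maximize sum(u_i - l_i)  <=>  minimize sum(l_i - u_i).
    c = np.concatenate([np.ones(n), -np.ones(n)])
    bounds = [(None, None)] * (2 * n)
    bounds[0] = (0, 0)   # pin l_0 = 0; interval widths are translation-invariant
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return -res.fun if res.success else None

# Example: 1 <= x_1 - x_0 <= 5 gives total flexibility 4.
print(interval_flexibility(2, [(0, 1, 5), (1, 0, -1)]))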


Legendre Decomposition for Tensors

Neural Information Processing Systems

We present a novel nonnegative tensor decomposition method, called Legendre decomposition, which factorizes an input tensor into a multiplicative combination of parameters. Thanks to the well-developed theory of information geometry, the reconstructed tensor is unique and always minimizes the KL divergence from an input tensor. We empirically show that Legendre decomposition can more accurately reconstruct tensors than other nonnegative tensor decomposition methods.
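As a toy illustration only (not the paper's algorithm or its general parameter structure): if the multiplicative model is restricted to one parameter vector per mode, the KL-minimizing reconstruction of a normalized nonnegative tensor is simply the outer product of its mode marginals, as in this sketch.

import numpy as np

def independence_reconstruction(P):
    """P: nonnegative 3-way tensor; returns the KL-closest reconstruction
    within the fully factorized (independence) model."""
    P = P / P.sum()                              # treat as a distribution
    a = P.sum(axis=(1, 2))                       # mode-1 marginal
    b = P.sum(axis=(0, 2))                       # mode-2 marginal
    c = P.sum(axis=(0, 1))                       # mode-3 marginal
    return np.einsum('i,j,k->ijk', a, b, c)

def kl_divergence(P, Q, eps=1e-12):
    P, Q = P / P.sum(), Q / Q.sum()
    return float(np.sum(P * (np.log(P + eps) - np.log(Q + eps))))

T = np.random.rand(4, 5, 6)
Q = independence_reconstruction(T)
print(kl_divergence(T, Q))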


Provable Sparse Tensor Decomposition

arXiv.org Machine Learning

We propose a novel sparse tensor decomposition method, the Tensor Truncated Power (TTP) method, which incorporates variable selection into the estimation of decomposition components. The sparsity is achieved via an efficient truncation step embedded in the tensor power iteration. Our method applies to a broad family of high-dimensional latent variable models, including high-dimensional Gaussian mixtures and mixtures of sparse regressions. A thorough theoretical investigation is further conducted. In particular, we show that the final decomposition estimator is guaranteed to achieve a local statistical rate, and we further strengthen it to the global statistical rate by introducing a proper initialization procedure. In high-dimensional regimes, the obtained statistical rate significantly improves upon those shown for existing non-sparse decomposition methods. The empirical advantages of TTP are confirmed in extensive simulations and in two real applications: click-through rate prediction and high-dimensional gene clustering.
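A minimal sketch of the core idea, simplified from the abstract (the sparsity levels, initialization, and iteration count here are assumptions, not the paper's procedure): a rank-1 tensor power iteration in which each update is hard-truncated to its k largest-magnitude entries and renormalized.

import numpy as np

def truncate(x, k):
    # Keep the k largest-magnitude entries, zero the rest, renormalize.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    norm = np.linalg.norm(out)
    return out / norm if norm > 0 else out

def truncated_power_iteration(T, k1, k2, k3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    u = truncate(rng.standard_normal(I), k1)
    v = truncate(rng.standard_normal(J), k2)
    w = truncate(rng.standard_normal(K), k3)
    for _ in range(iters):
        u = truncate(np.einsum('ijk,j,k->i', T, v, w), k1)
        v = truncate(np.einsum('ijk,i,k->j', T, u, w), k2)
        w = truncate(np.einsum('ijk,i,j->k', T, u, v), k3)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)   # rank-1 weight
    return lam, u, v, w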