

Online and Differentially-Private Tensor Decomposition

Neural Information Processing Systems

Tensor decomposition is positioned to be a pervasive tool in the era of big data. In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition. We propose simple variants of the tensor power method which enjoy these strong properties. We give the first streaming method with a linear memory requirement. Moreover, we present a noise-calibrated tensor power method with efficient privacy guarantees. At the heart of all these guarantees lies a careful perturbation analysis derived in this paper which improves significantly upon existing results.
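As a rough illustration of the noise-calibrated idea, the sketch below adds Gaussian noise to each contraction step of a symmetric third-order tensor power iteration before normalizing. The noise scale `sigma`, the iteration count, and the function name are illustrative assumptions, not the paper's exact privacy calibration.

```python
import numpy as np

def private_tensor_power_iteration(T, n_iters=30, sigma=0.05, rng=None):
    """Noisily perturbed power iteration for a symmetric 3rd-order tensor T.

    Minimal sketch: Gaussian noise of scale `sigma` is added to each
    contraction T(I, u, u) before normalization. Calibrating `sigma` to a
    target (eps, delta) privacy budget is the paper's concern; the value
    used here is purely illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = T.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        # v[i] = sum_{j,k} T[i, j, k] * u[j] * u[k], i.e. the contraction T(I, u, u)
        v = np.einsum('ijk,j,k->i', T, u, u)
        v += sigma * rng.standard_normal(d)   # assumed noise-calibration step
        u = v / np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # estimated eigenvalue for the recovered u
    return lam, u
```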


Online and Differentially-Private Tensor Decomposition

arXiv.org Machine Learning

In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition. We propose simple variants of the tensor power method which enjoy these strong properties. We present the first guarantees for an online tensor power method with a linear memory requirement. Moreover, we present a noise-calibrated tensor power method with efficient privacy guarantees. At the heart of all these guarantees lies a careful perturbation analysis derived in this paper which improves significantly upon existing results.
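To illustrate why an online power method can run in linear memory, the sketch below accumulates the contraction T(I, u, u) of an empirical third-order moment tensor directly from a stream of samples, without ever materializing the d x d x d tensor. The simple averaging and the one pass over the data per iteration are simplifying assumptions for illustration, not the paper's exact update rule.

```python
import numpy as np

def streaming_contraction(stream, u):
    """Accumulate T(I, u, u) for the empirical moment tensor T = mean(x ⊗ x ⊗ x)
    directly from a stream of samples, using O(d) memory.

    For each sample x, the contribution to T(I, u, u) is (x·u)^2 * x, so the
    full d^3 tensor is never stored. A simplified illustration of the
    linear-memory idea, not the paper's algorithm.
    """
    acc, n = None, 0
    for x in stream:
        w = np.dot(x, u) ** 2
        acc = w * x if acc is None else acc + w * x
        n += 1
    return acc / n

def online_power_method(make_stream, d, n_iters=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        v = streaming_contraction(make_stream(), u)  # one pass over the data per iteration
        u = v / np.linalg.norm(v)
    return u
```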


Fast and Guaranteed Tensor Decomposition via Sketching

Neural Information Processing Systems

Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast and randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches, but introduce many novel ideas which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares.
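The core trick can be illustrated on a second-order outer product: the count sketch of u ⊗ v under a combined hash and sign equals the circular convolution of the per-mode count sketches, which FFTs compute in O(b log b) time. The helper names below are ours; the paper's algorithms extend this identity to third-order tensors and to the contractions arising in power iterations and alternating least squares.

```python
import numpy as np

def count_sketch(x, h, s, b):
    """Count sketch of vector x: bucket i accumulates s[j] * x[j] over j with h[j] == i."""
    sk = np.zeros(b)
    np.add.at(sk, h, s * x)
    return sk

def tensor_sketch_rank1(u, v, hashes, signs, b):
    """Sketch of the outer product u ⊗ v via FFT of the per-mode count sketches.

    Uses the standard tensor-sketch identity: the count sketch of u ⊗ v under
    the combined hash (h1(i) + h2(j)) mod b and sign s1(i) * s2(j) is the
    circular convolution of the individual sketches, computed here with FFTs.
    """
    (h1, h2), (s1, s2) = hashes, signs
    f = np.fft.rfft(count_sketch(u, h1, s1, b)) * np.fft.rfft(count_sketch(v, h2, s2, b))
    return np.fft.irfft(f, n=b)

# Quick check against the naive sketch of the explicitly formed outer product.
rng = np.random.default_rng(0)
d, b = 8, 32
u, v = rng.standard_normal(d), rng.standard_normal(d)
h1, h2 = rng.integers(0, b, d), rng.integers(0, b, d)
s1, s2 = rng.choice([-1.0, 1.0], d), rng.choice([-1.0, 1.0], d)

fast = tensor_sketch_rank1(u, v, (h1, h2), (s1, s2), b)

naive = np.zeros(b)
for i in range(d):
    for j in range(d):
        naive[(h1[i] + h2[j]) % b] += s1[i] * s2[j] * u[i] * v[j]

assert np.allclose(fast, naive)
```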


Singleshot: a scalable Tucker tensor decomposition

Neural Information Processing Systems

This paper introduces a new approach for the scalable Tucker decomposition problem. Given a tensor X, the proposed method infers the latent factors by processing one subtensor drawn from X at a time. The key principle of our approach is the recursive computation of the gradient combined with a cyclic update of the factors, each involving only a single gradient-descent step. We further improve the computational efficiency of this algorithm by proposing an inexact-gradient version. Both algorithms are backed by theoretical guarantees of convergence and convergence rate under mild conditions.
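A minimal sketch of the one-subtensor-at-a-time idea: given a slab X[idx, :, :] of a third-order tensor with Tucker factors (G, A, B, C), only the rows of A appearing in the slab and the core G are touched, each with a single gradient step. The step size, the update order, and the restriction to two of the factors are illustrative simplifications; the actual algorithm cycles over all factors and also has an inexact-gradient variant.

```python
import numpy as np

def slab_gradient_step(X_slab, idx, A, G, B, C, lr=1e-3):
    """One cyclic, single-step gradient update driven by one mode-1 subtensor.

    Simplified sketch of processing one subtensor at a time: only A[idx] and
    the core G are updated from the slab X_slab = X[idx, :, :].  Step size and
    update order are illustrative choices, not the paper's exact schedule.
    """
    I, J, K = X_slab.shape[0], B.shape[0], C.shape[0]
    P, Q, R = G.shape
    W = np.kron(B, C)                     # (J*K, Q*R), Kronecker factor of the unfolding
    M = G.reshape(P, Q * R) @ W.T         # (P, J*K), core times Kronecker factor
    Xs = X_slab.reshape(I, J * K)         # mode-1 unfolding of the slab

    # Single gradient step on the rows of A seen in this slab.
    resid = A[idx] @ M - Xs
    A[idx] -= lr * resid @ M.T

    # Single gradient step on the core, using the refreshed residual.
    resid = A[idx] @ M - Xs
    G -= lr * (A[idx].T @ resid @ W).reshape(P, Q, R)
    return A, G
```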


Sparse and Low-Rank Tensor Decomposition

Neural Information Processing Systems

Motivated by the problem of robust factorization of a low-rank tensor, we study the question of sparse and low-rank tensor decomposition. We present an efficient computational algorithm that modifies Leurgans' algorithm for tensor factorization. Our method relies on a reduction of the problem to sparse and low-rank matrix decomposition via the notion of tensor contraction. We use well-understood convex techniques for solving the reduced matrix sub-problem, which then allows us to perform the full decomposition of the tensor. We delineate situations where the problem is recoverable and provide theoretical guarantees for our algorithm.
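The reduction can be sketched concretely: contracting the third mode of the tensor against a vector produces a matrix that inherits a sparse-plus-low-rank structure, which a matrix decomposition routine can then split. The routine below alternates a truncated SVD with soft-thresholding as a heuristic stand-in for the convex program the paper relies on; `rank`, `lam`, and the iteration count are assumed illustrative parameters.

```python
import numpy as np

def contract_mode3(X, w):
    """Contract a 3rd-order tensor with a vector along its third mode:
    M[i, j] = sum_k X[i, j, k] * w[k]."""
    return np.einsum('ijk,k->ij', X, w)

def sparse_plus_lowrank(M, rank, lam, n_iters=50):
    """Split M into a low-rank part L and a sparse part S by alternating a
    rank-`rank` truncated SVD with entrywise soft-thresholding at level `lam`.

    A simplified heuristic stand-in for the convex sparse + low-rank matrix
    program applied to the contracted matrix; parameters are illustrative.
    """
    S = np.zeros_like(M)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # best rank-`rank` fit to M - S
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)  # soft-threshold the residual
    return L, S
```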