Reviews: Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?

Neural Information Processing Systems 

The paper presents negative and positive results for the problem of computing low-rank approximations to the kernel matrix of a given dataset. More precisely, for a fixed kernel function, input data A, and a desired rank k, the authors consider two related problems: (1) computing a rank-k matrix whose error is within a small relative factor of that of the optimal rank-k approximation of the kernel matrix, and (2) computing an orthonormal basis of a k-dimensional subspace of the feature space (defined by the kernel) such that projecting the data onto this subspace incurs an error within a small relative factor of the optimal such projection.

The central question the paper focuses on is whether these problems can be solved in input sparsity time, i.e., in time that depends on the number nnz(A) of non-zero entries of A rather than on its full dimensions (say n times d).

For problem (1), the paper gives a negative answer for the linear kernel, all polynomial kernels, and the Gaussian kernel by showing that the problem is as hard as exactly computing the product of the input matrix with an arbitrary d times k matrix C. This implies that an input sparsity time algorithm cannot be achieved without a significant improvement over the state of the art in matrix multiplication.

For problem (2), the paper gives a positive result for shift-invariant kernels via an algorithm based on random Fourier features.
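
For concreteness, the two objectives can be written as follows (in this review's own notation, not necessarily the paper's): letting \phi denote the feature map of the kernel, with the rows of \phi(A) being the mapped data points, and K = \phi(A)\phi(A)^T the n x n kernel matrix, the goals are

(1)  find a rank-k matrix B with  \|K - B\|_F^2 \le (1+\epsilon)\,\|K - K_k\|_F^2,

(2)  find an orthonormal Z (k columns in feature space) with  \|\phi(A) - \phi(A) Z Z^T\|_F^2 \le (1+\epsilon)\,\|\phi(A) - [\phi(A)]_k\|_F^2,

where K_k and [\phi(A)]_k denote the best rank-k approximations of K and \phi(A), respectively.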
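
To make the random Fourier features primitive underlying the positive result concrete, here is a minimal sketch for the Gaussian kernel k(x, y) = exp(-\|x - y\|^2 / (2 sigma^2)), following the standard Rahimi-Recht construction. This is a generic illustration of the primitive only, not the paper's actual algorithm (which requires a more refined sampling and analysis to obtain the claimed guarantee and running time); the function name, feature count D, and bandwidth sigma below are illustrative choices.

    import numpy as np

    def rff_features(A, D, sigma=1.0, rng=None):
        # Random Fourier features for the Gaussian kernel
        # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
        rng = np.random.default_rng(rng)
        n, d = A.shape
        W = rng.normal(scale=1.0 / sigma, size=(d, D))  # random frequencies
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)       # random phases
        # Z has shape n x D and satisfies Z @ Z.T ~ K in expectation.
        return np.sqrt(2.0 / D) * np.cos(A @ W + b)

    # A rank-k subspace from the explicit features: the top-k right
    # singular vectors of Z give an orthonormal basis, and Z @ V_k @ V_k.T
    # is the corresponding projection of the (approximate) feature vectors.
    A = np.random.randn(500, 20)
    Z = rff_features(A, D=1000, sigma=2.0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V_k = Vt[:10].T  # basis for a k = 10 dimensional subspace

The appeal of this route for problem (2) is that the projection is computed from an explicit, finite-dimensional feature matrix Z, so one never has to form the n x n kernel matrix K itself.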