Reviews: MixLasso: Generalized Mixed Regression via Convex Atomic-Norm Regularization

Neural Information Processing Systems 

This paper considers a generalized version of the mixed regression problem: we observe a collection of input-output samples whose outputs correspond to an additive combination of several mixture components/functions, and the goal is to find a collection of K functions that minimize the risk. The corresponding ERM problem is NP-hard due to its combinatorial constraints. The authors propose to relax these constraints by replacing them with an atomic-norm regularizer, which they introduce as an "approximation" of the number of components, and to solve the resulting convex problem with a greedy algorithm. Their analysis shows that the solutions obtained by this approach achieve epsilon-optimal risk using a number of samples linear in both K and the dimension D, and O(K/epsilon) components, thereby improving over the state of the art in terms of sample complexity.
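To make the combinatorial difficulty concrete, the following is a minimal sketch of the nonconvex ERM heuristic for mixed linear regression (alternating sample-to-component assignment with per-component least squares) that the paper's convex relaxation is designed to avoid. This is an illustrative baseline, not the authors' MixLasso algorithm; all function and variable names are hypothetical.

```python
import numpy as np

def fit_mixed_linear(X, y, K, iters=50, seed=0):
    """Alternating-minimization heuristic for mixed linear regression.

    Each sample is assumed to come from one of K unknown linear functions.
    The discrete assignment variable z is what makes the exact ERM problem
    combinatorial (and NP-hard in general); this loop only finds a local
    optimum from a random initialization.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((K, d))  # random init of the K components
    for _ in range(iters):
        # assignment step: each sample goes to its best-fitting component
        z = ((X @ W.T - y[:, None]) ** 2).argmin(axis=1)
        # refit step: least squares per component on its assigned samples
        for k in range(K):
            mask = z == k
            if mask.any():
                W[k] = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    # final assignments consistent with the returned components
    z = ((X @ W.T - y[:, None]) ** 2).argmin(axis=1)
    return W, z
```

The reviewed paper instead replaces the discrete assignments with an atomic-norm regularizer, yielding a convex problem that a greedy (Frank-Wolfe-style) method can solve with the stated sample-complexity guarantees.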