Beyond the Birkhoff Polytope: Convex Relaxations for Vector Permutation Problems

Neural Information Processing Systems

The Birkhoff polytope (the convex hull of the set of permutation matrices), which is represented using $\Theta(n^2)$ variables and constraints, is frequently invoked in formulating relaxations of optimization problems over permutations. Using a recent construction of Goemans (2010), we show that when optimizing over the convex hull of the permutation vectors (the permutahedron), we can reduce the number of variables and constraints to $\Theta(n \log n)$ in theory and $\Theta(n \log^2 n)$ in practice. We modify the recent convex formulation of the 2-SUM problem introduced by Fogel et al. (2013) to use this polytope, and demonstrate how we can attain results of similar quality in significantly less computational time for large $n$. To our knowledge, this is the first usage of Goemans' compact formulation of the permutahedron in a convex optimization problem. We also introduce a simpler regularization scheme for this convex formulation of the 2-SUM problem that yields good empirical results.
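
For context, here is a worked sketch of the kind of objective being relaxed; the notation is illustrative and assumes the standard 2-SUM setup rather than the paper's exact symbols. Given a symmetric similarity matrix $A$, the 2-SUM problem seeks a permutation $\pi$ minimizing $\sum_{i,j} A_{ij}(\pi_i - \pi_j)^2$, which equals $2\,\pi^\top L_A \pi$ for the Laplacian $L_A = \mathrm{diag}(A\mathbf{1}) - A$. Relaxing the permutation vector to the permutahedron $PH_n$ (the convex hull of the permutations of $(1,\dots,n)$) gives a convex problem in only $n$ original variables,

$$\min_{x \in PH_n} \; x^\top L_A x,$$

and Goemans' extended formulation represents $PH_n$ as the projection of a polytope described by $\Theta(n \log n)$ variables and constraints, in place of the $\Theta(n^2)$ needed to describe the Birkhoff polytope.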


Bootstrap Robust Prescriptive Analytics

arXiv.org Machine Learning

We address the problem of prescribing an optimal decision in a framework where its cost depends on uncertain problem parameters $Y$ that need to be learned from data. Earlier work by Bertsimas and Kallus (2014) transforms classical machine learning methods that merely predict $Y$ from supervised training data $[(x_1, y_1), \dots, (x_n, y_n)]$ into prescriptive methods that take optimal decisions specific to a particular covariate context $X=\bar x$. These prescriptive methods factor in observed contextual information on a potentially large number of covariates to take context-specific actions $z(\bar x)$ which are superior to any static decision $z$. Any naive use of limited training data may, however, lead to gullible decisions over-calibrated to one particular data set. In this paper, we borrow ideas from distributionally robust optimization and the statistical bootstrap of Efron (1982) to propose two novel prescriptive methods, based on Nadaraya-Watson (NW) and nearest-neighbors (NN) learning, which safeguard against overfitting and lead to improved out-of-sample performance. Both robust prescriptive methods reduce to tractable convex optimization problems and enjoy limited disappointment on bootstrap data. We illustrate the data-driven decision-making framework and our novel robustness notion on a small newsvendor problem as well as a small portfolio allocation problem.
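
As a rough illustration of the prescriptive framework being robustified (the symbols below are generic; the weights and the ambiguity set are stated only schematically), the Nadaraya-Watson prescription of Bertsimas and Kallus picks, for a context $\bar x$,

$$z^{\mathrm{NW}}(\bar x) \in \arg\min_{z \in \mathcal{Z}} \; \sum_{i=1}^n w_i(\bar x)\, c(z; y_i), \qquad w_i(\bar x) = \frac{K\!\left((x_i - \bar x)/h\right)}{\sum_{j=1}^n K\!\left((x_j - \bar x)/h\right)},$$

where $c(z; y)$ is the decision cost, $K$ is a kernel and $h$ a bandwidth. A distributionally robust variant of the kind proposed here would replace this weighted empirical average by its worst case over a set of distributions kept close, in an appropriate divergence, to the (bootstrap) empirical distribution, so that the prescribed decision hedges against resampled data sets.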


Convex Formulation for Kernel PCA and its Use in Semi-Supervised Learning

arXiv.org Machine Learning

In this paper, Kernel PCA is reinterpreted as the solution to a convex optimization problem. More precisely, there is a constrained convex problem for each principal component, whose constraints guarantee that the principal component is indeed a solution and not a mere saddle point. Although these insights do not imply any algorithmic improvement, they can be used to better understand the method and to formulate and properly address possible extensions. As an example, a new convex optimization problem for semi-supervised classification is proposed, which seems particularly well-suited whenever the number of known labels is small. Our formulation resembles a Least Squares SVM problem with a regularization parameter multiplied by a negative sign, combined with a variational principle for Kernel PCA. Our primal optimization principle for semi-supervised learning is solved in terms of the Lagrange multipliers. Numerical experiments on several classification tasks illustrate the performance of the proposed model in problems with only a few labeled data points.
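
To fix ideas, here is a minimal sketch of the variational objects involved, written in generic LS-SVM notation that may differ from the paper's exact formulation. With a centered kernel matrix $K$, kernel PCA extracts components from the eigenvalue problem $K\alpha = \lambda\alpha$, i.e. from stationary points of $\alpha^\top K \alpha$ subject to $\|\alpha\| = 1$, most of which are saddle points rather than extrema. An LS-SVM-style primal with the sign of the regularization term flipped reads schematically

$$\min_{w,\, e} \; \tfrac{1}{2}\, w^\top w \;-\; \tfrac{\gamma}{2} \sum_{i=1}^n e_i^2 \quad \text{s.t.} \quad e_i = w^\top \varphi(x_i), \; i = 1, \dots, n,$$

whose stationarity conditions recover the kernel PCA eigenvalue problem in the Lagrange multipliers (with eigenvalue $1/\gamma$); the semi-supervised formulation then adds label-dependent terms to an objective of this type.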


A Summary Of The Kernel Matrix, And How To Learn It Effectively Using Semidefinite Programming

arXiv.org Machine Learning

Kernel-based learning algorithms are widely used in machine learning for problems that make use of the similarity between object pairs. Such algorithms first embed all data points into an alternative space, where the inner product between object pairs determines their distance in the embedding space. Applying kernel methods to partially labeled datasets is a classical challenge in this regard, since the distances involving unlabeled points must somehow be learnt from the labeled data. In this independent study, I summarize the work of G. Lanckriet et al. on "Learning the Kernel Matrix with Semidefinite Programming", as used in support vector machine (SVM) algorithms for the transduction problem. Throughout the report, I provide alternative explanations, derivations, and analyses related to this work, designed to ease the understanding of the original article.
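
For reference, here is a compact sketch of the formulation being summarized; the notation follows the usual statement of Lanckriet et al., and the constants and constraint sets are only indicative. With labels $y$ on the training part of the data and $G(K) = \mathrm{diag}(y)\, K_{tr}\, \mathrm{diag}(y)$, the soft-margin SVM dual value

$$\omega(K) \;=\; \max_{\alpha} \; 2\,\alpha^\top \mathbf{1} - \alpha^\top G(K)\,\alpha \quad \text{s.t.} \quad 0 \le \alpha \le C, \;\; \alpha^\top y = 0$$

is convex in the kernel matrix (a pointwise maximum of functions linear in $K$), so choosing the kernel by $\min_{K \in \mathcal{K},\, \mathrm{tr}(K) = c,\, K \succeq 0} \; \omega(K)$ over a family $\mathcal{K}$, for instance linear combinations $K = \sum_i \mu_i K_i$ of fixed kernels, can be cast as a semidefinite program; in the transduction setting, the learned $K$ also fixes the entries coupling labeled and unlabeled points.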