Collaborating Authors

 Vikas Garg



Learning Tree Structured Potential Games

Neural Information Processing Systems

Many real phenomena, including behaviors, involve strategic interactions that can be learned from data. We focus on learning tree structured potential games where equilibria are represented by local maxima of an underlying potential function. We cast the learning problem within a max margin setting and show that the problem is NP-hard even when the strategic interactions form a tree. We develop a variant of dual decomposition to estimate the underlying game and demonstrate with synthetic and real decision/voting data that the game theoretic perspective (carving out local maxima) enables meaningful recovery.
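The equilibrium notion above has a direct computational reading: a joint strategy is an equilibrium exactly when no single player can increase the underlying potential by deviating unilaterally. Below is a minimal sketch of that check for a toy tree-structured potential game; the tree, the potential tables, and all function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: equilibria of a tree-structured potential game as local
# maxima of a potential function. The game structure, potentials, and helper
# names below are illustrative assumptions, not the paper's actual model.
import itertools

# Tree over 4 players (edges 0-1, 1-2, 1-3), binary strategies {0, 1}.
edges = {(0, 1): [[1.0, 0.2], [0.2, 1.5]],
         (1, 2): [[0.5, 1.2], [1.2, 0.3]],
         (1, 3): [[2.0, 0.1], [0.1, 1.0]]}
node_pot = {0: [0.0, 0.3], 1: [0.1, 0.0], 2: [0.2, 0.2], 3: [0.0, 0.4]}
n_players = 4

def potential(x):
    """Potential = sum of node and edge potentials for joint strategy x."""
    val = sum(node_pot[i][x[i]] for i in range(n_players))
    val += sum(table[x[i]][x[j]] for (i, j), table in edges.items())
    return val

def is_local_max(x):
    """Pure equilibrium of a potential game: no player can raise the
    potential by unilaterally switching its own strategy."""
    base = potential(x)
    for i in range(n_players):
        for s in (0, 1):
            if s != x[i]:
                y = list(x)
                y[i] = s
                if potential(tuple(y)) > base:
                    return False
    return True

# Enumerate all joint strategies and report the equilibria (local maxima).
equilibria = [x for x in itertools.product((0, 1), repeat=n_players)
              if is_local_max(x)]
print(equilibria)
```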


Local Aggregative Games

Neural Information Processing Systems

Structured prediction methods have been remarkably successful in learning mappings between input observations and output configurations [1; 2; 3]. The central guiding formulation involves learning a scoring function that recovers the configuration as the highest scoring assignment. In contrast, in a game theoretic setting, myopic strategic interactions among players lead to a Nash equilibrium or locally optimal configuration rather than the highest scoring global configuration. Learning games therefore involves, at best, enforcement of local consistency constraints, as recently advocated [4].
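A small sketch of what those local consistency constraints mean operationally, under the assumption of a toy aggregative game in which each player's payoff depends on its own binary action and the sum of its neighbors' actions; the graph, the payoff form, and the parameters are illustrative, not the paper's formulation.

```python
# Minimal sketch of local consistency constraints for learning a game:
# every player's observed action should be a (myopic) best response to an
# aggregate of its neighbors' actions. Graph, payoffs, and data below are
# illustrative assumptions, not the paper's actual model.

# Undirected neighborhood structure over 4 players.
neighbors = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}

def payoff(player, action, agg, theta):
    """Toy aggregative payoff: player weighs its own action against the
    neighborhood aggregate via a per-player parameter theta[player]."""
    return action * (theta[player] - agg)

def local_consistency_violations(actions, theta):
    """Count players whose observed action is not a best response to the
    sum of their neighbors' actions (unilateral-deviation check)."""
    violations = 0
    for i, nbrs in neighbors.items():
        agg = sum(actions[j] for j in nbrs)
        observed = payoff(i, actions[i], agg, theta)
        best = max(payoff(i, a, agg, theta) for a in (0, 1))
        if observed < best:
            violations += 1
    return violations

# An observed joint action and a candidate parameter vector: zero violations
# means the configuration satisfies all local consistency constraints.
actions = {0: 1, 1: 0, 2: 1, 3: 1}
theta = {0: 0.5, 1: 2.5, 2: 0.8, 3: 0.7}
print(local_consistency_violations(actions, theta))
```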


Supervising Unsupervised Learning

Neural Information Processing Systems

We introduce a framework to transfer knowledge acquired from a repository of (heterogeneous) supervised datasets to new unsupervised datasets. Our perspective avoids the subjectivity inherent in unsupervised learning by reducing it to supervised learning, and provides a principled way to evaluate unsupervised algorithms. We demonstrate the versatility of our framework via rigorous agnostic bounds on a variety of unsupervised problems. In the context of clustering, our approach helps choose the number of clusters and the clustering algorithm, remove outliers, and provably circumvent Kleinberg's impossibility result. Experiments across hundreds of problems demonstrate improvements in performance on unsupervised data with simple algorithms, despite the fact that our problems come from heterogeneous domains. Additionally, our framework lets us leverage deep networks to learn common features across many small datasets, and perform zero-shot learning.
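One concrete instance of reducing an unsupervised choice to supervised learning is selecting the number of clusters: candidate selection rules can be scored against a repository of labeled datasets and the best-performing rule reused on new unlabeled data. The sketch below assumes scikit-learn, two small labeled datasets, k-means, and the adjusted Rand index as the agreement measure; none of these specifics come from the paper.

```python
# Minimal sketch of "supervising" an unsupervised choice: score candidate
# rules for picking the number of clusters k on a repository of labeled
# datasets, then reuse the best rule on new unlabeled data. Datasets, rules,
# and the agreement metric are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris, load_wine
from sklearn.metrics import adjusted_rand_score, silhouette_score

repository = [load_iris(return_X_y=True), load_wine(return_X_y=True)]

def pick_k_silhouette(X, ks=range(2, 7)):
    """Candidate rule: pick the k that maximizes the silhouette score."""
    scores = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10,
                                            random_state=0).fit_predict(X))
              for k in ks}
    return max(scores, key=scores.get)

def pick_k_fixed(X, k=3):
    """Baseline rule: always use a fixed k."""
    return k

def repository_score(rule):
    """Mean agreement (ARI) with the true labels across the repository."""
    total = 0.0
    for X, y in repository:
        labels = KMeans(n_clusters=rule(X), n_init=10,
                        random_state=0).fit_predict(X)
        total += adjusted_rand_score(y, labels)
    return total / len(repository)

# Pick the rule that transfers best from the supervised repository; on a new
# unlabeled dataset X_new one would then cluster with k = best_rule(X_new).
best_rule = max([pick_k_silhouette, pick_k_fixed], key=repository_score)
print(best_rule.__name__)
```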


Learning SMaLL Predictors

Neural Information Processing Systems

We introduce a new framework for learning in severely resource-constrained settings. Our technique delicately amalgamates the representational richness of multiple linear predictors with the sparsity of Boolean relaxations, and thereby yields classifiers that are compact, interpretable, and accurate. We provide a rigorous formalism of the learning problem, and establish fast convergence of the ensuing algorithm via relaxation to a minimax saddle point objective.
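The sketch below is not the SMaLL algorithm or its minimax saddle point solver; it only illustrates the two ingredients the abstract combines: several linear predictors pooled through a max, and sparsity imposed by relaxing a Boolean feature-selection vector to the box [0,1]^d with a budget constraint. The hinge loss, the projected subgradient updates, and all dimensions are assumptions made for the example.

```python
# Illustrative sketch (not the paper's algorithm): several linear prototypes
# share a sparsity mask obtained by relaxing Boolean feature selection
# s in {0,1}^d to s in [0,1]^d with sum(s) <= k, trained by projected
# subgradient descent on a toy hinge loss.
import numpy as np

rng = np.random.default_rng(0)
n, d, p, k = 200, 20, 3, 5          # samples, features, prototypes, budget
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=n))

W = 0.01 * rng.normal(size=(p, d))  # p linear prototypes
s = np.full(d, k / d)               # relaxed Boolean mask, starts uniform

def project_mask(s, k):
    """Keep the mask in [0,1]^d with sum <= k (simple clip + rescale)."""
    s = np.clip(s, 0.0, 1.0)
    if s.sum() > k:
        s *= k / s.sum()
    return s

lr = 0.05
for _ in range(300):
    scores = (X * s) @ W.T          # prototype scores, shape (n, p)
    top = scores.max(axis=1)        # model output: best prototype score
    j = scores.argmax(axis=1)
    active = y * top < 1.0          # samples violating the hinge margin
    grad_W = np.zeros_like(W)
    grad_s = np.zeros_like(s)
    for i in np.where(active)[0]:
        # Subgradient of the hinge loss w.r.t. the firing prototype and mask.
        grad_W[j[i]] -= y[i] * X[i] * s
        grad_s -= y[i] * X[i] * W[j[i]]
    W -= lr * grad_W / n
    s = project_mask(s - lr * grad_s / n, k)

accuracy = np.mean(np.sign(((X * s) @ W.T).max(axis=1)) == y)
print(f"train accuracy: {accuracy:.2f}, active features: {(s > 0.5).sum()}")
```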

