Collaborating Authors

 Özcan, Gözde


Learning Set Functions with Implicit Differentiation

arXiv.org Artificial Intelligence

Ou et al. (2022) introduce the problem of learning set functions from data generated by a so-called optimal subset oracle. Their approach approximates the underlying utility function with an energy-based model, whose parameters are estimated via mean-field variational inference. Ou et al. (2022) show that this reduces to fixed-point iterations; however, as the number of iterations increases, automatic differentiation quickly becomes computationally prohibitive due to the size of the Jacobians that are stacked during backpropagation. We address this challenge with implicit differentiation and examine the convergence conditions for the fixed-point iterations. We empirically demonstrate the efficiency of our method on synthetic and real-world subset selection applications, including product recommendation, set anomaly detection, and compound selection tasks.
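
To make the implicit-differentiation step concrete, here is a minimal NumPy sketch of differentiating through a fixed point z* = f(z*, theta) via the implicit function theorem instead of backpropagating through every iteration. The map f, the matrix W, and the toy loss are illustrative stand-ins, not the paper's energy-based model or mean-field update.

```python
import numpy as np

# Toy fixed-point map z = f(z, theta) = tanh(W @ z + theta).
# This is NOT the paper's mean-field update; it is a generic stand-in
# used to illustrate implicit differentiation through a fixed point.
rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))   # small spectral norm -> contraction
theta = rng.standard_normal(n)

def f(z, theta):
    return np.tanh(W @ z + theta)

# Forward pass: plain fixed-point iteration (no gradient tape needed).
z = np.zeros(n)
for _ in range(100):
    z = f(z, theta)

# Backward pass via the implicit function theorem:
#   z* = f(z*, theta)  =>  dz*/dtheta = (I - df/dz)^{-1} df/dtheta.
s = 1.0 - np.tanh(W @ z + theta) ** 2   # derivative of tanh at the fixed point
J_z = s[:, None] * W                    # df/dz at (z*, theta)
J_theta = np.diag(s)                    # df/dtheta at (z*, theta)

loss_grad_z = 2.0 * z                   # gradient of a toy loss L(z*) = ||z*||^2
# Vector-Jacobian product: dL/dtheta = J_theta^T (I - J_z)^{-T} dL/dz*
v = np.linalg.solve((np.eye(n) - J_z).T, loss_grad_z)
loss_grad_theta = J_theta.T @ v

print(loss_grad_theta)
```

The memory cost here is independent of the number of forward iterations, which is the point of replacing stacked Jacobians with a single linear solve at the fixed point.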


Online Submodular Maximization via Online Convex Optimization

arXiv.org Artificial Intelligence

We study monotone submodular maximization under general matroid constraints in the online setting. We prove that online optimization of a large class of submodular functions, namely, weighted threshold potential functions, reduces to online convex optimization (OCO). This is precisely because functions in this class admit a concave relaxation; as a result, OCO policies, coupled with an appropriate rounding scheme, can be used to achieve sublinear regret in the combinatorial setting. We show that our reduction extends to many different versions of the online learning problem, including the dynamic regret, bandit, and optimistic-learning settings.
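
The reduction can be illustrated with a small sketch, assuming a single weighted threshold potential per round, f_t(S) = min(c_t, sum_{i in S} w_t[i]), and a uniform (cardinality) matroid: run online gradient ascent on the concave relaxation F_t(x) = min(c_t, w_t @ x) and round the fractional iterate to a feasible set. The step size, the clipping-based projection, and the top-k rounding below are simplifications for illustration; the paper's policies use a proper matroid-polytope projection and a dependent rounding scheme to obtain the stated regret guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, T, eta = 10, 3, 50, 0.1          # items, cardinality budget, rounds, step size
x = np.full(n, k / n)                   # fractional decision in [0, 1]^n

total_reward = 0.0
for t in range(T):
    # Round the fractional point to a feasible set (naive top-k here).
    S = np.argsort(-x)[:k]

    # Adversary / environment reveals this round's weighted threshold potential.
    w_t = rng.random(n)
    c_t = 1.0
    total_reward += min(c_t, w_t[S].sum())

    # Supergradient of the concave relaxation F_t(x) = min(c_t, w_t @ x).
    g = w_t if w_t @ x < c_t else np.zeros(n)

    # OCO update: gradient ascent step, then clip back into the hypercube.
    x = np.clip(x + eta * g, 0.0, 1.0)

print("average per-round reward:", total_reward / T)
```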


Stochastic Submodular Maximization via Polynomial Estimators

arXiv.org Artificial Intelligence

In this paper, we study stochastic submodular maximization problems under general matroid constraints; such objectives arise naturally in online learning, team formation, facility location, influence maximization, active learning, and sensing. In other words, we focus on maximizing submodular functions that are defined as expectations over a class of submodular functions with an unknown distribution. We show that for monotone functions of this form, the stochastic continuous greedy algorithm attains an approximation ratio (in expectation) arbitrarily close to $(1-1/e) \approx 63\%$ using a polynomial estimator of the gradient. We argue that using this polynomial estimator instead of the sampling-based estimators of prior art eliminates a source of randomness and, experimentally, reduces execution time.
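
As a simple illustration of replacing a sampling-based gradient estimator with a deterministic polynomial one, consider a weighted coverage objective, whose multilinear extension is itself a polynomial with a closed-form gradient. The toy groups and weights below are made up, and this closed form is a special case used for illustration, not the paper's general Taylor-based estimator inside stochastic continuous greedy.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
groups = [np.array([0, 1]), np.array([1, 2, 3]), np.array([3, 4, 5])]  # toy coverage groups
w = np.array([1.0, 2.0, 0.5])                                          # toy group weights

def f(S):
    """Weighted coverage value of a set given as a boolean indicator vector."""
    return sum(wj for wj, Cj in zip(w, groups) if S[Cj].any())

def grad_poly(x):
    """Exact gradient of the multilinear extension G(x) = sum_j w_j (1 - prod_{i in C_j}(1 - x_i))."""
    g = np.zeros(n)
    for wj, Cj in zip(w, groups):
        for i in Cj:
            others = Cj[Cj != i]
            g[i] += wj * np.prod(1.0 - x[others])
    return g

def grad_sampled(x, num_samples=2000):
    """Monte Carlo estimator: E[f(R + i) - f(R - i)] with R ~ Bernoulli(x) per item."""
    g = np.zeros(n)
    for _ in range(num_samples):
        R = rng.random(n) < x
        for i in range(n):
            R_with, R_without = R.copy(), R.copy()
            R_with[i], R_without[i] = True, False
            g[i] += f(R_with) - f(R_without)
    return g / num_samples

x = rng.random(n)
print("deterministic:", np.round(grad_poly(x), 3))
print("sampled:      ", np.round(grad_sampled(x), 3))
```

The two estimates agree up to sampling noise, but the deterministic evaluation removes that noise and the associated cost of drawing many sample sets per gradient step.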


Submodular Maximization via Taylor Series Approximation

arXiv.org Artificial Intelligence

We then consider a class of submodular objectives that are a summation over non-linear functions of these multilinear functions. Our key observation is that the polynomial expansions of these functions are again multilinear; hence, compositions of multilinear functions with arbitrary analytic functions, that can be approximated by a Taylor series, can be computed efficiently. A broad range of problems, e.g., data summarization, influence maximization, facility location, and cache networks (c.f. ...

Submodular functions are set functions that exhibit a diminishing returns property. They naturally arise in many applications, including data summarization [2-4], facility location [5], recommendation systems [6], influence maximization [7], sensor placement [8], dictionary learning [9, 10], and active learning [11]. In these problems, the goal is to maximize a submodular function subject to matroid constraints. These problems are in ...
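
A low-order sketch of the Taylor-series idea, under the assumption that the objective is an analytic function h composed with a weighted sum over the chosen set (a multilinear function of the 0/1 inclusion indicators): truncating h to a short Taylor expansion turns the expectation over random sets into closed-form moments. The choice h(u) = log(1 + u), the second-order truncation around the mean, and the toy data are illustrative only; the paper develops higher-order expansions with explicit approximation guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
w = rng.random(n)        # item weights of the inner set function F(S) = sum_{i in S} w[i]
x = rng.random(n)        # fractional point; R ~ Bernoulli(x) independently per item

h = np.log1p             # outer analytic function h(u) = log(1 + u)
def h2(u):               # its second derivative
    return -1.0 / (1.0 + u) ** 2

# Second-order Taylor estimate of the multilinear extension G(x) = E[h(F(R))]:
#   E[h(F)] ~= h(mu) + 0.5 * h''(mu) * Var(F),
# where mu and Var(F) have closed forms thanks to independent inclusions.
mu = w @ x
var = np.sum(w ** 2 * x * (1.0 - x))
taylor_estimate = h(mu) + 0.5 * h2(mu) * var

# Monte Carlo baseline: sample sets R ~ x and average h(F(R)).
samples = (rng.random((20000, n)) < x).astype(float)
mc_estimate = h(samples @ w).mean()

print("Taylor estimate:     ", round(float(taylor_estimate), 4))
print("Monte Carlo estimate:", round(float(mc_estimate), 4))
```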