Finite sample bounds for barycenter estimation in geodesic spaces

Brunel, Victor-Emmanuel, Serres, Jordan

arXiv.org Machine Learning

We study the problem of estimating the barycenter of a distribution given i.i.d. data in a geodesic space. Assuming an upper curvature bound in Alexandrov's sense and a support condition ensuring the strong geodesic convexity of the barycenter problem, we establish finite-sample error bounds in expectation and with high probability. Our results generalize Hoeffding- and Bernstein-type concentration inequalities from Euclidean to geodesic spaces. Building on these concentration inequalities, we derive statistical guarantees for two efficient algorithms for the computation of barycenters.
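
As a toy illustration of the estimation target (not taken from the paper), the Python sketch below computes the Fréchet mean in the Euclidean special case, where gradient descent on the functional F(b) = (1/n) * sum_i d(b, x_i)^2 simply recovers the sample mean. In a general geodesic space the straight-line update would be replaced by a step along a geodesic; the function name and step size here are illustrative.

    import numpy as np

    # Hypothetical Euclidean special case: minimize the Frechet functional
    #   F(b) = (1/n) * sum_i ||b - x_i||^2
    # by gradient descent. Its minimizer in R^d is the sample mean.
    def frechet_mean_gd(points, steps=100, lr=0.25):
        b = points[0].copy()                        # arbitrary initial guess
        for _ in range(steps):
            grad = 2.0 * (b - points).mean(axis=0)  # gradient of F at b
            b = b - lr * grad                       # straight-line (Euclidean geodesic) step
        return b

    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 3))                   # i.i.d. sample in R^3
    assert np.allclose(frechet_mean_gd(x), x.mean(axis=0))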


The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains

Neural Information Processing Systems

We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt N)$ regret against adversarial opponents; here we show it is universal in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents. In addition, we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent for strongly convex domains.
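
For readers unfamiliar with the lazy variant, here is a minimal Python sketch (not the paper's code) of lazy online subgradient descent with linear losses on a Euclidean ball, a canonical strongly convex domain; the class name and the fixed step size eta are illustrative assumptions.

    import numpy as np

    class LazyOGD:
        # Lazy online (sub)gradient descent on the ball of a given radius:
        # play the projection of the scaled negative *cumulative* gradient
        # sum, rather than stepping from the previous iterate (greedy OGD).
        def __init__(self, dim, radius=1.0, eta=0.1):
            self.G = np.zeros(dim)   # running sum of observed subgradients
            self.radius, self.eta = radius, eta

        def play(self):
            y = -self.eta * self.G   # lazy step in the unconstrained space
            norm = np.linalg.norm(y)
            return y if norm <= self.radius else self.radius * y / norm

        def observe(self, g):
            self.G += g              # subgradient revealed by the opponent

At round t the learner calls play() to commit its point, suffers the linear loss, and then feeds the revealed subgradient to observe().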


Online Non-Monotone DR-submodular Maximization

Thang, Nguyen Kim, Srivastav, Abhinav

arXiv.org Machine Learning

In this paper, we study problems at the interface of two important fields: \emph{submodular optimization} and \emph{online learning}. Submodular functions play a vital role in modelling cost functions that arise naturally in many areas of discrete optimization, and have been studied under various models of computation. Independently, submodularity has been considered in continuous domains; indeed, many problems arising in machine learning and statistics have been modelled using continuous DR-submodular functions. In this work, we study the problem of maximizing \textit{non-monotone} continuous DR-submodular functions within the framework of online learning. We provide three main results. First, we present an online algorithm (in the full-information setting) that achieves an approximation guarantee (depending on the search space) for the problem of maximizing non-monotone continuous DR-submodular functions over a \emph{general} convex domain. To the best of our knowledge, no prior approximation algorithm in the full-information setting was known for non-monotone continuous DR-submodular functions, even over a \emph{down-closed} convex domain. Second, we show that the online stochastic mirror ascent algorithm (in the full-information setting) achieves an improved approximation ratio of $(1/4)$ for maximizing non-monotone continuous DR-submodular functions over a \emph{down-closed} convex domain. Finally, we extend our second result to the bandit setting, where we present the first approximation guarantee of $(1/4)$. To the best of our knowledge, no approximation algorithm for non-monotone submodular maximization was previously known in the bandit setting.
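
As a heavily simplified illustration of the mirror-ascent machinery behind the second result, the sketch below performs one entropic mirror-ascent step on the down-closed box [0, 1]^d; the step size is arbitrary and none of the machinery behind the $(1/4)$ guarantee is reproduced, so treat it as a stand-in rather than the paper's algorithm.

    import numpy as np

    def entropic_mirror_ascent_step(x, grad, eta=0.05):
        # With the negative-entropy mirror map, a gradient step in the dual
        # space becomes a coordinate-wise multiplicative update; clipping
        # returns the iterate to the down-closed box [0, 1]^d.
        # Assumes x > 0 coordinate-wise.
        y = x * np.exp(eta * grad)
        return np.minimum(y, 1.0)

    x = np.full(4, 0.5)                              # interior starting point
    x = entropic_mirror_ascent_step(x, np.array([1.0, -1.0, 0.5, 0.0]))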


Constraint Satisfaction Problems: Convexity Makes AllDifferent Constraints Tractable

Fellows, Michael (Charles Darwin University) | Friedrich, Tobias (Max-Planck-Institut für Informatik) | Hermelin, Danny (Max-Planck-Institut für Informatik) | Narodytska, Nina (NICTA and University of New South Wales) | Rosamond, Frances (Charles Darwin University)

AAAI Conferences

We examine the complexity of constraint satisfaction problems that consist of a set of AllDiff constraints. Such CSPs naturally model a wide range of real-world and combinatorial problems, such as scheduling, frequency allocation, and graph coloring. As this problem is known to be NP-complete, we investigate under which further assumptions it becomes tractable. We observe that a crucial property seems to be the convexity of the variable domains and constraints. Our main contribution is an extensive study of the complexity of Multiple AllDiff CSPs for a set of natural parameters, such as the maximum domain size and the maximum size of the constraint scopes. We show that, depending on the parameter, convexity can make the problem tractable, while it is provably intractable in general.
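
To see why convexity helps, consider a single AllDiff constraint whose variable domains are intervals [lo, hi]. The Python sketch below (an illustration of this convex special case, not the paper's algorithms for multiple constraints) decides feasibility with the classic earliest-deadline-first greedy, which is exact precisely because the domains are convex.

    import heapq

    def alldiff_intervals(domains):
        # domains: list of (lo, hi) intervals, one per variable. Sweep the
        # values upward, releasing variables whose interval has opened and
        # always serving the one with the smallest upper bound (deadline).
        by_lo = sorted(range(len(domains)), key=lambda i: domains[i][0])
        heap, assignment, k = [], {}, 0
        lo_min = min(lo for lo, _ in domains)
        hi_max = max(hi for _, hi in domains)
        for v in range(lo_min, hi_max + 1):
            while k < len(by_lo) and domains[by_lo[k]][0] <= v:
                heapq.heappush(heap, (domains[by_lo[k]][1], by_lo[k]))
                k += 1
            if heap:
                hi, i = heapq.heappop(heap)
                if hi < v:
                    return None          # some variable's window has closed
                assignment[i] = v
        return assignment if len(assignment) == len(domains) else None

    print(alldiff_intervals([(1, 2), (1, 2), (2, 3)]))  # {0: 1, 1: 2, 2: 3}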