Finite sample bounds for barycenter estimation in geodesic spaces
Brunel, Victor-Emmanuel, Serres, Jordan
We study the problem of estimating the barycenter of a distribution given i.i.d. data in a geodesic space. Assuming an upper curvature bound in Alexandrov's sense and a support condition ensuring the strong geodesic convexity of the barycenter problem, we establish finite-sample error bounds in expectation and with high probability. Our results generalize Hoeffding- and Bernstein-type concentration inequalities from Euclidean to geodesic spaces. Building on these concentration inequalities, we derive statistical guarantees for two efficient algorithms for the computation of barycenters.
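As a toy illustration of the estimation problem (not the paper's method), the sketch below computes an empirical barycenter, i.e. a Fréchet mean, on the unit circle by gradient descent on the sample Fréchet functional. The circle stands in for a geodesic space, and concentrating the sample on a small arc mimics the support condition that makes the problem geodesically convex; the function names and step size are illustrative choices.

```python
import numpy as np

def circ_diff(a, b):
    # Signed geodesic displacement from b to a on the unit circle, in (-pi, pi].
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def frechet_mean_circle(thetas, steps=200, lr=0.5):
    # Minimize F(x) = mean_i d(x, theta_i)^2 by gradient descent,
    # using grad F(x) = 2 * mean_i circ_diff(x, theta_i).
    x = thetas[0]
    for _ in range(steps):
        g = 2.0 * np.mean(circ_diff(x, thetas))
        x = (x - lr * g) % (2 * np.pi)
    return x

rng = np.random.default_rng(0)
# i.i.d. sample concentrated near angle 1.0 (support on a small arc)
data = rng.normal(loc=1.0, scale=0.1, size=500) % (2 * np.pi)
est = frechet_mean_circle(data)
```

On such concentrated data the estimator lands near the population barycenter, and the finite-sample bounds in the abstract quantify how fast this happens as the sample grows.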
The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains
We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt N)$ regret against adversarial opponents; here we show it is universal, in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents. In addition, we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent on strongly convex domains.
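A minimal sketch of the lazy (follow-the-regularized-leader style) variant on a strongly convex domain: the player projects the scaled sum of past gradients, rather than updating the previous iterate. The Euclidean unit ball used here is strongly convex; the step size, loss distribution, and regret check are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lazy_ogd_regret(grads, radius=1.0, eta=0.1):
    # Lazy online subgradient descent on the Euclidean ball of given radius:
    # play x_t = Proj_ball(-eta * sum of gradients seen so far).
    d = grads.shape[1]
    G = np.zeros(d)                      # running gradient sum
    plays = []
    for g in grads:
        y = -eta * G
        nrm = np.linalg.norm(y)
        plays.append(y if nrm <= radius else radius * y / nrm)
        G += g
    plays = np.array(plays)
    # Cumulative linear loss of the plays.
    loss = np.einsum('td,td->t', grads, plays).sum()
    # Best fixed point in hindsight for linear losses: -radius * G / |G|.
    best = -radius * G / max(np.linalg.norm(G), 1e-12)
    return loss - grads.sum(axis=0) @ best

rng = np.random.default_rng(1)
N = 2000
# i.i.d. linear losses with a drift, matching the stochastic-opponent regime
grads = rng.normal(size=(N, 3)) + np.array([0.5, 0.0, 0.0])
reg = lazy_ogd_regret(grads, eta=1.0 / np.sqrt(N))
```

With the standard $\eta \propto 1/\sqrt N$ tuning the regret stays $O(\sqrt N)$ in the worst case; the abstract's point is that on strongly convex domains the same lazy algorithm adapts to i.i.d. opponents with $O(\log N)$ expected regret.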
Online Non-Monotone DR-submodular Maximization
Thang, Nguyen Kim, Srivastav, Abhinav
In this paper, we study problems at the interface of two important fields: \emph{submodular optimization} and \emph{online learning}. Submodular functions play a vital role in modelling cost functions that naturally arise in many areas of discrete optimization. These functions have been studied under various models of computation. Independently, submodularity has been considered in continuous domains. In fact, many problems arising in machine learning and statistics have been modelled using continuous DR-submodular functions. In this work, we study the problem of maximizing \textit{non-monotone} continuous DR-submodular functions within the framework of online learning. We provide three main results. First, we present an online algorithm (in the full-information setting) that achieves an approximation guarantee (depending on the search space) for the problem of maximizing non-monotone continuous DR-submodular functions over a \emph{general} convex domain. To the best of our knowledge, no prior approximation algorithm in the full-information setting was known for non-monotone continuous DR-submodular functions, even over a \emph{down-closed} convex domain. Second, we show that the online stochastic mirror ascent algorithm (in the full-information setting) achieves an improved approximation ratio of $(1/4)$ for maximizing non-monotone continuous DR-submodular functions over a \emph{down-closed} convex domain. Finally, we extend our second result to the bandit setting, where we present the first approximation guarantee of $(1/4)$. To the best of our knowledge, no approximation algorithm for non-monotone submodular maximization was previously known in the bandit setting.
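To make "continuous DR-submodular" concrete: a twice-differentiable function is DR-submodular when all its mixed second partial derivatives are non-positive. The quadratic below has this property by construction, and the sketch maximizes it over the down-closed box $[0,1]^n$ with plain projected gradient ascent; this is an offline toy, not the paper's online mirror ascent, and all numbers are illustrative.

```python
import numpy as np

def pga_box(b, H, steps=500, lr=0.05):
    # Projected gradient ascent on f(x) = b.x - 0.5 x^T H x over [0,1]^n.
    # With H >= 0 entrywise, every mixed second partial is -H_ij <= 0,
    # so f is continuous DR-submodular; a negative entry in b makes it
    # non-monotone (the gradient can point downward in that coordinate).
    x = np.full(len(b), 0.5)
    for _ in range(steps):
        g = b - H @ x               # gradient of f
        x = np.clip(x + lr * g, 0.0, 1.0)   # projection onto the box
    return x

b = np.array([1.0, 0.8, -0.2])
H = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
x = pga_box(b, H)
fval = b @ x - 0.5 * x @ H @ x
```

Here the third coordinate is driven to its lower boundary because the objective is non-monotone in it; the constant-factor guarantees in the abstract concern exactly this regime, where gradient directions can conflict.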
Constraint Satisfaction Problems: Convexity Makes AllDifferent Constraints Tractable
Fellows, Michael (Charles Darwin University) | Friedrich, Tobias (Max-Planck-Institut für Informatik) | Hermelin, Danny (Max-Planck-Institut für Informatik) | Narodytska, Nina (NICTA and University of New South Wales) | Rosamond, Frances (Charles Darwin University)
We examine the complexity of constraint satisfaction problems that consist of a set of AllDiff constraints. Such CSPs naturally model a wide range of real-world and combinatorial problems, such as scheduling, frequency allocation, and graph coloring. As this problem is known to be NP-complete, we investigate under which further assumptions it becomes tractable. We observe that a crucial property seems to be the convexity of the variable domains and constraints. Our main contribution is an extensive study of the complexity of Multiple AllDiff CSPs for a set of natural parameters, such as maximum domain size and maximum size of the constraint scopes. We show that, depending on the parameter, convexity can make the problem tractable while it is provably intractable in general.
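The role of convexity is easiest to see for a single AllDiff constraint: when every variable's domain is an integer interval (a convex domain), satisfiability can be decided by a classic earliest-deadline greedy justified by Hall's theorem. The sketch below illustrates that special case; function and variable names are illustrative, and this is not the paper's parameterized algorithm for multiple constraints.

```python
def alldiff_intervals(domains):
    # Satisfiability of one AllDifferent constraint when each domain is an
    # integer interval (lo, hi). Process variables by increasing upper bound
    # and give each the smallest unused value >= lo; by Hall's theorem this
    # greedy succeeds iff a system of distinct values exists.
    used = set()
    assignment = {}
    for i, (lo, hi) in sorted(enumerate(domains), key=lambda t: t[1][1]):
        v = lo
        while v in used:
            v += 1
        if v > hi:
            return None          # some set of variables has too few values
        used.add(v)
        assignment[i] = v
    return assignment

sat = alldiff_intervals([(1, 2), (1, 2), (2, 3)])      # satisfiable
unsat = alldiff_intervals([(1, 2), (1, 2), (1, 2)])    # 3 variables, 2 values
```

With arbitrary (non-convex) domains the same question becomes a general bipartite matching problem, and combining many AllDiff constraints is where the NP-completeness discussed in the abstract arises.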