Munos, Rémi
Adaptive Stratified Sampling for Monte-Carlo integration of Differentiable functions
Carpentier, Alexandra, Munos, Rémi
We consider the problem of adaptive stratified sampling for Monte Carlo integration of a differentiable function given a finite number of evaluations of the function. We construct a sampling scheme that samples more often in regions where the function oscillates more, while allocating the samples such that they are well spread on the domain (a notion related to low discrepancy). We prove that the estimate returned by the algorithm is almost as accurate as the estimate that an optimal oracle strategy (one that would know the variations of the function everywhere) would return, and we provide a finite-sample analysis.
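As a rough illustration of the general idea (not the paper's algorithm), the sketch below spends a small pilot budget per stratum to estimate how much the function varies locally, then allocates the remaining evaluations proportionally to those estimates; the function name, the number of strata, and the pilot size are all made up for the example.

import numpy as np

def adaptive_stratified_integral(f, n_budget, n_strata=16, pilot_per_stratum=2, rng=None):
    # Illustrative sketch: estimate per-stratum variation from a pilot phase,
    # then allocate the remaining evaluations where the function oscillates more.
    rng = np.random.default_rng() if rng is None else rng
    edges = np.linspace(0.0, 1.0, n_strata + 1)
    widths = np.diff(edges)

    # Pilot phase: a few evaluations per stratum to estimate local variation.
    variation = np.empty(n_strata)
    samples = [[] for _ in range(n_strata)]
    for k in range(n_strata):
        x = rng.uniform(edges[k], edges[k + 1], size=pilot_per_stratum)
        y = f(x)
        samples[k].extend(y)
        variation[k] = y.std() + 1e-12   # crude proxy for the local oscillation of f

    # Allocation phase: the rest of the budget goes to the high-variation strata.
    remaining = n_budget - n_strata * pilot_per_stratum
    alloc = np.floor(remaining * variation / variation.sum()).astype(int)
    for k in range(n_strata):
        if alloc[k] > 0:
            x = rng.uniform(edges[k], edges[k + 1], size=alloc[k])
            samples[k].extend(f(x))

    # Stratified estimate: width-weighted average of per-stratum means.
    return sum(w * np.mean(s) for w, s in zip(widths, samples))

# Example: integrate a function that oscillates mostly near x = 1.
print(adaptive_stratified_integral(lambda x: np.sin(20 * x**3), n_budget=200))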
Regret Bounds for Restless Markov Bandits
Ortner, Ronald, Ryabko, Daniil, Auer, Peter, Munos, Rémi
We consider the restless Markov bandit problem, in which the state of each arm evolves according to a Markov process independently of the learner's actions. We suggest an algorithm that after $T$ steps achieves $\tilde{O}(\sqrt{T})$ regret with respect to the best policy that knows the distributions of all arms. No assumptions on the Markov chains are made except that they are irreducible. In addition, we show that index-based policies are necessarily suboptimal for the considered problem.
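The algorithm itself is not described in the abstract; the toy simulation below only illustrates the restless setting, in which every arm's Markov chain evolves at every step regardless of the learner's action. All names, the deterministic state rewards, and the round-robin policy are illustrative assumptions.

import numpy as np

def simulate_restless_arms(transition_mats, state_rewards, horizon, policy, rng=None):
    # Toy restless-bandit simulation: all chains advance at every step, whether or
    # not the corresponding arm is pulled; the learner collects only the pulled arm's reward.
    rng = np.random.default_rng() if rng is None else rng
    n_arms = len(transition_mats)
    states = np.zeros(n_arms, dtype=int)
    total = 0.0
    for t in range(horizon):
        arm = policy(t)
        total += state_rewards[arm][states[arm]]
        # All chains evolve independently of the learner's action.
        for i in range(n_arms):
            states[i] = rng.choice(len(transition_mats[i]), p=transition_mats[i][states[i]])
    return total

# Two 2-state arms with different mixing speeds, pulled in round-robin.
P0 = np.array([[0.9, 0.1], [0.1, 0.9]])
P1 = np.array([[0.5, 0.5], [0.5, 0.5]])
print(simulate_restless_arms([P0, P1], [np.array([0.0, 1.0]), np.array([0.2, 0.8])],
                             horizon=1000, policy=lambda t: t % 2))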
Optimistic Optimization of a Deterministic Function without the Knowledge of its Smoothness
Munos, Rémi
We consider a global optimization problem of a deterministic function f in a semi-metric space, given a finite budget of n evaluations. The function f is assumed to be locally smooth (around one of its global maxima) with respect to a semi-metric l. We describe two algorithms based on optimistic exploration that use a hierarchical partitioning of the space at all scales. A first contribution is an algorithm, DOO, that requires the knowledge of l. We report a finite-sample performance bound in terms of a measure of the quantity of near-optimal states. We then define a second algorithm, SOO, which does not require the knowledge of the semi-metric l under which f is smooth, and whose performance is almost as good as that of DOO optimally fitted.
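A minimal sketch of the DOO-style idea, assuming a one-dimensional domain [0,1], binary splits, and a user-supplied bound delta(h) on the variation of f inside a cell of depth h (this bound is where the knowledge of the semi-metric l enters); names and defaults are illustrative, and SOO, which removes the need for delta, is not shown.

import heapq

def doo_maximize(f, n_budget, delta, depth_max=30):
    # Sketch of optimistic optimization with a hierarchical binary partition of [0, 1]:
    # repeatedly expand the cell with the largest optimistic score f(center) + delta(depth).
    def entry(depth, left, right):
        center = 0.5 * (left + right)
        value = f(center)
        # Negative score because heapq is a min-heap.
        return (-(value + delta(depth)), depth, left, right, value)

    heap = [entry(0, 0.0, 1.0)]
    best_x, best_val = 0.5, heap[0][4]
    evals = 1
    while evals + 2 <= n_budget and heap:
        _, depth, left, right, value = heapq.heappop(heap)
        if value > best_val:
            best_x, best_val = 0.5 * (left + right), value
        if depth >= depth_max:
            continue
        mid = 0.5 * (left + right)
        # Expand the most optimistic cell into two children.
        heapq.heappush(heap, entry(depth + 1, left, mid))
        heapq.heappush(heap, entry(depth + 1, mid, right))
        evals += 2
    for _, _, left, right, value in heap:  # also consider cells never expanded
        if value > best_val:
            best_x, best_val = 0.5 * (left + right), value
    return best_x, best_val

# Example with a Lipschitz-style bound delta(h) = 2 * 2**(-h).
print(doo_maximize(lambda x: -(x - 0.3) ** 2, n_budget=100, delta=lambda h: 2.0 * 2 ** -h))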
Speedy Q-Learning
Ghavamzadeh, Mohammad, Kappen, Hilbert J., Azar, Mohammad G., Munos, Rémi
We introduce a new convergent variant of Q-learning, called speedy Q-learning (SQL), to address the problem of slow convergence in the standard form of the Q-learning algorithm. We prove a PAC bound on the performance of SQL, which shows that for an MDP with n state-action pairs and discount factor \gamma, only T=O\big(\log(n)/(\epsilon^{2}(1-\gamma)^{4})\big) steps are required for the SQL algorithm to converge to an \epsilon-optimal action-value function with high probability. This bound has a better dependency on 1/\epsilon and 1/(1-\gamma), and is thus tighter than the best available result for Q-learning. Our bound is also superior to the existing results for both model-free and model-based instances of batch Q-value iteration, which are considered to be more efficient than incremental methods such as Q-learning.
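The abstract does not spell out the update rule; the sketch below shows a speedy-Q-learning-style synchronous tabular update as it is commonly presented (each iteration applies an empirical Bellman backup to both the current and the previous Q estimate and combines them with a 1/(k+1) step size). The exact form, the sampling model, and the helper names are assumptions of this illustration, not the paper's definitions.

import numpy as np

def speedy_q_learning(sample_next_state, reward, n_states, n_actions,
                      gamma=0.9, n_iters=1000):
    # Hedged sketch: sample_next_state(s, a) draws a next state and reward[s, a]
    # is the expected reward; both are assumptions of this toy setup.
    q_prev = np.zeros((n_states, n_actions))
    q = np.zeros((n_states, n_actions))
    for k in range(n_iters):
        alpha = 1.0 / (k + 1)
        t_q = np.empty_like(q)       # empirical Bellman backup of the current iterate
        t_q_prev = np.empty_like(q)  # ... and of the previous iterate, same sampled next state
        for s in range(n_states):
            for a in range(n_actions):
                s_next = sample_next_state(s, a)
                t_q[s, a] = reward[s, a] + gamma * q[s_next].max()
                t_q_prev[s, a] = reward[s, a] + gamma * q_prev[s_next].max()
        # Speedy update: small step towards T q_prev, aggressive step along (T q - T q_prev).
        q_next = q + alpha * (t_q_prev - q) + (1 - alpha) * (t_q - t_q_prev)
        q_prev, q = q, q_next
    return q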
Selecting the State-Representation in Reinforcement Learning
Maillard, Odalric-ambrym, Ryabko, Daniil, Munos, Rémi
The problem of selecting the right state-representation in a reinforcement learning problem is considered. Several models (functions mapping past observations to a finite set) of the observations are given, and it is known that for at least one of these models the resulting state dynamics are indeed Markovian. Without knowing which of the models is the correct one, or what the probabilistic characteristics of the resulting MDP are, it is required to obtain as much reward as the optimal policy for the correct model (or for the best of the correct models, if there are several). We propose an algorithm that achieves this, with a regret of order T^{2/3}, where T is the time horizon.
Finite Time Analysis of Stratified Sampling for Monte Carlo
Carpentier, Alexandra, Munos, Rémi
We consider the problem of stratified sampling for Monte-Carlo integration. We model this problem in a multi-armed bandit setting, where the arms represent the strata, and the goal is to estimate a weighted average of the mean values of the arms. We propose a strategy that samples the arms according to an upper bound on their standard deviations and compare its estimation quality to an ideal allocation that would know the standard deviations of the arms. We provide two regret analyses: a distribution-dependent bound O(n^{-3/2}) that depends on a measure of the disparity of the arms, and a distribution-free bound O(n^{-4/3}) that does not. To the best of our knowledge, such a finite-time analysis is new for this problem.
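As a hedged sketch of the allocation rule described above (sampling arms according to an upper bound on their standard deviations), the code below scores each stratum by its weight times an optimistic estimate of its standard deviation divided by its sample count, and gives the next sample to the highest-scoring stratum; the confidence term and the constant c are illustrative choices, not the paper's.

import numpy as np

def ucb_stratified_mc(sample_arm, weights, n_budget, c=1.0):
    # Illustrative UCB-on-standard-deviation allocation for stratified Monte Carlo.
    k = len(weights)
    # Initialise with two samples per stratum so empirical standard deviations are defined.
    samples = [[sample_arm(i), sample_arm(i)] for i in range(k)]
    for t in range(2 * k, n_budget):
        counts = np.array([len(s) for s in samples], dtype=float)
        stds = np.array([np.std(s, ddof=1) for s in samples])
        # Optimistic allocation score: weight * (std + confidence bonus) / count.
        scores = np.asarray(weights) * (stds + c * np.sqrt(np.log(n_budget) / counts)) / counts
        i = int(np.argmax(scores))
        samples[i].append(sample_arm(i))
    # Weighted average of per-stratum means.
    return sum(w * np.mean(s) for w, s in zip(weights, samples))

# Example: two strata of equal weight, the second one much noisier.
rng = np.random.default_rng(0)
def sample_arm(i):
    return rng.normal([0.0, 1.0][i], [0.1, 2.0][i])
print(ucb_stratified_mc(sample_arm, [0.5, 0.5], n_budget=500))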
Scrambled Objects for Least-Squares Regression
Maillard, Odalric, Munos, Rémi
We consider least-squares regression using a randomly generated subspace G_P \subset F of finite dimension P, where F is a function space of infinite dimension, e.g.~L_2([0,1]^d). G_P is defined as the span of P random features that are linear combinations of the basis functions of F weighted by random Gaussian i.i.d.~coefficients. In particular, we consider multi-resolution random combinations at all scales of a given mother function, such as a hat function or a wavelet. In the latter case, the resulting Gaussian objects are called {\em scrambled wavelets} and we show that they make it possible to approximate functions in Sobolev spaces H^s([0,1]^d). As a result, given N data, the least-squares estimate \hat g built from P scrambled wavelets has excess risk ||f^* - \hat g||_\P^2 = O(||f^*||^2_{H^s([0,1]^d)}(\log N)/P + P(\log N)/N) for target functions f^* \in H^s([0,1]^d) of smoothness order s > d/2. An interesting aspect of the resulting bounds is that they do not depend on the distribution \P from which the data are generated, which is important in the statistical regression setting considered here. Randomization makes it possible to adapt to any possible distribution. We conclude by describing an efficient numerical implementation using lazy expansions with numerical complexity \tilde O(2^d N^{3/2}\log N + N^2), where d is the dimension of the input space.
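A toy one-dimensional sketch of the construction: random features are built as Gaussian combinations of hat functions at all dyadic scales, and a least-squares fit is done in the resulting P-dimensional space. The exact scaling of the Gaussian weights across scales (which is what yields the Sobolev approximation result) is omitted, and all names and constants are illustrative.

import numpy as np

def scrambled_hat_features(x, n_features, max_level=6, rng=None):
    # Sketch of random multi-resolution features in d = 1: hat mother function,
    # dyadic scales up to max_level, i.i.d. Gaussian mixing weights.
    rng = np.random.default_rng(0) if rng is None else rng
    # Enumerate the hat-function basis: one hat per dyadic interval.
    centers, scales = [], []
    for level in range(max_level + 1):
        for j in range(2 ** level):
            centers.append((j + 0.5) / 2 ** level)
            scales.append(2 ** level)
    centers, scales = np.array(centers), np.array(scales)
    # Evaluate every basis hat at every input point: shape (len(x), n_basis).
    u = np.abs(x[:, None] - centers[None, :]) * scales[None, :]
    basis = np.clip(1.0 - 2.0 * u, 0.0, None)
    # Each random feature is a Gaussian combination of all basis functions.
    weights = rng.standard_normal((basis.shape[1], n_features)) / np.sqrt(basis.shape[1])
    return basis @ weights

# Least-squares fit in the random feature space.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
phi = scrambled_hat_features(x, n_features=50)
coef, *_ = np.linalg.lstsq(phi, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((phi @ coef - y) ** 2)))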
LSTD with Random Projections
Ghavamzadeh, Mohammad, Lazaric, Alessandro, Maillard, Odalric, Munos, Rémi
We consider the problem of reinforcement learning in high-dimensional spaces when the number of features is larger than the number of samples. In particular, we study the least-squares temporal difference (LSTD) learning algorithm when a space of low dimension is generated with a random projection from a high-dimensional space. We provide a thorough theoretical analysis of LSTD with random projections and derive performance bounds for the resulting algorithm. We also show how the error of LSTD with random projections is propagated through the iterations of a policy iteration algorithm and provide a performance bound for the resulting least-squares policy iteration (LSPI) algorithm.
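A minimal sketch of the construction analysed here, assuming on-policy transition data: the high-dimensional features are compressed with a Gaussian random projection and the standard LSTD linear system is solved in the projected space. The 1/sqrt(low_dim) scaling, the small ridge term, and the function signature are assumptions of the sketch.

import numpy as np

def lstd_random_projection(phi, phi_next, rewards, gamma=0.95, low_dim=20, ridge=1e-6, rng=None):
    # phi, phi_next: (n_samples, D) high-dimensional features at s_t and s_{t+1};
    # rewards: (n_samples,) observed rewards along the same trajectory.
    rng = np.random.default_rng() if rng is None else rng
    d_high = phi.shape[1]
    # Random projection matrix: D-dimensional features -> low_dim features.
    proj = rng.standard_normal((d_high, low_dim)) / np.sqrt(low_dim)
    psi, psi_next = phi @ proj, phi_next @ proj
    # Standard LSTD system in the projected space: A theta = b.
    a_mat = psi.T @ (psi - gamma * psi_next) + ridge * np.eye(low_dim)
    b_vec = psi.T @ rewards
    theta = np.linalg.solve(a_mat, b_vec)
    # The value estimate of a state is the inner product of its projected features with theta.
    return theta, proj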