Dynamic Regret


MetaCURL: Non-stationary Concave Utility Reinforcement Learning
Bianca Marin Moreno, Margaux Brégère, Pierre Gaillard, Nadia Oudjane (Inria)

Neural Information Processing Systems

We explore online learning in episodic Markov decision processes in non-stationary environments (changing losses and probability transitions). Our focus is the Concave Utility Reinforcement Learning (CURL) problem, an extension of classical RL that handles convex performance criteria on the state-action distributions induced by agent policies. While various machine learning problems can be written as CURL, its non-linearity invalidates the traditional Bellman equations.
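To make the CURL objective concrete, here is a minimal sketch (our own illustration, not the paper's algorithm): it computes the state-action occupancy measure induced by a policy in a toy episodic MDP and evaluates a concave utility on it, using entropy as a stand-in performance criterion.

```python
import numpy as np

def occupancy_measure(P, pi, mu0, H):
    """State-action occupancy of policy pi over horizon H.
    P[s, a, s2]: transition kernel, pi[h, s, a]: policy, mu0: initial law."""
    S, A, _ = P.shape
    d = np.zeros((H, S, A))
    mu = mu0.copy()
    for h in range(H):
        d[h] = mu[:, None] * pi[h]            # joint state-action law at step h
        mu = np.einsum('sa,sat->t', d[h], P)  # push forward through dynamics
    return d

def entropy_utility(d):
    """Entropy of the average occupancy: a concave utility (pure exploration)."""
    avg = d.mean(axis=0).ravel()
    avg = avg[avg > 0]
    return -(avg * np.log(avg)).sum()

# Toy 2-state, 2-action MDP with uniform transitions and a uniform policy.
S, A, H = 2, 2, 3
P = np.full((S, A, S), 0.5)
pi = np.full((H, S, A), 1.0 / A)
mu0 = np.array([1.0, 0.0])
d = occupancy_measure(P, pi, mu0, H)
```

Because the utility is a non-linear function of the whole occupancy measure, it cannot be decomposed into per-step rewards, which is exactly why Bellman equations fail.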


Multi-armed Bandits: Competing with Optimal Sequences

Neural Information Processing Systems

We consider the sequential decision-making problem in the adversarial setting, where regret is measured with respect to the optimal sequence of actions and the feedback adheres to the bandit setting. It is well known that obtaining sublinear regret in this setting is impossible in general, which raises the question: when can we do better than linear regret? Previous works show that when the environment is guaranteed to vary slowly, and we are furthermore given prior knowledge regarding its variation (i.e., a limit on the amount of change suffered by the environment), then this task is feasible. The caveat, however, is that such prior knowledge is unlikely to be available in practice, which renders the obtained regret bounds somewhat irrelevant. Our main result is a regret guarantee that scales with the variation parameter of the environment, without requiring any prior knowledge about it whatsoever. We thereby also resolve an open problem posed by Gur, Zeevi and Besbes [8]. A key component of our result is a statistical test for identifying non-stationarity in a sequence of independent random variables. This test either identifies non-stationarity or upper-bounds the absolute deviation of the corresponding sequence of mean values in terms of its total variation. The test is interesting in its own right and has the potential to prove useful in additional settings.
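The flavor of such a test can be sketched as follows. This is our own illustrative prefix-vs-suffix mean comparison with Hoeffding-style thresholds, not the paper's exact procedure:

```python
import math
import random

def detects_change(xs, delta=0.05):
    """Prefix-vs-suffix mean test with Hoeffding-style thresholds for
    [0, 1]-valued independent samples. Illustrative sketch only."""
    n = len(xs)
    rad = math.sqrt(math.log(2 * n / delta) / 2)
    for k in range(1, n):
        gap = abs(sum(xs[:k]) / k - sum(xs[k:]) / (n - k))
        # A stationary sequence keeps every such gap below thr w.h.p.
        thr = rad * (1 / math.sqrt(k) + 1 / math.sqrt(n - k))
        if gap > thr:
            return True   # deviation too large to be chance: non-stationary
    return False

random.seed(0)
flat = [random.random() for _ in range(400)]                      # stationary
shifted = flat[:200] + [0.9 + 0.1 * random.random() for _ in range(200)]
```

On the stationary sequence no split exceeds its threshold, while the abrupt mean shift in `shifted` is flagged at the change point.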


Adaptive Gradient-Based Meta-Learning Methods

Neural Information Processing Systems

We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve their meta-test-time performance on standard problems in few-shot learning and federated learning.


An Equivalence Between Static and Dynamic Regret Minimization

Neural Information Processing Systems

We study the problem of dynamic regret minimization in online convex optimization, in which the objective is to minimize the difference between the cumulative loss of an algorithm and that of an arbitrary sequence of comparators. While the literature on this topic is very rich, a unifying framework for the analysis and design of these algorithms is still missing. In this paper we show that for linear losses, dynamic regret minimization is equivalent to static regret minimization in an extended decision space. Using this simple observation, we show that there is a frontier of lower bounds trading off penalties due to the variance of the losses and penalties due to variability of the comparator sequence, and provide a framework for achieving any of the guarantees along this frontier.
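The central observation for linear losses can be checked numerically: stacking the plays, comparators, and loss vectors turns the dynamic regret into a single static regret in the extended space R^{Td}. A toy sketch with random data:

```python
import numpy as np

# Dynamic regret sum_t <g_t, x_t - u_t> for linear losses equals one static
# regret in the extended decision space R^{T*d} (hypothetical random data).
T, d = 5, 3
rng = np.random.default_rng(1)
g = rng.standard_normal((T, d))   # loss vectors
x = rng.standard_normal((T, d))   # the algorithm's plays
u = rng.standard_normal((T, d))   # an arbitrary comparator sequence

dyn_regret = sum(g[t] @ (x[t] - u[t]) for t in range(T))
ext_regret = g.ravel() @ (x.ravel() - u.ravel())   # one stacked comparator
```

The two quantities agree exactly, since the inner product factorizes across the T blocks.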


Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes

Neural Information Processing Systems

Reducing the variance of the gradient estimator is known to improve the convergence rate of stochastic gradient-based optimization and sampling algorithms. One way of achieving variance reduction is to design importance sampling strategies. Recently, the problem of designing such schemes was formulated as an online learning problem with bandit feedback, and algorithms with sub-linear static regret were designed. In this work, we build on this framework and propose Avare, a simple and efficient algorithm for adaptive importance sampling for finite-sum optimization and sampling with decreasing step-sizes.
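A minimal sketch of why adaptive proposals help (our illustration, not Avare itself): for a finite-sum least-squares objective, the importance-sampled gradient `G[i] / p[i]` with `i ~ p` is unbiased for any proposal `p`, and its second moment is minimized by taking `p_i` proportional to the per-example gradient norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 4
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
w = rng.standard_normal(d)

# Per-example gradients of 0.5 * sum_i (x_i . w - y_i)^2.
G = X * (X @ w - y)[:, None]
norms = np.linalg.norm(G, axis=1)

# Sampling i ~ p and returning G[i] / p[i] is unbiased for G.sum(axis=0);
# its second moment sum_i ||G[i]||^2 / p[i] is minimized by p_i ∝ ||G[i]||.
def second_moment(p):
    return (norms ** 2 / p).sum()

p_opt = norms / norms.sum()
uniform = np.full(n, 1.0 / n)
```

By Cauchy-Schwarz, the gradient-norm-proportional proposal never has larger second moment than uniform sampling; the online-learning formulation referenced above is about tracking such a proposal when the gradients change over time.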


Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously

Neural Information Processing Systems

We consider the classic problem of online convex optimisation. Whereas the notion of static regret is relevant for stationary problems, the notion of switching regret is more appropriate for non-stationary problems. A switching regret is defined relative to any segmentation of the trial sequence, and is equal to the sum of the static regrets of each segment. In this paper we show that, perhaps surprisingly, we can achieve the asymptotically optimal switching regret on every possible segmentation simultaneously. Our algorithm for doing so is very efficient: its space and per-trial time complexity are logarithmic in the time horizon.
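Switching regret as defined above can be computed directly. This small sketch (names and data are ours) also illustrates that finer segmentations yield a stronger comparator, hence larger regret:

```python
import numpy as np

def switching_regret(loss, plays, starts):
    """Switching regret w.r.t. the segmentation whose segments begin at
    `starts`: the sum of the static regrets of each segment.
    loss[t, k] is expert k's loss at trial t; plays[t] the expert played."""
    bounds = list(starts) + [len(plays)]
    total = 0.0
    for s, e in zip(bounds, bounds[1:]):
        alg = loss[np.arange(s, e), plays[s:e]].sum()
        best = loss[s:e].sum(axis=0).min()   # best fixed expert on the segment
        total += alg - best
    return total

rng = np.random.default_rng(2)
loss = rng.random((10, 3))
plays = rng.integers(0, 3, size=10)
coarse = switching_regret(loss, plays, [0])    # one segment: static regret
fine = switching_regret(loss, plays, [0, 5])   # two segments
```

With the trivial segmentation `[0]` this recovers ordinary static regret; adding split points can only decrease the comparator's loss, so the regret is monotone in segmentation refinement.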


Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms
Na Li (SEAS)

Neural Information Processing Systems

We consider online convex optimization with time-varying stage costs and additional switching costs. Since the switching costs introduce coupling across all stages, multi-step-ahead (long-term) predictions are incorporated to improve the online performance. However, longer-term predictions tend to suffer from lower quality. Thus, a critical question is: how can the impact of long-term prediction errors on the online performance be reduced? To address this question, we introduce a gradient-based online algorithm, Receding Horizon Inexact Gradient (RHIG), and analyze its performance via dynamic regret bounds in terms of the temporal variation of the environment and the prediction errors. RHIG only considers at most W-step-ahead predictions to avoid being misled by worse predictions in the longer term. The optimal choice of W suggested by our regret bounds depends on the tradeoff between the variation of the environment and the prediction accuracy. Additionally, we apply RHIG to a well-established stochastic prediction error model and provide expected regret and concentration bounds under correlated prediction errors. Lastly, we numerically test the performance of RHIG on quadrotor tracking problems.
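The role of the truncation at W can be sketched in a few lines (a loose illustration of the idea, not the exact RHIG recursion): predictions beyond the W-step window never enter the update, so arbitrarily bad long-term predictions cannot mislead the decision.

```python
def receding_step(x, pred_grads, W, lr=0.1):
    """One receding-horizon update that descends only along the first
    min(W, len(pred_grads)) predicted gradients; predictions farther than
    W steps ahead are ignored (loose sketch, not the exact RHIG update)."""
    for g in pred_grads[:W]:
        x = x - lr * g
    return x

good = [0.3, -0.1, 0.2, 5.0, -7.0]    # long-term predictions are noisy
bad = [0.3, -0.1, 0.2, 99.0, 99.0]    # ... or outright wrong
```

With W = 3 the two prediction streams produce identical decisions, whereas using the full horizon lets the corrupted long-term predictions distort the update, which is the tradeoff the optimal W balances.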


Near-Optimal Dynamic Regret for Adversarial Linear Mixture MDPs

Neural Information Processing Systems

The interaction is usually modeled as a Markov decision process (MDP). Research on MDPs can be broadly divided into two lines based on the reward-generation mechanism. The first line of work [Jaksch et al., 2010, Azar et al., 2013, 2017, He et al., 2021] considers the