Goto

Collaborating Authors

Robust Approximate Optimization for Large Scale Planning Problems

AAAI Conferences

Developing scalable and adaptive algorithms for reasoning and acting under uncertainty is an important area of artificial intelligence. A large subclass of these problems can be formulated as Markov decision processes and is typically solved by Approximate Dynamic Programming (ADP). While ADP has recently gained traction in many domains, successful applications often require extensive parameter tuning to obtain a sufficiently small approximation error. The goal of my thesis is to develop ADP methods that reduce the need for such tuning. I focus in particular on Approximate Linear Programming (ALP), a type of ADP. ALP has a number of theoretical advantages over other approximate dynamic programming methods, but in practice it suffers from the same performance issues as other ADP algorithms, mostly due to a large approximation error. I analyze the approximation error and propose methods for mitigating it. First, I examine various linear program formulations and their effect on the approximation error. Second, ALP, like other ADP methods, involves sampling, which often contributes significantly to degradation in solution quality; I analyze the sampling error and propose methods for minimizing it. Finally, the representation used in the approximation plays a crucial role in performance, so I describe approaches to automatically tuning the representation in some common settings.
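
For reference, the generic approximate linear program for a discounted MDP (the standard textbook formulation, which may differ in detail from the variants examined in the thesis) replaces the value function with a linear combination of features $\phi$ weighted by $w$, with state-relevance weights $c$ and discount factor $\gamma$:

\[
\min_{w} \; \sum_{s} c(s)\, \phi(s)^{\top} w
\quad \text{s.t.} \quad
\phi(s)^{\top} w \;\ge\; r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, \phi(s')^{\top} w \quad \forall (s,a).
\]

In practice the constraint set over all state-action pairs is replaced by constraints at sampled pairs only, which is one source of the sampling error discussed above.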


Revisiting SGD with Increasingly Weighted Averaging: Optimization and Generalization Perspectives

arXiv.org Machine Learning

Stochastic gradient descent (SGD) has been widely studied in the literature from different angles and is commonly employed for solving many big-data machine learning problems. However, the averaging technique, which combines all iterative solutions into a single solution, is still under-explored. While some increasingly weighted averaging schemes have been considered in the literature, existing works are mostly restricted to strongly convex objective functions and to the convergence of the optimization error. It remains unclear how these averaging schemes affect the convergence of {\it both optimization error and generalization error} (two equally important components of testing error) for {\bf non-strongly convex objectives, including non-convex problems}. In this paper, we {\it fill the gap} by comprehensively analyzing increasingly weighted averaging on convex, strongly convex, and non-convex objective functions in terms of both optimization error and generalization error. In particular, we analyze a family of increasingly weighted averaging schemes in which the weight for the solution at iteration $t$ is proportional to $t^{\alpha}$ ($\alpha > 0$). We show how $\alpha$ affects the optimization error and the generalization error, and exhibit the trade-off caused by $\alpha$. Experiments demonstrate this trade-off and the effectiveness of polynomially increasing weighted averaging compared with other averaging schemes on a wide range of problems, including deep learning.
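
As a minimal sketch of the averaging scheme analyzed here (the interface and parameter names below are illustrative, not the authors' code), an SGD loop can maintain the $t^{\alpha}$-weighted average of its iterates incrementally:

import numpy as np

def sgd_poly_average(grad_fn, x0, num_iters, step_size, alpha=1.0):
    # SGD that also maintains a running average of the iterates with weights
    # proportional to t**alpha; grad_fn(x, t) is assumed to return a
    # stochastic gradient at iterate x (illustrative interface).
    x = np.asarray(x0, dtype=float).copy()
    x_avg = x.copy()
    weight_sum = 0.0
    for t in range(1, num_iters + 1):
        g = grad_fn(x, t)
        x = x - step_size * g                  # plain SGD update
        w_t = float(t) ** alpha                # weight grows polynomially in t
        weight_sum += w_t
        # incremental form of sum_t w_t * x_t / sum_t w_t
        x_avg = x_avg + (w_t / weight_sum) * (x - x_avg)
    return x_avg

Larger values of alpha put more mass on later iterates, which is the knob behind the optimization/generalization trade-off discussed in the abstract.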


Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back

arXiv.org Machine Learning

In stochastic convex optimization the goal is to minimize a convex function $F(x) \doteq {\mathbf E}_{{\mathbf f}\sim D}[{\mathbf f}(x)]$ over a convex set $\cal K \subset {\mathbb R}^d$, where $D$ is some unknown distribution and each $f(\cdot)$ in the support of $D$ is convex over $\cal K$. The optimization is commonly based on i.i.d.~samples $f^1,f^2,\ldots,f^n$ from $D$. A standard approach to such problems is empirical risk minimization (ERM), which optimizes $F_S(x) \doteq \frac{1}{n}\sum_{i\leq n} f^i(x)$. Here we consider the question of how many samples are necessary for ERM to succeed, and the closely related question of uniform convergence of $F_S$ to $F$ over $\cal K$. We demonstrate that in the standard $\ell_p/\ell_q$ setting of Lipschitz-bounded functions over a $\cal K$ of bounded radius, ERM requires a sample size that scales linearly with the dimension $d$. This nearly matches standard upper bounds and improves on the $\Omega(\log d)$ dependence proved for the $\ell_2/\ell_2$ setting by Shalev-Shwartz et al. (2009). In stark contrast, these problems can be solved using a dimension-independent number of samples in the $\ell_2/\ell_2$ setting and with $\log d$ dependence in the $\ell_1/\ell_\infty$ setting using other approaches. We further show that our lower bound applies even if the functions in the support of $D$ are smooth and efficiently computable, and even if an $\ell_1$ regularization term is added. Finally, we demonstrate that for a more general class of bounded-range (but not Lipschitz-bounded) stochastic convex programs, an infinite gap appears already in dimension 2.
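
To make the setup concrete, the following is a small sketch of ERM in the $\ell_2/\ell_2$ setting for a toy family of Lipschitz convex losses; the specific loss, constraint set, and solver are illustrative assumptions, not the constructions used in the paper's lower bound:

import numpy as np

def erm_projected_subgradient(A, b, num_iters=2000, radius=1.0):
    """Minimize F_S(x) = (1/n) * sum_i |a_i^T x - b_i| over the Euclidean ball
    of the given radius via projected subgradient descent, returning the best
    iterate found."""
    n, d = A.shape
    x = np.zeros(d)
    x_best, f_best = x.copy(), np.inf
    for t in range(1, num_iters + 1):
        residual = A @ x - b
        g = A.T @ np.sign(residual) / n          # subgradient of F_S at x
        x = x - (radius / np.sqrt(t)) * g        # subgradient step
        norm = np.linalg.norm(x)
        if norm > radius:                        # project back onto the ball
            x = x * (radius / norm)
        f_val = np.abs(A @ x - b).mean()
        if f_val < f_best:
            x_best, f_best = x.copy(), f_val
    return x_best, f_best

# Toy usage: n i.i.d. samples f^i, each determined by a row (a_i, b_i).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
x_hat, empirical_risk = erm_projected_subgradient(A, b)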


An Algorithmic Framework for Computing Validation Performance Bounds by Using Suboptimal Models

arXiv.org Machine Learning

Practical model-building processes are often time-consuming because many different models must be trained and validated. In this paper, we introduce a novel algorithm for computing lower and upper bounds on model validation errors without actually training the model itself. A key idea behind our algorithm is to use side information available from a suboptimal model. If a reasonably good suboptimal model is available, our algorithm can compute lower and upper bounds on many useful quantities for making inferences about the unknown target model. We demonstrate the advantage of our algorithm in the context of model selection for regularized learning problems.
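
One standard way such bounds can arise for regularized learning is sketched below; the specific bound, names, and interface are illustrative assumptions about the general idea, not the paper's exact algorithm:

import numpy as np

def prediction_bounds_from_suboptimal(w_subopt, X_val, eps_gap, lam):
    """For a lam-strongly convex regularized objective, a suboptimal solution
    whose suboptimality (e.g., duality gap) is at most eps_gap confines the
    unknown optimum w* to the ball ||w* - w_subopt||_2 <= sqrt(2*eps_gap/lam).
    By Cauchy-Schwarz, this yields elementwise lower/upper bounds on the
    optimal validation predictions X_val @ w*."""
    radius = np.sqrt(2.0 * eps_gap / lam)
    center = X_val @ w_subopt
    slack = radius * np.linalg.norm(X_val, axis=1)
    return center - slack, center + slack

Interval bounds on the optimal predictions can then be propagated into lower and upper bounds on any validation error that is monotone in each prediction, which is roughly the kind of inference about the unknown target model that the abstract describes.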


Why Does Stagewise Training Accelerate Convergence of Testing Error Over SGD?

arXiv.org Machine Learning

The stagewise training strategy is commonly used for learning neural networks: it runs a stochastic algorithm (e.g., SGD) starting with a relatively large step size (a.k.a. learning rate) and geometrically decreases the step size after a number of iterations. It has been observed that stagewise SGD converges much faster than vanilla SGD with a polynomially decaying step size, in terms of both training error and testing error. However, how to explain this phenomenon has been largely ignored by existing studies. This paper provides some theoretical evidence for explaining this faster convergence. In particular, we consider the stagewise training strategy for minimizing empirical risk that satisfies the Polyak-Łojasiewicz condition, which has been observed/proved for neural networks and also holds for a broad family of convex functions. For convex loss functions and "nicely behaved" non-convex loss functions that are close to a convex function (namely, weakly convex functions), we establish faster convergence of stagewise training than vanilla SGD under the same condition, on both training error and testing error. Indeed, the proposed algorithm has additional favorable features that come with theoretical guarantees for the considered non-convex optimization problems, including the use of explicit algorithmic regularization at each stage, the use of the stagewise averaged solution for restarting, and returning the last stagewise averaged solution as the final solution. To differentiate it from the commonly used stagewise SGD, we refer to our algorithm as the stagewise regularized training algorithm, or Start. Of independent interest, the proved testing error bounds for a family of non-convex loss functions are dimensionality- and norm-independent.
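
A compact sketch in the spirit of the stagewise regularized training idea follows; all parameter names and defaults are illustrative assumptions rather than the authors' exact Start algorithm:

import numpy as np

def start_stagewise_sgd(grad_fn, x0, num_stages=5, iters_per_stage=1000,
                        eta0=0.1, decay=0.5, gamma=1.0):
    """Each stage runs SGD with a fixed step size on the loss plus an explicit
    quadratic regularizer centered at the previous stage's averaged solution;
    the step size is decreased geometrically between stages, each stage
    restarts from the stagewise average, and the last stagewise average is
    returned as the final solution."""
    x_ref = np.asarray(x0, dtype=float).copy()
    eta = eta0
    for stage in range(num_stages):
        x = x_ref.copy()
        x_sum = np.zeros_like(x)
        for _ in range(iters_per_stage):
            g = grad_fn(x)                 # stochastic gradient of the loss
            g = g + (x - x_ref) / gamma    # gradient of ||x - x_ref||^2 / (2*gamma)
            x = x - eta * g
            x_sum += x
        x_ref = x_sum / iters_per_stage    # restart from the stagewise averaged solution
        eta *= decay                       # geometric step-size decrease across stages
    return x_ref                           # last stagewise averaged solution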