Beyond Worst-case: A Probabilistic Analysis of Affine Policies in Dynamic Optimization

Neural Information Processing Systems

Affine policies (or control) are widely used as a solution approach in dynamic optimization, where computing an optimal adjustable solution is usually intractable. While the worst-case performance of affine policies can be quite poor, their empirical performance is observed to be near-optimal for a large class of problem instances. For instance, in the two-stage dynamic robust optimization problem with linear covering constraints and uncertain right-hand side, the worst-case approximation bound for affine policies is $O(\sqrt m)$, which is also tight (see Bertsimas and Goyal (2012)), whereas the observed empirical performance is near-optimal. In this paper, we aim to address this stark contrast between the worst-case and the empirical performance of affine policies. In particular, we show that affine policies give a good approximation for the two-stage adjustable robust optimization problem with high probability on random instances where the constraint coefficients are generated i.i.d.
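To make the notion of an affine policy concrete: in this setting the second-stage decision is restricted to an affine function of the uncertain right-hand side, $y(h) = Ph + q$. The toy sketch below (all sizes and the choice $P = B^{-1}$, $q = 0$ are illustrative assumptions, not the paper's construction) checks that such a rule satisfies covering constraints $By(h) \geq h$ for sampled uncertainty realizations.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4                                   # hypothetical number of covering constraints
B = rng.random((m, m)) + np.eye(m)      # i.i.d. coefficients; identity added so B is invertible

# A toy affine policy y(h) = P h + q. Choosing P = B^{-1}, q = 0 makes
# the covering constraints B y(h) >= h hold with equality for every h.
P = np.linalg.inv(B)
q = np.zeros(m)

for _ in range(100):                    # sample right-hand sides from the box [0, 1]^m
    h = rng.random(m)
    y = P @ h + q
    assert np.allclose(B @ y, h)        # constraints satisfied for this realization
```

This trivial choice of $P$ only illustrates the functional form; the paper's question is how much cost such a restricted policy sacrifices relative to the fully adjustable optimum on random instances.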



Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension

Neural Information Processing Systems

While initial breakthroughs on the convergence of gradient optimization in neural networks (Li & Liang, 2018; Du et al., 2019a; Allen-Zhu et al., 2019) required unrealistic conditions on the



Context-lumpable stochastic bandits

Neural Information Processing Systems

Consider a recommendation platform that interacts with a finite set of users in an online fashion. Users arrive at the platform and receive a recommendation.