Supplementary Materials: Training Stronger Baselines for Learning to Optimize
Tianlong Chen
L2O-DM-CL denotes the enhanced L2O-DM trained with our proposed curriculum learning technique. All learnable optimizers are trained for 5000 epochs. The results are presented in Figure A2, which reports the evaluation performance (log training loss) of our enhanced L2O and previous SOTAs; curves are the average of ten runs. We observe that the model trained by curriculum learning outperforms the two baselines (i.e., L2O-DM and L2O-DM-AUG).
Training Stronger Baselines for Learning to Optimize
Tianlong Chen, Weiyi Zhang, Jingyang Zhou, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang
Learning to optimize (L2O) has gained increasing attention since classical optimizers require laborious problem-specific design and hyperparameter tuning. However, there is a gap between the practical demand and the achievable performance of existing L2O models. Specifically, those learned optimizers are applicable to only a limited class of problems, and often exhibit instability. While many efforts have been devoted to designing more sophisticated L2O models, we argue for another orthogonal, under-explored theme: the training techniques for those L2O models. We show that even the simplest L2O model could have been trained much better. We first present a progressive training scheme that gradually increases the optimizer unroll length, to mitigate a well-known L2O dilemma of truncation bias (shorter unrolling) versus gradient explosion (longer unrolling). We further leverage off-policy imitation learning to guide the L2O learning, by taking reference to the behavior of analytical optimizers. Our improved training techniques are plugged into a variety of state-of-the-art L2O models, and immediately boost their performance, without making any change to their model structures. In particular, with our proposed techniques, one of the earliest and simplest L2O models can be trained to outperform the latest, more complicated L2O models on a number of tasks. Our results demonstrate a greater potential of L2O yet to be unleashed, and urge the community to rethink the recent progress. Our codes are publicly available at: https://github.com/VITA-Group/L2O-Training-Techniques.
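The progressive training scheme above can be sketched as a curriculum over the unroll length: train the learned optimizer on short unrolls first (low truncation cost, stable gradients), then lengthen the unroll stage by stage. The sketch below is a minimal illustration of that idea; the function name `unroll_schedule`, the growth factor, and the stage boundaries are our own illustrative assumptions, not the paper's exact hyperparameters.

```python
def unroll_schedule(stage, base_unroll=20, growth=2, max_unroll=160):
    """Hypothetical progressive-unrolling curriculum.

    Start from a short unroll (mitigating gradient explosion) and
    double it at each curriculum stage (reducing truncation bias),
    capped at max_unroll.
    """
    return min(base_unroll * growth ** stage, max_unroll)


def curriculum(num_stages, **kwargs):
    """Return the unroll length used at each curriculum stage."""
    return [unroll_schedule(s, **kwargs) for s in range(num_stages)]


if __name__ == "__main__":
    # A training loop would train the L2O model for some epochs at
    # each unroll length before moving to the next stage.
    print(curriculum(5))  # e.g. [20, 40, 80, 160, 160]
```

In a full training loop, each stage would reuse the weights learned at the previous (shorter) unroll length, so the optimizer is never asked to backpropagate through a long unroll from a cold start.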