Tu, Wei-Wei
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions
Zhang, Lijun, Wang, Guanghui, Tu, Wei-Wei, Zhou, Zhi-Hua
To deal with changing environments, a new performance measure, adaptive regret, defined as the maximum static regret over any interval, has been proposed in online learning. Under the setting of online convex optimization, several algorithms have been successfully developed to minimize the adaptive regret. However, existing algorithms lack universality in the sense that they can only handle one type of convex function and need a priori knowledge of parameters. By contrast, there exist universal algorithms, such as MetaGrad, that attain optimal static regret for multiple types of convex functions simultaneously. Along this line of research, this paper presents the first universal algorithm for minimizing the adaptive regret of convex functions. Specifically, we borrow from MetaGrad the idea of maintaining multiple learning rates to handle the uncertainty of functions, and utilize the technique of sleeping experts to capture changing environments. In this way, our algorithm automatically adapts to the property of the functions (convex, exponentially concave, or strongly convex), as well as the nature of the environments (stationary or changing). As a by-product, it also allows the type of functions to switch between rounds.
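For concreteness, the adaptive regret discussed above can be written, following the standard definition in the online convex optimization literature (the notation here is ours and may differ from the paper's), as the worst static regret over any contiguous interval:

    \mathrm{A\text{-}Regret}(T) = \max_{[r,s] \subseteq [1,T]} \left( \sum_{t=r}^{s} f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x}) \right)

where f_t is the loss revealed at round t, x_t is the learner's decision, and X is the feasible domain.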
Differentiable Neural Architecture Search via Proximal Iterations
Yao, Quanming, Xu, Ju, Tu, Wei-Wei, Zhu, Zhanxing
Neural architecture search (NAS) has recently attracted much research attention because of its ability to identify better architectures than handcrafted ones. However, many NAS methods, which optimize the search process in a discrete search space, need many GPU days to converge. Recently, DARTS, which constructs a differentiable search space and optimizes it by gradient descent, has been shown to obtain high-performance architectures while reducing the search time to several days. However, DARTS is still slow, as it updates an ensemble of all operations and keeps only one after convergence. Besides, DARTS can converge to inferior architectures due to the strong correlation among operations. In this paper, we propose a new differentiable Neural Architecture Search method based on Proximal gradient descent (denoted as NASP). Different from DARTS, NASP reformulates the search process as an optimization problem with a constraint that only one operation is allowed to be updated during forward and backward propagation. Since the constraint is hard to deal with directly, we propose a new algorithm inspired by proximal iterations to solve it. Experiments on various tasks demonstrate that NASP can obtain high-performance architectures with a tenfold speedup in computation time over DARTS.
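As a rough illustration of the constrained update described above (a sketch under our own assumptions, not the authors' implementation; the helper name prox_one_hot and the toy weights are hypothetical), the proximal step can be viewed as projecting the relaxed operation weights on an edge onto the set of one-hot vectors, so that only one operation stays active in forward and backward propagation:

    import numpy as np

    def prox_one_hot(alpha):
        # Euclidean projection of relaxed operation weights onto the set of
        # one-hot vectors: keep only the operation with the largest weight.
        out = np.zeros_like(alpha)
        out[np.argmax(alpha)] = 1.0
        return out

    # Toy example: relaxed weights over four candidate operations on one edge.
    alpha = np.array([0.10, 0.70, 0.15, 0.05])
    print(prox_one_hot(alpha))  # -> [0. 1. 0. 0.]; only one operation is updated

The gradient step is then taken on the relaxed weights, while the projection enforces the one-active-operation constraint, which is the intuition behind the proximal-iteration view.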
AutoML @ NeurIPS 2018 challenge: Design and Results
Escalante, Hugo Jair, Tu, Wei-Wei, Guyon, Isabelle, Silver, Daniel L., Viegas, Evelyne, Chen, Yuqiang, Dai, Wenyuan, Yang, Qiang
Machine learning has achieved great successes in online advertising, recommender systems, financial market analysis, computer vision, computational linguistics, bioinformatics and many other fields. However, its success crucially relies on human machine learning experts, as humans are involved, to some extent, in all stages of system design. In fact, it is still common for humans to make critical decisions in aspects such as: converting a real-world problem into a machine learning one; data gathering, formatting and preprocessing; feature engineering; selecting or designing model architectures; hyper-parameter tuning; assessment of model performance; and deploying online ML systems, among others.