FNAS: Uncertainty-Aware Fast Neural Architecture Search
Jihao Liu, Ming Zhang, Yangting Sun, Boxiao Liu, Guanglu Song, Yu Liu, Hongsheng Li
Reinforcement learning (RL)-based neural architecture search (NAS) generally converges more reliably than gradient-based approaches, yet demands far greater computational resources because of the rollout bottleneck: exhaustively training each sampled architecture on the proxy task. In this paper, we propose a general pipeline to accelerate the convergence of both the rollout process and the RL process in NAS. It is motivated by the observation that both architecture knowledge and parameter knowledge can be transferred between different search processes and even different tasks. We first introduce an uncertainty-aware critic (value function) into Proximal Policy Optimization (PPO) [27] to exploit the architecture knowledge from previous search processes, which stabilizes training and reduces the search time by a factor of 4. In addition, an architecture knowledge pool together with a block similarity function is proposed to utilize parameter knowledge, reducing the search time by a factor of 2. To the best of our knowledge, this is the first method to introduce a block-level weight-sharing scheme in RL-based NAS. The block similarity function guarantees a 100% hit ratio with strict fairness [5]. Moreover, we show that an off-policy correction factor applied to the "replay buffer" in the RL optimization can further reduce the search time by half. Experiments on the Mobile Neural Architecture Search (MNAS) [30] search space show that the proposed Fast Neural Architecture Search (FNAS) accelerates the standard RL-based NAS process by 10x (e.g., from 20,000 to 2,000 GPU hours for MNAS) and achieves better performance on various vision tasks.
arXiv.org Artificial Intelligence
May-27-2021
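
The abstract describes the uncertainty-aware critic only at a high level. As an illustration of the general idea, below is a minimal sketch (assuming PyTorch) of one common way to make a PPO value head uncertainty-aware: the critic predicts both a reward estimate and a log-variance, is trained with a Gaussian negative log-likelihood, and the predicted uncertainty can be used to down-weight value estimates inherited from previous searches. The names (UncertaintyAwareCritic, critic_loss, confidence_weight) are hypothetical and this is not the paper's exact formulation.

import torch
import torch.nn as nn

class UncertaintyAwareCritic(nn.Module):
    # Value head predicting a reward estimate and a log-variance for
    # an encoded architecture (hypothetical sketch, not the paper's code).
    def __init__(self, state_dim, hidden_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, hidden_dim), nn.ReLU())
        self.mean_head = nn.Linear(hidden_dim, 1)     # value estimate
        self.logvar_head = nn.Linear(hidden_dim, 1)   # predictive uncertainty

    def forward(self, arch_encoding):
        h = self.backbone(arch_encoding)
        return self.mean_head(h), self.logvar_head(h)

def critic_loss(mean, logvar, reward):
    # Gaussian negative log-likelihood: errors are penalized more heavily
    # when the critic claims high confidence (small predicted variance).
    return 0.5 * (torch.exp(-logvar) * (reward - mean) ** 2 + logvar).mean()

def confidence_weight(logvar):
    # Down-weight value estimates where the transferred critic is uncertain,
    # e.g. on architectures unlike those seen in previous searches.
    return torch.exp(-logvar).clamp(max=1.0)

In a transfer setting, a critic pre-trained on a previous search could be reused directly, with confidence_weight deciding how much each of its estimates should influence the new search versus freshly measured proxy-task rewards.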