Reviews: Transfer Learning with Neural AutoML

Neural Information Processing Systems

This paper applies both multi-task training and transfer learning to AutoML. It extends the ideas of the Neural Architecture Search (NAS) technique (Barret Zoph and Quoc V. Le). The authors maintain the two-layer solution, in which one network, the "controller", chooses the architectural parameters of a "child" network that is trained to solve the target task. The performance of the child network is fed back to the controller network to influence its subsequent choices. The novelty of this paper is in the way this two-layer solution is used.
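The controller/child feedback loop described in the review can be sketched in miniature. The paper's controller is an RNN trained with policy gradients; here a simple score table with a bandit-style update stands in for it, so the choices, reward function, and update rule below are illustrative assumptions only.

```python
import random

# Hypothetical sketch of the two-layer loop: a "controller" picks
# architectural parameters, a "child" is evaluated, and the child's
# performance is fed back to bias the controller's next choices.
random.seed(0)
CHOICES = {"units": [32, 64, 128], "layers": [1, 2, 3]}

def child_reward(arch):
    # Stand-in for training the child network and reporting its accuracy.
    # Pretend 64 units and 2 layers is the optimal architecture.
    return 1.0 - abs(arch["units"] - 64) / 64 - abs(arch["layers"] - 2)

scores = {k: [0.0] * len(v) for k, v in CHOICES.items()}

for step in range(300):
    # Controller: sample one option per decision, biased toward high scores,
    # with uniform noise providing exploration.
    arch, picked = {}, {}
    for k, opts in CHOICES.items():
        i = max(range(len(opts)), key=lambda j: scores[k][j] + random.random())
        arch[k], picked[k] = opts[i], i
    r = child_reward(arch)                     # evaluate the child network
    for k, i in picked.items():                # feed performance back
        scores[k][i] += 0.2 * (r - scores[k][i])

best = {k: CHOICES[k][max(range(len(v)), key=v.__getitem__)]
        for k, v in scores.items()}
```

After a few hundred simulated child evaluations, the controller's scores concentrate on the best-performing options, which is the essential dynamic the RNN controller realises at scale.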


Transfer Learning with Neural AutoML

Wong, Catherine, Houlsby, Neil, Lu, Yifeng, Gesmundo, Andrea

Neural Information Processing Systems

We reduce the computational cost of Neural AutoML with transfer learning. AutoML relieves human effort by automating the design of ML algorithms. Neural AutoML has become popular for the design of deep learning architectures; however, this method has a high computational cost. To address this, we propose Transfer Neural AutoML, which uses knowledge from prior tasks to speed up network design. We extend RL-based architecture search methods to support parallel training on multiple tasks and then transfer the search strategy to new tasks. On language and image classification data, Transfer Neural AutoML reduces convergence time over single-task training by over an order of magnitude on many tasks.
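The warm start the abstract describes can be illustrated with a toy search. Controller preferences (again a score table standing in for the paper's RNN controller) are learned across prior tasks sampled in parallel, then copied to initialise the search on a new task. The option values, reward function, and bandit-style update are illustrative assumptions, not the paper's method.

```python
import random

# Hypothetical sketch of Transfer Neural AutoML's warm start: pretrain
# controller scores on multiple tasks, then reuse them on a new task.
random.seed(0)
OPTIONS = [16, 32, 64, 128]                    # candidate hidden sizes

def reward(units, best_units):
    # Stand-in for a child network's validation accuracy on one task.
    return max(0.0, 1.0 - abs(units - best_units) / best_units)

def run_controller(scores, tasks, steps):
    """Each step samples a task (multi-task training in miniature), picks an
    option biased by score plus exploration noise, and updates in place."""
    for _ in range(steps):
        task_best = random.choice(tasks)
        i = max(range(len(OPTIONS)),
                key=lambda j: scores[j] + random.random())
        scores[i] += 0.2 * (reward(OPTIONS[i], task_best) - scores[i])
    return scores

# Multi-task pretraining on two related prior tasks, then transfer: the new
# search starts from the pretrained scores rather than from scratch.
pretrained = run_controller([0.0] * len(OPTIONS), tasks=[64, 128], steps=300)
transferred = run_controller(list(pretrained), tasks=[128], steps=60)
best_units = OPTIONS[max(range(len(OPTIONS)), key=transferred.__getitem__)]
```

Because the pretrained scores already rank the plausible options for related tasks, the transferred search needs far fewer child evaluations, which is the mechanism behind the order-of-magnitude speedup claimed above.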