Organization. In this supplementary file, we provide in-depth descriptions of the materials that are not covered in the main paper and report additional experimental results. The document is organized as follows: Section A reviews related work.

A Related Work

Neural Architecture Search (NAS) was introduced to ease the process of manually designing complex neural networks. Early NAS efforts [1] employed a brute-force approach, training candidate architectures and using their accuracy as a proxy for discovering superior designs. One-shot NAS methods [5, 6, 7] further reduced this cost by training large supernetworks and identifying high-accuracy subnetworks, often generated from pre-trained models. Nevertheless, as search spaces expand with architectural innovations [8, 9], more efficient methods are necessary for predicting neural network accuracy in vast design spaces. Recent mathematical programming (MP) based NAS methods [10, 11] are noteworthy, as they cast multi-objective NAS problems as mathematical programs that can be handled by standard solvers.
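
To make the MP-based framing concrete, a generic hardware-constrained NAS problem can be written as a constrained optimization over a discrete design space. This is an illustrative sketch under generic notation, not the exact formulation of [10, 11]; $f_{\mathrm{acc}}$, $g_{\mathrm{lat}}$, and the budget $T$ are placeholder symbols:

% Illustrative sketch: multi-objective NAS cast as a mathematical program.
% f_acc (predicted accuracy), g_lat (predicted latency), and T (latency
% budget) are placeholder symbols, not the notation of [10, 11].
\begin{equation*}
\begin{aligned}
\max_{a \in \{0,1\}^{n}} \quad & f_{\mathrm{acc}}(a) && \text{(predicted accuracy of architecture encoding } a\text{)} \\
\text{s.t.} \quad & g_{\mathrm{lat}}(a) \le T && \text{(hardware latency budget)}
\end{aligned}
\end{equation*}

When $f_{\mathrm{acc}}$ and $g_{\mathrm{lat}}$ are linear (or linearized) in the binary encoding $a$, the problem becomes an integer linear program that off-the-shelf MILP solvers can handle directly, which is what makes the MP view attractive for large design spaces.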
