
Collaborating Authors

 hutter


Bench 201

Neural Information Processing Systems

In recent years, research on Automated Machine Learning (AutoML) [1] has made great strides in the data-driven design of neural network architectures [2, 3] and training hyperparameters [4].




Well-tuned Simple Nets Excel on Tabular Datasets

Neural Information Processing Systems

We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
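The core idea above, combining several regularizers on an otherwise plain MLP, can be illustrated with a toy sketch. This is not the paper's implementation; it is a minimal hypothetical example showing two common "cocktail" ingredients, inverted dropout and weight decay, inside a single gradient step of a one-hidden-layer MLP:

```python
import numpy as np

# Illustrative sketch (not the paper's code): one SGD step for a tiny
# one-hidden-layer MLP that combines two regularizers from a "cocktail":
# dropout on the hidden layer and weight decay on all weights.

rng = np.random.default_rng(0)

def train_step(W1, W2, x, y, lr=0.01, weight_decay=1e-4, drop_p=0.2):
    # Forward pass: ReLU hidden layer, then inverted dropout.
    h = np.maximum(0.0, W1 @ x)
    mask = (rng.random(h.shape) >= drop_p) / (1.0 - drop_p)
    h_drop = h * mask
    pred = W2 @ h_drop
    err = pred - y  # gradient of 0.5 * squared error

    # Backward pass through the dropout mask and ReLU.
    gW2 = np.outer(err, h_drop)
    gh = (W2.T @ err) * mask * (h > 0)
    gW1 = np.outer(gh, x)

    # Weight decay adds lambda * W to each gradient before the update.
    W1 = W1 - lr * (gW1 + weight_decay * W1)
    W2 = W2 - lr * (gW2 + weight_decay * W2)
    return W1, W2, float((err ** 2).sum())

W1 = rng.normal(scale=0.1, size=(8, 4))
W2 = rng.normal(scale=0.1, size=(1, 8))
x, y = rng.normal(size=4), np.array([1.0])
losses = []
for _ in range(200):
    W1, W2, loss = train_step(W1, W2, x, y)
    losses.append(loss)
```

In practice such cocktails mix many more ingredients (e.g. data augmentation, snapshot ensembling, learning-rate schedules), and the paper's point is that tuning this mixture per dataset is what makes plain MLPs competitive.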



Learning to Mutate with Hypergradient Guided Population

Neural Information Processing Systems

To address the above challenges, we propose a novel hyperparameter mutation (HPM) scheduling algorithm in this study, which adopts a population based training framework to explicitly learn a trade-off (i.e., a mutation schedule) between using the hypergradient-guided local search and the mutation-driven global search.
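The local/global trade-off described above can be sketched on a toy problem. This is a hypothetical illustration, not the paper's HPM algorithm: here a hand-written schedule `p_mutate` (rather than a learned one) decides, per member and per round, between a hypergradient step and a random mutation, with PBT-style truncation copying the best member over the worst:

```python
import random

# Toy sketch (hypothetical, not the paper's method): each population member
# is updated either by a hypergradient-guided local step or by a
# mutation-driven global jump; a decaying schedule shifts from global
# exploration early to local exploitation late.

random.seed(0)

def loss(h):           # stand-in "validation loss" with optimum at h = 3
    return (h - 3.0) ** 2

def hypergradient(h):  # analytic gradient of the toy loss
    return 2.0 * (h - 3.0)

population = [random.uniform(-10, 10) for _ in range(8)]
for step in range(50):
    p_mutate = max(0.05, 1.0 - step / 25)  # explore early, exploit late
    for i, h in enumerate(population):
        if random.random() < p_mutate:
            population[i] = h + random.gauss(0, 2.0)    # global mutation
        else:
            population[i] = h - 0.1 * hypergradient(h)  # local hypergradient step
    # PBT-style truncation: the worst member is replaced by the best.
    population.sort(key=loss)
    population[-1] = population[0]

best = min(population, key=loss)
```

The paper's contribution is to *learn* this schedule instead of fixing it by hand; the sketch only shows why a schedule is needed at all, since pure mutation converges slowly and pure hypergradient descent can get stuck locally.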





Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization

Neural Information Processing Systems

Bayesian optimization (BO) is a popular approach to optimize expensive-to-evaluate black-box functions. A significant challenge in BO is to scale to high-dimensional parameter spaces while retaining sample efficiency. A solution considered in existing literature is to embed the high-dimensional space in a lower-dimensional manifold, often via a random linear embedding.
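The random linear embedding idea can be shown in a few lines. The sketch below is an illustration of the general REMBO-style construction, not the paper's exact method, and it substitutes random search in the low-dimensional space for the full BO loop: a 100-dimensional objective with low effective dimensionality is optimized through a random map y -> clip(A @ y) into the box [-1, 1]^100.

```python
import numpy as np

# Minimal sketch of a random linear embedding for a high-dimensional
# black-box objective (illustrative; a surrogate-free random search stands
# in for the Bayesian optimization loop).

rng = np.random.default_rng(1)
D, d = 100, 2  # ambient and embedding dimensions

def f(x):
    # Black-box objective that only depends on its first two coordinates,
    # i.e. it has low effective dimensionality.
    return (x[0] - 0.2) ** 2 + (x[1] + 0.4) ** 2

A = rng.normal(size=(D, d))  # random linear embedding matrix

def embed(y):
    # Map a low-dimensional point into the ambient box [-1, 1]^D.
    return np.clip(A @ y, -1.0, 1.0)

best_y, best_val = None, float("inf")
for _ in range(2000):
    y = rng.uniform(-1.0, 1.0, size=d)  # search only the d-dim subspace
    val = f(embed(y))
    if val < best_val:
        best_y, best_val = y, val
```

Because the objective varies along only two directions, a random 2-dimensional subspace suffices to reach good values with high probability; the clipping step is exactly the design choice whose distortions the paper re-examines.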