A survey on multi-objective hyperparameter optimization algorithms for Machine Learning
Alejandro Morales-Hernández, Inneke Van Nieuwenhuyse, Sebastian Rojas Gonzalez
arXiv.org Artificial Intelligence
Hyperparameter optimization (HPO) is a necessary step to ensure the best possible performance of Machine Learning (ML) algorithms. Several methods have been developed to perform HPO; most of these focus on optimizing a single performance measure (usually an error-based measure), and the literature on such single-objective HPO problems is vast. Recently, though, algorithms have appeared that focus on optimizing multiple conflicting objectives simultaneously. This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms, distinguishing between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both. We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
Dec-8-2021
- Genre:
  - Overview (1.00)
  - Research Report > New Finding (0.93)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Evolutionary Systems (1.00)
      - Neural Networks > Deep Learning (1.00)
      - Performance Analysis > Accuracy (1.00)
      - Statistical Learning (1.00)
      - Natural Language > Machine Translation (0.92)
    - Representation & Reasoning
      - Agents (0.93)
      - Optimization (1.00)
      - Search (1.00)
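The abstract contrasts single-objective HPO with optimizing multiple conflicting objectives, and mentions quality metrics for comparing multi-objective procedures. As a hedged, minimal sketch (not code from the survey itself), the snippet below implements Pareto dominance and a simple 2-D hypervolume indicator, one widely used quality metric, assuming both objectives are minimized and using hypothetical candidate values:

```python
# Hedged sketch: Pareto dominance and a 2-D hypervolume indicator,
# two building blocks of multi-objective HPO (objectives assumed minimized).
# The candidate values below are hypothetical, not taken from the survey.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def hypervolume_2d(front, ref):
    """Area dominated by a 2-D minimization front, bounded by reference point ref."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):  # ascending in objective 1, hence descending in objective 2
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

# Hypothetical HPO outcomes: (validation error, inference time in ms)
candidates = [(0.10, 5.0), (0.08, 9.0), (0.12, 4.0), (0.08, 12.0), (0.15, 3.0)]
front = pareto_front(candidates)  # (0.08, 12.0) is dominated by (0.08, 9.0)
```

A larger hypervolume (relative to a fixed reference point) indicates a front that is closer to the ideal trade-off in both objectives, which is why it is a common basis for comparing multi-objective optimizers.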