Scaling Laws for Hyperparameter Optimization
Neural Information Processing Systems
Hyperparameter optimization is an important subfield of machine learning that focuses on tuning the hyperparameters of a chosen algorithm to achieve peak performance. Many recent methods tackle hyperparameter optimization, but most do not exploit the dominant power-law nature of learning curves for Bayesian optimization. In this work, we propose Deep Power Laws (DPL), an ensemble of neural network models conditioned to yield predictions that follow a power-law scaling pattern. Our method dynamically decides which configurations to pause and train incrementally by making use of gray-box evaluations.
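To illustrate the core idea of power-law learning-curve extrapolation, here is a minimal sketch. It assumes a saturating parametric form f(t) = y_inf - a * t^(-b) and uses a simple least-squares fit as a stand-in for DPL's neural network ensemble; the function names, data, and budget values are illustrative, not the paper's actual implementation.

```python
# Minimal sketch: extrapolate a partial learning curve with a power law,
# then average several fits to mimic DPL's ensemble-based prediction.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, y_inf, a, b):
    """Saturating power law: performance approaches y_inf as budget t grows."""
    return y_inf - a * np.power(t, -b)

# Hypothetical validation accuracy over the first epochs of one configuration.
t_obs = np.arange(1, 8)
y_obs = np.array([0.52, 0.61, 0.66, 0.69, 0.71, 0.72, 0.73])

# Fit the power-law parameters from several random initializations and
# average the extrapolations (the ensemble members in the paper are neural
# networks; repeated curve fits are only a stand-in here).
rng = np.random.default_rng(0)
preds = []
for _ in range(5):
    p0 = rng.uniform(0.1, 1.0, size=3)
    try:
        params, _ = curve_fit(power_law, t_obs, y_obs, p0=p0, maxfev=10000)
        preds.append(power_law(50, *params))  # predict at the full budget
    except RuntimeError:
        continue  # skip initializations where the fit fails to converge

print(f"ensemble-mean predicted accuracy at epoch 50: {np.mean(preds):.3f}")
```

A gray-box optimizer would compare such extrapolated scores across configurations to decide which runs to pause and which to train further.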