Optuna vs Code Llama: Are LLMs a New Paradigm for Hyperparameter Tuning?
Roman Kochnev, Arash Torabi Goodarzi, Zofia Antonina Bentyn, Dmitry Ignatov, Radu Timofte
Optimal hyperparameter selection is critical for maximizing the performance of neural networks in computer vision, particularly as architectures grow more complex. This work explores the use of large language models (LLMs) for hyperparameter optimization by parameter-efficiently fine-tuning Code Llama with LoRA. The resulting model produces accurate and computationally efficient hyperparameter recommendations across a wide range of vision architectures. Unlike traditional methods such as Optuna, which rely on resource-intensive trial-and-error search, our approach achieves competitive or superior Root Mean Square Error (RMSE) while substantially reducing computational overhead. Importantly, the evaluated models span image-centric tasks such as classification, detection, and segmentation, which are fundamental components of many image manipulation pipelines, including enhancement, restoration, and style transfer. Our results demonstrate that LLM-based optimization not only rivals established Bayesian methods such as Tree-structured Parzen Estimators (TPE) but also accelerates tuning for real-world applications requiring perceptual quality and low-latency processing.
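For context, the two techniques contrasted in the abstract can be sketched briefly in Python. The first snippet is a minimal illustration of the Optuna/TPE baseline, not the authors' code: the search space, the 50-trial budget, and the synthetic surrogate objective are assumptions standing in for a real train-and-validate loop.

```python
import math

import optuna


def objective(trial: optuna.Trial) -> float:
    # Hypothetical search space; the paper's exact spaces are not given in the abstract.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    weight_decay = trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64, 128])
    # Synthetic surrogate standing in for training a vision model and
    # returning validation RMSE; keeps the sketch runnable without a GPU.
    return (
        (math.log10(lr) + 3.0) ** 2
        + (math.log10(weight_decay) + 4.0) ** 2
        + abs(batch_size - 64) / 64.0
    )


study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.TPESampler(seed=0),  # Tree-structured Parzen Estimator
)
study.optimize(objective, n_trials=50)  # each trial = one full training run in practice
print("best hyperparameters:", study.best_params)
```

The LLM side can be approximated with the Hugging Face peft library; the rank, scaling factor, and target modules below are illustrative placeholders, not the paper's reported configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical LoRA setup for Code Llama; adapter hyperparameters are assumptions.
base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
lora = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,                        # scaling factor (illustrative)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (illustrative)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```

Once fine-tuned, such a model emits hyperparameter recommendations in a single forward pass, which is the source of the overhead reduction the abstract claims relative to trial-based search.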
arXiv.org Artificial Intelligence
Sep-30-2025
- Country:
  - Asia
  - Europe
    - Germany (0.04)
    - Switzerland (0.04)
  - North America
    - Canada > Ontario
      - Toronto (0.04)
    - United States
      - Georgia > Fulton County
        - Atlanta (0.04)
      - Minnesota > Hennepin County
        - Minneapolis (0.14)
- Genre:
  - Research Report > New Finding (1.00)