Provably tuning the ElasticNet across instances

Neural Information Processing Systems

An important unresolved challenge in the theory of regularization is to set the regularization coefficients of popular techniques like the ElasticNet with general provable guarantees. We consider the problem of tuning the regularization parameters of Ridge regression, LASSO, and the ElasticNet across multiple problem instances, a setting that encompasses both cross-validation and multi-task hyperparameter optimization. We obtain a novel structural result for the ElasticNet which characterizes the loss as a function of the tuning parameters as a piecewise-rational function with algebraic boundaries. We use this to bound the structural complexity of the regularized loss functions and show generalization guarantees for tuning the ElasticNet regression coefficients in the statistical setting. We also consider the more challenging online learning setting, where we show vanishing average expected regret relative to the optimal parameter pair. We further extend our results to tuning classification algorithms obtained by thresholding regression fits regularized by Ridge, LASSO, or ElasticNet. Our results are the first general learning-theoretic guarantees for this important class of problems that avoid strong assumptions on the data distribution. Furthermore, our guarantees hold for both validation and popular information criterion objectives.
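The paper's results are theoretical, but the multi-instance tuning setting it analyzes is easy to make concrete. Below is a minimal sketch, assuming scikit-learn's ElasticNet with its (alpha, l1_ratio) parametrization (a reparametrization of the paper's two regularization coefficients) and synthetic data: a single parameter pair is chosen to minimize the average validation loss across several problem instances, as in the statistical setting described above. The grid search stands in for the paper's algorithms and is purely illustrative.

```python
# Sketch of multi-instance ElasticNet tuning: pick one (alpha, l1_ratio)
# pair that works well on average across several regression instances.
# Data, grid, and the grid-search procedure are illustrative assumptions.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Several related problem instances (e.g., cross-validation folds or tasks).
instances = []
for seed in range(5):
    X, y = make_regression(n_samples=200, n_features=30, noise=10.0,
                           random_state=seed)
    instances.append(train_test_split(X, y, test_size=0.3, random_state=seed))

# Grid over the two ElasticNet tuning parameters.
alphas = np.logspace(-3, 1, 20)
l1_ratios = np.linspace(0.05, 1.0, 10)

best_params, best_loss = None, np.inf
for alpha in alphas:
    for l1_ratio in l1_ratios:
        # Average validation loss of this parameter pair across instances.
        losses = []
        for X_tr, X_te, y_tr, y_te in instances:
            model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=5000)
            model.fit(X_tr, y_tr)
            losses.append(np.mean((model.predict(X_te) - y_te) ** 2))
        avg = np.mean(losses)
        if avg < best_loss:
            best_params, best_loss = (alpha, l1_ratio), avg

print("best (alpha, l1_ratio):", best_params, "avg validation MSE:", best_loss)
```

Swapping the validation loss for an information criterion gives the other family of objectives the abstract mentions.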


Dam Volume Prediction Model Development Using ML Algorithms

Retief, Hugo, Andarcia, Mariangel Garcia, Dickens, Chris, Ghosh, Surajit

arXiv.org Artificial Intelligence

However, accurate predictive models are essential for their operation, especially when dealing with fluctuating environmental conditions and increased demand. Traditional hydrological models often struggle to capture the complexity of such systems. The advent of machine learning (ML) offers new opportunities to enhance predictive capabilities by utilizing large datasets and advanced algorithms (Maity et al., 2024). This work aims to develop a machine-learning model that predicts dam volume using features such as water area, physical dam attributes, and other characteristics, including full supply capacity. Multiple models were built iteratively, each incorporating additional features, to improve predictive accuracy and enable performance comparison. Accurately monitoring reservoir storage is challenging since in-situ data are often unavailable; therefore, remote sensing observations of water extent and height, combined with data-driven models, are increasingly used for reservoir volume estimation (Ghosh et al., 2014; Hou et al., 2021). This study seeks to enhance the precision of dam volume estimates, providing a valuable tool for decision-makers in water management.
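The abstract does not specify the learning algorithm or the exact feature schema, so the following is only a hypothetical illustration of the described workflow: tabular features such as water area, a physical dam attribute, and full supply capacity feeding a regression model that predicts volume. The feature names, the random-forest choice, and the synthetic data are all assumptions.

```python
# Illustrative sketch only: feature names, the model choice, and the data
# are assumptions; synthetic values stand in for the real dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(42)
n = 500

# Hypothetical features mirroring those named in the abstract.
df = pd.DataFrame({
    "water_area_km2": rng.uniform(0.1, 50.0, n),        # remotely sensed extent
    "dam_height_m": rng.uniform(5.0, 120.0, n),         # physical attribute
    "full_supply_capacity_mcm": rng.uniform(1, 2000, n),
})
# Synthetic target: volume grows with area and capacity, plus noise.
df["volume_mcm"] = (0.4 * df["water_area_km2"] * df["dam_height_m"]
                    + 0.2 * df["full_supply_capacity_mcm"]
                    + rng.normal(0, 20, n))

X = df.drop(columns="volume_mcm")
y = df["volume_mcm"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("MAE (million m^3):", mean_absolute_error(y_te, model.predict(X_te)))
```

In practice, the water-extent feature would come from remote sensing observations rather than synthetic draws, as the abstract notes.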

