Regularization Learning Networks: Deep Learning for Tabular Datasets
–Neural Information Processing Systems
Despite their impressive performance, Deep Neural Networks (DNNs) typically underperform Gradient Boosting Trees (GBTs) on many tabular-dataset learning tasks. We propose that applying a different regularization coefficient to each weight might boost the performance of DNNs by allowing them to make more use of the more relevant inputs. However, this will lead to an intractable number of hyperparameters. Here, we introduce Regularization Learning Networks (RLNs), which overcome this challenge by introducing an efficient hyperparameter tuning scheme which minimizes a new Counterfactual Loss. Our results show that RLNs significantly improve DNNs on tabular datasets, and achieve comparable results to GBTs, with the best performance achieved with an ensemble that combines GBTs and RLNs. RLNs produce extremely sparse networks, eliminating up to 99.8% of the network edges.
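To make the core idea concrete, here is a minimal, hedged sketch (not the authors' implementation) of what a per-weight regularization term looks like: each weight w_i gets its own coefficient exp(lambda_i), so the number of regularization hyperparameters grows with the number of weights, which is exactly the tuning problem the paper's Counterfactual Loss scheme is designed to handle. The function name and the log-scale parameterization are illustrative assumptions.

```python
import math

def regularized_loss(weights, lambdas, data_loss):
    """Total loss: data loss plus a per-weight L1 penalty.

    Each weight w_i is penalized by its own coefficient exp(lambda_i)
    (log-scale keeps the coefficient positive). With one lambda per
    weight, grid-searching these hyperparameters is intractable, which
    motivates learning them instead, as RLNs propose.
    """
    penalty = sum(math.exp(lam) * abs(w) for w, lam in zip(weights, lambdas))
    return data_loss + penalty

# Toy usage: three weights, each with a different regularization strength.
weights = [0.5, -1.2, 0.03]
lambdas = [0.0, -2.0, 3.0]   # one log-coefficient per weight
total = regularized_loss(weights, lambdas, data_loss=1.0)
```

A large lambda_i drives the corresponding weight toward zero, while a very negative lambda_i leaves it nearly unregularized; this asymmetry is what lets the network keep relevant inputs and prune the rest, producing the extreme sparsity the abstract reports.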
Nov-20-2025, 16:24:28 GMT
- Country:
- Europe > Denmark
- Capital Region > Copenhagen (0.04)
- North America > Canada
- Genre:
- Research Report > New Finding (0.54)
- Industry:
- Health & Medicine
- Health Care Technology (0.48)
- Therapeutic Area (0.30)
- Technology: