regularization coefficient
Appendix A: Algorithm details

A.1 GLASS

Algorithm 1: GAN-based latent space search attack (GLASS)

A standard ResNet-18 network is divided into blocks, as shown in Figure 8. For GLASS, we set the learning rate to 1e-2 and the number of iterations to 20,000. For IN, we selected a learning rate of 1e-3 and trained for 30 epochs. The accuracy of each defended model and its corresponding defense hyperparameters are shown in Table 3.

Table 3: Details of defense hyperparameters (we set the split point uniformly to Block3).

We train 50 distributions for Shredder, maintaining an accuracy of over 77% for all of them. In Figure 10, a curve closer to the upper left indicates a better privacy-utility trade-off; NoPeek and DISCO achieve the best defensive effect against almost all DRAs.
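As a rough illustration of the attack named in Algorithm 1, here is a minimal PyTorch-style sketch of a GAN-based latent space search, written under assumed interfaces (a pretrained generator `G`, the client-side blocks `f_client`, and intercepted smashed data `z_obs`); it is not the paper's actual pseudocode.

```python
import torch
import torch.nn.functional as F

def glass_attack(G, f_client, z_obs, latent_dim=512, n_iters=20_000, lr=1e-2):
    """Search a GAN's latent space for an input whose client-side features
    match the observed intermediate activations ("smashed data").

    G         : pretrained generator mapping a latent code to an image (assumed)
    f_client  : client-side blocks of the split network, e.g. Blocks 1-3 (assumed)
    z_obs     : intercepted intermediate features to invert (assumed)
    lr=1e-2 and n_iters=20,000 follow the hyperparameters in this appendix.
    """
    w = torch.randn(1, latent_dim, requires_grad=True)  # initial latent code
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = G(w)                                    # candidate reconstruction
        loss = F.mse_loss(f_client(x_hat), z_obs)       # feature-matching objective
        loss.backward()
        opt.step()                                      # descend in latent space
    return G(w).detach()                                # final reconstructed input
```

Restricting the search to the generator's latent space (rather than pixel space) keeps the reconstruction on the natural-image manifold, which is the core idea behind GAN-based inversion attacks of this kind.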
New Bounds for Hyperparameter Tuning of Regression Problems Across Instances
Tuning regularization coefficients in regularized regression models with provable guarantees across problem instances remains a significant challenge in the literature. This paper investigates the sample complexity of tuning regularization parameters in linear and logistic regression under $\ell_1$- and $\ell_2$-constraints in the data-driven setting. For the linear regression problem, by more carefully exploiting the structure of the dual function class, we provide a new upper bound for the pseudo-dimension of the validation loss function class, which significantly improves the best-known results on the problem. Remarkably, we also establish the first matching lower bound, proving that our results are tight. For tuning the regularization parameters of logistic regression, we introduce a new approach to studying the learning guarantee via an approximation of the validation loss function class. We examine the pseudo-dimension of the approximation class and construct a uniform error bound between the validation loss function class and its approximation, which allows us to establish the first learning guarantee for the problem of tuning logistic regression regularization coefficients.
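To make the data-driven setup concrete (the notation below is assumed, not taken from the paper), each problem instance supplies a training/validation split, and the object being analyzed is the validation loss of the regularized solution as a function of the instance; the penalty is shown in penalized form, with the constrained form $\|w\|_p \le c$ analogous.

```latex
% One problem instance P = (P_train, P_val); lambda is the coefficient tuned across instances.
\hat{w}_\lambda(P_{\mathrm{train}})
  = \operatorname*{arg\,min}_{w}
    \sum_{(x,y)\in P_{\mathrm{train}}} \ell\big(w^\top x,\, y\big) + \lambda \|w\|_p,
    \qquad p \in \{1, 2\},

% Validation loss of the tuned model, viewed as a function of the instance P:
h_\lambda(P)
  = \frac{1}{|P_{\mathrm{val}}|}
    \sum_{(x,y)\in P_{\mathrm{val}}} \ell\big(\hat{w}_\lambda(P_{\mathrm{train}})^\top x,\, y\big).
```

The pseudo-dimension of the class $\{h_\lambda : \lambda \ge 0\}$ then governs how many sample instances suffice to tune $\lambda$ near-optimally across the distribution of instances.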
Regularization Learning Networks: Deep Learning for Tabular Datasets
Despite their impressive performance, Deep Neural Networks (DNNs) typically underperform Gradient Boosting Trees (GBTs) on many tabular-dataset learning tasks. We propose that applying a different regularization coefficient to each weight might boost the performance of DNNs by allowing them to make more use of the more relevant inputs. However, this will lead to an intractable number of hyperparameters. Here, we introduce Regularization Learning Networks (RLNs), which overcome this challenge by introducing an efficient hyperparameter tuning scheme that minimizes a new Counterfactual Loss. Our results show that RLNs significantly improve DNNs on tabular datasets and achieve comparable results to GBTs, with the best performance achieved by an ensemble that combines GBTs and RLNs. RLNs produce extremely sparse networks, eliminating up to 99.8% of the network edges.
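As a rough illustration of the per-weight regularization idea, here is a minimal PyTorch sketch (an assumption-laden illustration, not the authors' implementation; in particular, the Counterfactual Loss used to tune the coefficients is omitted):

```python
import torch
import torch.nn as nn

class RLNStyleLinear(nn.Module):
    """Linear layer with one regularization coefficient per weight.

    Sketch of the Regularization Learning Networks idea: every weight w_ij
    gets its own coefficient exp(log_lambda_ij). In the paper these
    coefficients are tuned with a Counterfactual Loss, omitted here.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One log-coefficient per weight; exp(.) keeps each coefficient positive.
        self.log_lambda = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x):
        return self.linear(x)

    def regularization(self):
        # Per-weight L1 penalty, each term weighted by its own coefficient.
        return (self.log_lambda.exp() * self.linear.weight.abs()).sum()

# Usage sketch: add the per-weight penalty to the task loss.
layer = RLNStyleLinear(16, 4)
x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(layer(x), y) + layer.regularization()
loss.backward()
```

Per-weight coefficients make large weights cheap where they help prediction and expensive elsewhere, which is what drives the extreme sparsity the abstract reports.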