A Proof of Proposition 1

Proof: First, it is straightforward to show that the IPW estimator of the ground truth treatment effect $\hat{\delta}$

Neural Information Processing Systems

We proceed to compute the variance of each estimator. The proof also extends trivially to the non-zero-mean case. Causal model details for Section 5.2: in Section 5.2, we include a wide range of machine-learning-based causal inference methods to evaluate the performance of causal error estimators. All other configurations are kept at their defaults.
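The snippet above refers to the inverse-propensity-weighted (IPW) estimator of a treatment effect. As a minimal illustration (not the paper's estimator; the data and propensities below are made up), the Horvitz-Thompson form of the IPW average treatment effect can be sketched as:

```python
import numpy as np

def ipw_ate(y, t, e):
    """Inverse-propensity-weighted (Horvitz-Thompson) estimate of the
    average treatment effect.

    y : outcomes, t : binary treatment indicators, e : propensity scores P(T=1|X).
    """
    y = np.asarray(y, dtype=float)
    t = np.asarray(t, dtype=float)
    e = np.asarray(e, dtype=float)
    # Weight treated outcomes by 1/e and control outcomes by 1/(1-e),
    # then take the difference of the weighted means.
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

# Toy example: with known propensities of 0.5, IPW reduces to a
# difference of rescaled group means.
y = np.array([3.0, 1.0, 2.0, 0.0])
t = np.array([1, 0, 1, 0])
e = np.full(4, 0.5)
print(ipw_ate(y, t, e))  # 2.0
```

With estimated (rather than known) propensities, the variance computations discussed above become the central concern.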



Adaptive Methods for Nonconvex Optimization

Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, Sanjiv Kumar

Neural Information Processing Systems

The first prominent algorithm in this line of research is ADAGRAD [7, 22], which uses a per-dimension learning rate based on squared past gradients. ADAGRAD achieved significant performance gains in comparison to SGD when the gradients are sparse.





Improving Variational Autoencoders with Density Gap-based Regularization

Neural Information Processing Systems

On that basis, we hypothesize that these two problems stem from the conflict between the KL regularization in ELBo and the function definition of the prior distribution. As such, we propose a novel regularization to substitute the KL regularization in ELBo for VAEs, which is based on the density gap between the aggregated posterior distribution and the prior distribution.
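The KL regularization the abstract refers to is the per-sample KL term in the standard ELBo. A minimal sketch of that term for a diagonal-Gaussian posterior and a standard-normal prior (the usual VAE setup, not the paper's density-gap replacement):

```python
import numpy as np

def kl_to_std_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), the per-sample KL
    regularizer in the standard ELBo for a Gaussian VAE."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

# The KL term vanishes exactly when the posterior equals the prior;
# pulling every per-sample posterior toward the prior in this way is
# the behavior the density-gap regularizer is designed to replace.
mu = np.zeros((1, 4))
logvar = np.zeros((1, 4))
print(kl_to_std_normal(mu, logvar))  # [0.]
```

The proposed method instead compares the *aggregated* posterior (the mixture over the whole dataset) to the prior, rather than each per-sample posterior individually.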


BONSAI: Bayesian Optimization with Natural Simplicity and Interpretability

Daulton, Samuel, Eriksson, David, Balandat, Maximilian, Bakshy, Eytan

arXiv.org Machine Learning

Bayesian optimization (BO) is a popular technique for sample-efficient optimization of black-box functions. In many applications, the parameters being tuned come with a carefully engineered default configuration, and practitioners only want to deviate from this default when necessary. Standard BO, however, does not aim to minimize deviation from the default and, in practice, often pushes weakly relevant parameters to the boundary of the search space. This makes it difficult to distinguish between important and spurious changes and increases the burden of vetting recommendations when the optimization objective omits relevant operational considerations. We introduce BONSAI, a default-aware BO policy that prunes low-impact deviations from a default configuration while explicitly controlling the loss in acquisition value. BONSAI is compatible with a variety of acquisition functions, including expected improvement and upper confidence bound (GP-UCB). We theoretically bound the regret incurred by BONSAI, showing that, under certain conditions, it enjoys the same no-regret property as vanilla GP-UCB. Across many real-world applications, we empirically find that BONSAI substantially reduces the number of non-default parameters in recommended configurations while maintaining competitive optimization performance, with little effect on wall time.
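The abstract mentions the upper confidence bound (GP-UCB) acquisition function that BONSAI builds on. A minimal sketch of UCB-based candidate selection (illustrative only; the posterior values below are made up, and this omits the GP model and BONSAI's pruning step entirely):

```python
import numpy as np

def ucb(mu, sigma, beta=2.0):
    """GP-UCB acquisition value: posterior mean plus a beta-scaled
    posterior standard deviation, trading off exploitation (mu)
    against exploration (sigma)."""
    return mu + beta * sigma

# Hypothetical GP posterior over three candidate configurations.
mu = np.array([0.2, 0.5, 0.1])
sigma = np.array([0.3, 0.05, 0.4])
best = int(np.argmax(ucb(mu, sigma)))
# Candidate 2 wins despite its low mean, because its uncertainty is high.
```

BONSAI's contribution, per the abstract, is to then prune low-impact deviations from the default configuration while bounding the loss in acquisition value of the selected candidate.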