- North America > United States > Maryland (0.04)
- North America > United States > Wisconsin (0.04)
- North America > United States > California (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
- Information Technology (0.92)
- Law (0.67)
A Proof of Proposition 1
Proof: First, it is straightforward to show that the IPW estimator of the ground-truth treatment effect δ̂
We proceed to compute the variances of each estimator. The proof also extends trivially to the non-zero-mean case.
Causal model details for Section 5.2
In Section 5.2, we include a wide range of machine-learning-based causal inference methods to evaluate the performance of causal error estimators. All other configurations are kept at their defaults.
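As a concrete illustration of the quantity discussed above, here is a minimal sketch of the inverse propensity weighting (IPW, Horvitz-Thompson) estimator of the average treatment effect. The function name `ipw_ate` and the toy data are illustrative assumptions, not code from the paper; the sketch assumes the propensity scores are known.

```python
import numpy as np

def ipw_ate(y, t, p):
    """IPW (Horvitz-Thompson) estimate of the average treatment effect,
    given outcomes y, binary treatments t, and known propensity scores
    p = P(T = 1 | X). Illustrative sketch only."""
    y = np.asarray(y, dtype=float)
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    # Weight treated outcomes by 1/p and control outcomes by 1/(1-p),
    # then average the difference.
    return np.mean(t * y / p - (1 - t) * y / (1 - p))

# Toy example: randomized treatment (propensity 0.5) with true effect 2.
rng = np.random.default_rng(0)
t = rng.integers(0, 2, size=10_000)
y = 2.0 * t + rng.normal(size=10_000)
est = ipw_ate(y, t, np.full(10_000, 0.5))  # close to 2
```

Under randomization with known propensities, this estimator is unbiased, which is the usual starting point before computing and comparing the variances of the competing estimators.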
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > New York (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (4 more...)
- North America > United States > California (0.04)
- Asia > Thailand (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology (0.92)
- Banking & Finance (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- (3 more...)
Improving Variational Autoencoders with Density Gap-based Regularization
On that basis, we hypothesize that these two problems stem from the conflict between the KL regularization in ELBo and the function definition of the prior distribution. As such, we propose a novel regularization to substitute the KL regularization in ELBo for VAEs, which is based on the density gap between the aggregated posterior distribution and the prior distribution.
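For reference, the KL regularizer in the standard ELBo that the abstract proposes to replace has a simple closed form when the posterior is a diagonal Gaussian and the prior is a standard normal. The sketch below shows only that standard term, not the paper's density-gap regularizer; the function name `gaussian_kl` is an assumption for illustration.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    summed over the latent dimensions. This is the per-example KL
    regularizer in the standard ELBo for a VAE."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# The KL term vanishes exactly when the posterior equals the prior:
kl_at_prior = gaussian_kl(np.zeros(8), np.zeros(8))  # 0.0
```

The tension the abstract points to arises because this term regularizes each per-example posterior toward the prior, whereas it is the aggregated posterior whose match to the prior matters for generation.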
BONSAI: Bayesian Optimization with Natural Simplicity and Interpretability
Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy
Bayesian optimization (BO) is a popular technique for sample-efficient optimization of black-box functions. In many applications, the parameters being tuned come with a carefully engineered default configuration, and practitioners only want to deviate from this default when necessary. Standard BO, however, does not aim to minimize deviation from the default and, in practice, often pushes weakly relevant parameters to the boundary of the search space. This makes it difficult to distinguish between important and spurious changes and increases the burden of vetting recommendations when the optimization objective omits relevant operational considerations. We introduce BONSAI, a default-aware BO policy that prunes low-impact deviations from a default configuration while explicitly controlling the loss in acquisition value. BONSAI is compatible with a variety of acquisition functions, including expected improvement and upper confidence bound (GP-UCB). We theoretically bound the regret incurred by BONSAI, showing that, under certain conditions, it enjoys the same no-regret property as vanilla GP-UCB. Across many real-world applications, we empirically find that BONSAI substantially reduces the number of non-default parameters in recommended configurations while maintaining competitive optimization performance, with little effect on wall time.
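The core idea described in the abstract, reverting low-impact parameters to their defaults while bounding the loss in acquisition value, can be sketched greedily. This is a minimal illustration under assumed names (`prune_toward_default`, `acq`, `max_loss`), not the BONSAI algorithm's actual pruning rule or its theoretical guarantees.

```python
import numpy as np

def prune_toward_default(x, default, acq, max_loss):
    """Greedily revert coordinates of candidate x to their default values,
    one at a time, as long as the total drop in acquisition value relative
    to the original candidate stays within max_loss. Illustrative sketch."""
    x = np.array(x, dtype=float)
    base = acq(x)  # acquisition value of the unpruned candidate
    changed = True
    while changed:
        changed = False
        for i in np.flatnonzero(x != default):
            trial = x.copy()
            trial[i] = default[i]
            if base - acq(trial) <= max_loss:  # within the allowed loss
                x, changed = trial, True
                break
    return x

# Toy example: the acquisition prefers points near [1, 0, 0], so the small
# nuisance deviations in the last two coordinates get reverted to default.
default = np.zeros(3)
acq = lambda v: -float(np.sum(np.abs(v - np.array([1.0, 0.0, 0.0]))))
pruned = prune_toward_default([1.0, 0.05, 0.03], default, acq, max_loss=0.1)
```

The point of such a rule is exactly the trade-off the abstract names: fewer non-default parameters in the recommendation, at an explicitly controlled cost in acquisition value.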
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > France > Hauts-de-France > Nord > Lille (0.04)