Estimating Total Effects in Bipartite Experiments with Spillovers and Partial Eligibility
Tan, Albert, Bayati, Mohsen, Nordlund, James, Istomin, Roman
We study randomized experiments in bipartite systems where only a subset of treatment-side units are eligible for assignment while all units continue to interact, generating interference. We formalize eligibility-constrained bipartite experiments and define estimands aligned with full deployment: the Primary Total Treatment Effect (PTTE) on eligible units and the Secondary Total Treatment Effect (STTE) on ineligible units. Under randomization within the eligible set, we give identification conditions and develop interference-aware ensemble estimators that combine exposure mappings, generalized propensity scores, and flexible machine learning. We further introduce a projection that links treatment- and outcome-level estimands; this mapping is exact under a Linear Additive Edges condition and enables estimation on the (typically much smaller) treatment side with deterministic aggregation to outcomes. In simulations with known ground truth across realistic exposure regimes, the proposed estimators recover PTTE and STTE with low bias and variance and reduce the bias that could arise when interference is ignored. Two field experiments illustrate practical relevance: our method corrects the direction of expected interference bias for a pre-specified metric in both studies and reverses the sign and significance of the primary decision metric in one case.
- North America > United States > New York (0.04)
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > Strength High (0.87)
- Health & Medicine (1.00)
- Transportation > Ground > Road (0.47)
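The Linear Additive Edges condition mentioned above is concrete enough to sketch. Below is a minimal, hypothetical simulation in Python, not the paper's ensemble estimator: it assumes a homogeneous-effect version of the condition in which each outcome is a weighted sum of per-edge contributions, recovers the treatment-side effect by least squares, and then aggregates it deterministically to the outcome side through the edge matrix. All sizes, variable names, and effect values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bipartite system: a small treatment side, a larger outcome side.
n_t, n_o = 50, 500
W = rng.random((n_o, n_t)) * (rng.random((n_o, n_t)) < 0.05)  # sparse edge weights

eligible = rng.random(n_t) < 0.6                          # only some treatment units eligible
Z = np.where(eligible, rng.integers(0, 2, size=n_t), 0)   # randomize within the eligible set

# Homogeneous Linear Additive Edges outcome model:
# each outcome is a weighted sum of per-edge contributions.
alpha0, tau_true = 1.0, 0.5
Y = W @ (alpha0 + tau_true * Z) + rng.normal(0, 0.1, n_o)

# Treatment-side estimation: under this model Y = alpha0*(W @ 1) + tau*(W @ Z) + noise,
# so tau is identified by least squares on two constructed regressors.
X = np.column_stack([W.sum(axis=1), W @ Z])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
tau_hat = coef[1]

# Deterministic aggregation to the outcome side: predicted effect on each outcome
# unit if every eligible treatment unit were treated (full deployment).
outcome_effects = tau_hat * (W @ eligible.astype(float))
print(round(float(tau_hat), 3), round(float(outcome_effects.mean()), 3))
```

The appeal of the projection shows in the last step: once the effect is estimated on the (typically much smaller) treatment side, the outcome-side effect under full deployment follows mechanically from the edge matrix W rather than from a second estimation problem.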
Efficient Design of Neural Networks with Random Weights
Single-layer feedforward networks with random weights are known for their non-iterative, fast training algorithms and are successful in a variety of classification and regression problems. A major drawback of these networks is that they require a large number of hidden units. In this paper, we propose a technique that substantially reduces the number of hidden units without significantly affecting network accuracy. We introduce the concept of primary and secondary hidden units: the weights for the primary hidden units are chosen randomly, while the secondary hidden units are derived from pairwise combinations of the primary hidden units. Using this technique, we show that the number of hidden units can be reduced by at least one order of magnitude. We show experimentally that this leads to a significant drop in computation at inference time with only a minor impact on network accuracy; an even larger reduction is possible if slightly lower accuracy is acceptable.
- North America > United States > New York (0.04)
- North America > United States > New Jersey (0.04)
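The excerpt above does not specify how the secondary units combine the primary units, so the sketch below assumes elementwise products of primary activations; the function name fit_random_weight_net and all sizes are likewise illustrative. It shows the general recipe the abstract describes: random input weights for the primary units, derived secondary units, and a closed-form (non-iterative) least-squares fit of the output layer.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def fit_random_weight_net(X, y, n_primary, n_secondary):
    """Random-weight network with primary and derived secondary hidden units.
    Only the output layer is fit, in closed form, so training is non-iterative."""
    W = rng.normal(size=(X.shape[1], n_primary))      # random input weights
    b = rng.normal(size=n_primary)
    H = np.tanh(X @ W + b)                            # primary hidden activations

    # Secondary units from pairwise combinations of primary units
    # (elementwise products are one plausible reading; an assumption here).
    pairs = list(combinations(range(n_primary), 2))[:n_secondary]
    H2 = np.column_stack([H[:, i] * H[:, j] for i, j in pairs])
    H_all = np.hstack([H, H2])

    beta, *_ = np.linalg.lstsq(H_all, y, rcond=None)  # closed-form output weights
    return W, b, pairs, beta

# Toy regression problem to exercise the sketch.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
W, b, pairs, beta = fit_random_weight_net(X, y, n_primary=20, n_secondary=60)

H = np.tanh(X @ W + b)
H_all = np.hstack([H, np.column_stack([H[:, i] * H[:, j] for i, j in pairs])])
print(np.mean((H_all @ beta - y) ** 2))               # training MSE
```

Because the secondary units reuse primary activations already computed, each one costs a single multiply per example at inference, which is where the claimed computational savings would come from.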
Competition Among Networks Improves Committee Performance
Munro, Paul W., Parmanto, Bambang
ABSTRACT

The separation of generalization error into two types, bias and variance (Geman, Bienenstock, & Doursat, 1992), leads to the notion of error reduction by averaging over a "committee" of classifiers (Perrone, 1993). Committee performance decreases with both the average error of the constituent classifiers and the degree to which their misclassifications are correlated across the committee. Here, a method for reducing those correlations is introduced that uses a winner-take-all procedure, similar to competitive learning, to drive the individual networks to different minima in weight space with respect to the training set; with correlations in generalization performance reduced, committee error is reduced as well.

1 INTRODUCTION

The problem of constructing a predictor can generally be viewed as finding the right combination of bias and variance (Geman, Bienenstock, & Doursat, 1992) to reduce the expected error. Since a neural network predictor inherently has an excessive number of parameters, reducing the prediction error is usually done by reducing variance. Methods for reducing neural network complexity can be viewed as regularization techniques to reduce this variance. Examples of such methods include Optimal Brain Damage (Le Cun et al., 1990).
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Asia > Middle East > Jordan (0.04)
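As a rough illustration of the winner-take-all idea, the toy below trains a small committee in which, for each training sample, only the member with the lowest error is updated. The paper applies this competitive procedure to neural network classifiers during training; the members here are linear regressors and every name and constant is made up, so treat this purely as a sketch of the update rule, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a piecewise target that no single linear member fits well,
# so winner-take-all updates push members toward different regions.
n = 500
X = rng.uniform(-1, 1, size=(n, 1))
y = np.where(X[:, 0] > 0, 1.0, -1.0) + 0.1 * rng.normal(size=n)
Xb = np.hstack([X, np.ones((n, 1))])            # append a bias column

K, lr, epochs = 4, 0.1, 50
W = rng.normal(scale=0.5, size=(K, 2))          # K committee members (linear here)

for _ in range(epochs):
    for x, t in zip(Xb, y):
        preds = W @ x
        k = int(np.argmin((preds - t) ** 2))    # winner: member with lowest error...
        W[k] -= lr * 2.0 * (preds[k] - t) * x   # ...and only the winner is updated

committee = (W @ Xb.T).mean(axis=0)             # committee output: simple average
print(np.mean((committee - y) ** 2))            # committee MSE on the training set
```

Because each member only sees gradients from samples it already predicts best, the members settle into different minima and their errors decorrelate, which is the mechanism the abstract credits for the reduction in committee error.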