Using Auxiliary Data to Boost Precision in the Analysis of A/B Tests on an Online Educational Platform: New Data and New Results
Sales, Adam C., Prihar, Ethan B., Gagnon-Bartsch, Johann A., Heffernan, Neil T.
Randomized A/B tests within online learning platforms represent an exciting direction in learning sciences. With minimal assumptions, they allow causal effect estimation without confounding bias and exact statistical inference even in small samples. However, experimental samples and/or treatment effects are often small, leaving A/B tests underpowered and effect estimates overly imprecise. Recent methodological advances have shown that power and statistical precision can be substantially boosted by coupling design-based causal estimation to machine-learning models of rich log data from historical users who were not in the experiment. Estimates using these techniques remain unbiased, and inference remains exact, without any additional assumptions. This paper reviews those methods and applies them to a new dataset comprising over 250 randomized A/B comparisons conducted within ASSISTments, an online learning platform. We compare results across experiments using four novel deep-learning models of auxiliary data and show that incorporating auxiliary data into causal estimates is roughly equivalent to increasing the sample size by 20% on average, or as much as 50-80% in some cases, relative to t-tests, and by about 10% on average, or as much as 30-50%, compared to cutting-edge machine-learning unbiased estimates that use only data from the experiments. We show that the gains can be even larger for estimating subgroup effects, hold even when the remnant is unrepresentative of the A/B test sample, and extend to post-stratification population effects estimators.

Data and code used in this work can be found at https://osf.io/k8ph9/.

In randomized A/B tests on an online learning platform, students are randomized between different educational conditions or strategies, and their subsequent educational outcomes of interest are compared across conditions. For instance, Harrison et al. (2020) studied how the spacing of numbers and symbols in arithmetic expressions affects student performance. Prior to the students' work, the authors designed four educational conditions that differed in how the numbers and symbols in arithmetic expressions were spaced. As students logged on to the platform during their usual schoolwork, they were each individually randomized to one of the four conditions and completed their work under that condition.
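To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a "rebar"-style estimator: predictions from a model fit only on historical users outside the experiment are subtracted from observed outcomes, and a difference in means is taken on the residuals. Because the predictions do not depend on the experimental data, the estimate remains unbiased under randomization; the function and variable names here are illustrative assumptions.

```python
import numpy as np

def auxiliary_adjusted_estimate(y, z, y_hat):
    """Difference-in-means on residuals y - y_hat.

    y     : observed outcomes for experimental subjects
    z     : 0/1 treatment-assignment indicator
    y_hat : outcome predictions from a model trained ONLY on
            historical ("remnant") users, so within the experiment
            they act as fixed covariates and the estimator stays
            unbiased under randomization.
    """
    y, z, y_hat = map(np.asarray, (y, z, y_hat))
    resid = y - y_hat
    # Treatment-effect estimate: mean residual (treated) - mean residual (control)
    return resid[z == 1].mean() - resid[z == 0].mean()

# Tiny worked example: residuals are [1, 1, 0, 1],
# so the estimate is mean([1, 1]) - mean([0, 1]) = 0.5.
est = auxiliary_adjusted_estimate(
    y=[3, 5, 2, 4], z=[1, 1, 0, 0], y_hat=[2, 4, 2, 3]
)
```

When the auxiliary predictions track outcomes well, the residuals have much smaller variance than the raw outcomes, which is the source of the precision gains the paper quantifies as effective sample-size increases.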
June 9, 2023