[Perspective] Measurement error and the replication crisis

Science

Measurement error adds noise to predictions, increases uncertainty in parameter estimates, and makes it more difficult to discover new phenomena or to distinguish among competing theories. A common view is that any study finding an effect under noisy conditions provides evidence that the underlying effect is particularly strong and robust. Yet, statistical significance conveys very little information when measurements are noisy. In noisy research settings, poor measurement can contribute to exaggerated estimates of effect size. This problem and related misunderstandings are key components in a feedback loop that perpetuates the replication crisis in science.
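
A quick simulation makes the significance-filter point concrete. This is a minimal sketch in Python with made-up numbers, not an analysis from the paper: each simulated study measures a small true effect through heavy measurement noise, and we keep only the studies that reach statistical significance.

    import numpy as np

    rng = np.random.default_rng(0)
    true_effect = 0.1      # small true effect on the latent quantity
    n, sims = 50, 10_000   # sample size per study, number of simulated studies
    noise_sd = 1.0         # measurement noise added to each observation

    exaggeration = []
    for _ in range(sims):
        # Each observation is the true effect plus measurement noise.
        x = true_effect + noise_sd * rng.standard_normal(n)
        est = x.mean()
        se = x.std(ddof=1) / np.sqrt(n)
        if abs(est / se) > 1.96:  # keep only "statistically significant" studies
            exaggeration.append(est / true_effect)

    print(f"significant in {len(exaggeration) / sims:.1%} of studies")
    print(f"mean exaggeration among significant results: {np.mean(exaggeration):.1f}x")

In this setup the studies that reach significance overstate the true effect severalfold, which illustrates why significance obtained under noisy conditions is weak evidence of a strong underlying effect.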


Measurement error and the replication crisis

#artificialintelligence

Alison McCook from Retraction Watch interviewed Eric Loken and me regarding our recent article, "Measurement error and the replication crisis." We talked about why traditional statistics are often counterproductive to research in the human sciences. Retraction Watch: Your article focuses on the "noise" that's present in research studies. What is "noise" and how is it created during an experiment? Andrew Gelman: Noise is random error that interferes with our ability to observe a clear signal.


Measurement Error in Nutritional Epidemiology: A Survey

arXiv.org Machine Learning

This article reviews bias-correction models for measurement error in exposure variables in the field of nutritional epidemiology. Measurement error usually attenuates the estimated slope towards zero. Because of this, inference on the parameter estimate is conservative, and the confidence interval of the slope parameter is too narrow. Bias correction in estimators and confidence intervals is of primary interest. We review the following bias-correction models: regression calibration methods, likelihood-based models, missing-data models, simulation-based methods, nonparametric models, and sampling-based procedures.
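
To illustrate attenuation and the simplest of these corrections, here is a minimal regression-calibration sketch in Python. It assumes the measurement-error variance is known, as a validation or replicate study would provide; the variable names and numbers are our own illustration, not taken from the survey.

    import numpy as np

    rng = np.random.default_rng(1)
    n, beta = 5_000, 2.0
    sigma_x, sigma_u = 1.0, 1.0     # true exposure sd, measurement-error sd

    x = sigma_x * rng.standard_normal(n)      # true exposure (unobserved)
    w = x + sigma_u * rng.standard_normal(n)  # error-prone measurement
    y = beta * x + rng.standard_normal(n)     # outcome

    # Naive slope from regressing y on w: attenuated toward zero.
    naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

    # Regression calibration: replace w by E[x | w] before regressing.
    lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)   # reliability ratio
    x_hat = w.mean() + lam * (w - w.mean())
    corrected = np.cov(x_hat, y)[0, 1] / np.var(x_hat, ddof=1)

    print(f"true slope {beta}, naive {naive:.2f}, calibrated {corrected:.2f}")

In expectation the naive slope equals the true slope times the reliability ratio (here 0.5), while the calibrated slope recovers the true value.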


Performance Measurement for Deep Bayesian Neural Network

arXiv.org Machine Learning

Deep Bayesian neural networks have attracted great attention in recent years because they combine the benefits of deep neural networks with probability theory. Such a network can make predictions and quantify the uncertainty of those predictions at the same time, which is important in many safety-critical areas. However, most recent research focuses on making Bayesian neural networks easier to train or on proposing methods to estimate uncertainty. Very few works properly discuss how to measure the performance of a Bayesian neural network. Although accuracy and average uncertainty are commonly used for now, they are too general to provide much insight into the model. In this paper, we introduce more specific criteria and propose several metrics to measure model performance from different perspectives, including model calibration, data rejection ability, and uncertainty divergence for samples from the same and from different distributions.
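
As one concrete example of a calibration criterion, the sketch below computes an expected calibration error (ECE) for a binary classifier: the gap between confidence and accuracy, averaged over confidence bins. The binning scheme and function names are our own illustration, not the paper's exact definitions.

    import numpy as np

    def expected_calibration_error(probs, labels, n_bins=10):
        """Average |accuracy - confidence| over equal-width confidence bins,
        weighted by the fraction of samples falling in each bin."""
        probs, labels = np.asarray(probs), np.asarray(labels)
        confidences = np.maximum(probs, 1 - probs)   # confidence in the predicted class
        predictions = (probs >= 0.5).astype(int)
        correct = (predictions == labels)

        edges = np.linspace(0.5, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
        return ece

    # Example: an overconfident model (high confidence, mediocre accuracy) has high ECE.
    probs = np.array([0.95, 0.9, 0.92, 0.97, 0.88])
    labels = np.array([1, 0, 1, 0, 1])
    print(expected_calibration_error(probs, labels))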


Data-driven Algorithm Selection and Parameter Tuning: Two Case studies in Optimization and Signal Processing

arXiv.org Machine Learning

Machine learning algorithms typically rely on optimization subroutines and are well known to provide very effective outcomes for many types of problems. Here, we flip the reliance and ask the reverse question: can machine learning algorithms lead to more effective outcomes for optimization problems? Our goal is to train machine learning methods to automatically improve the performance of optimization and signal processing algorithms. As a proof of concept, we use our approach to improve two popular data processing subroutines in data science: stochastic gradient descent and greedy methods in compressed sensing. We provide experimental results demonstrating that the answer is "yes": machine learning algorithms do lead to more effective outcomes for optimization problems, and we show the future potential of this research direction.
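
To make the idea concrete, here is a minimal sketch of data-driven parameter tuning for stochastic gradient descent. This is our own simplified illustration (a grid search over step sizes), not the paper's method: tune the step size on a family of training problems, then reuse it on a fresh problem drawn from the same family.

    import numpy as np

    rng = np.random.default_rng(2)

    def make_problem(n=200, d=10):
        """Draw a random least-squares problem A x = b from a fixed family."""
        A = rng.standard_normal((n, d))
        x_true = rng.standard_normal(d)
        b = A @ x_true + 0.1 * rng.standard_normal(n)
        return A, b

    def sgd_loss(A, b, step, iters=500):
        """Run SGD on the least-squares objective and return the final loss."""
        n, d = A.shape
        x = np.zeros(d)
        for _ in range(iters):
            i = rng.integers(n)
            grad = (A[i] @ x - b[i]) * A[i]
            x -= step * grad
        return np.mean((A @ x - b) ** 2)

    # "Training": pick the step size that works best on sample problems.
    candidates = [0.001, 0.003, 0.01, 0.03]
    train_problems = [make_problem() for _ in range(5)]
    avg_loss = [np.mean([sgd_loss(A, b, s) for A, b in train_problems])
                for s in candidates]
    best_step = candidates[int(np.argmin(avg_loss))]

    # "Test": reuse the tuned step size on a new problem from the same family.
    A, b = make_problem()
    print(f"tuned step {best_step}, test loss {sgd_loss(A, b, best_step):.4f}")

The tuned parameter transfers because the test problem is drawn from the same distribution as the training problems, which is the premise behind learning to improve an optimization subroutine.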