Gelman at the PSA: "Confirmationist and Falsificationist Paradigms in Statistical Practice": Comments & Queries

#artificialintelligence

From a statistically significant result, you can't go directly to a genuine falsification of, or discrepancy from, the test hypothesis; you can only do so once you've shown that a significant result rarely fails to be brought about (as Fisher required). The next stages may lead to a revised model or hypothesis being warranted with severity; later still, a falsification of a research claim may be well-corroborated. Once the statistical (relativistic) light-bending effect was vouchsafed (by means of statistically rejecting Newtonian null hypotheses), it falsified the Newtonian prediction (of 0 or half the Einstein deflection effect) and, together with other statistical inferences, led to passing the Einstein effect severely. The large randomized, controlled trials of Hormone Replacement Therapy in 2002 revealed statistically significant increased risks of heart disease. They falsified, first, the nulls of the RCTs, and second, the widely accepted claim (from observational studies) that HRT helps prevent heart disease. I'm skimming details, but the gist is clear.
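To make the first falsification step concrete (rejecting the RCT null of "no increased risk"), here is a minimal sketch of a two-proportion test. The event counts and arm sizes are hypothetical placeholders, not the actual 2002 trial figures.

```python
# Minimal sketch: testing the null of "no difference in heart-disease risk"
# between HRT and placebo arms. Counts below are hypothetical, NOT the
# actual 2002 trial data.
from statsmodels.stats.proportion import proportions_ztest

events = [188, 147]    # hypothetical CHD events in HRT vs. placebo arm
n_obs = [8500, 8100]   # hypothetical arm sizes

# One-sided test: the alternative is that the HRT arm has a HIGHER event rate.
stat, p_value = proportions_ztest(count=events, nobs=n_obs, alternative='larger')
print(f"z = {stat:.2f}, one-sided p = {p_value:.4f}")

# A small p-value indicates a genuine discrepancy from the RCT null; by itself
# it does not yet falsify the broader research claim, which requires the
# further severity checks discussed above.
```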


Statistical Agnostic Mapping: a Framework in Neuroimaging based on Concentration Inequalities

arXiv.org Machine Learning

In the 1970s a novel branch of statistics emerged, focusing its efforts on selecting, in the pattern recognition problem, a function that fulfils a definite relationship between the quality of the approximation and its complexity. These data-driven approaches are mainly devoted to problems of estimating dependencies with limited sample sizes, and they comprise all the empirical out-of-sample generalization approaches, e.g. cross-validation (CV). Although the latter are not designed for testing competing hypotheses or comparing different models in neuroimaging, there are a number of theoretical developments within this theory which could be employed to derive a Statistical Agnostic (non-parametric) Mapping (SAM) at the voxel or multi-voxel level. Moreover, SAMs could relieve (i) the problem of instability in limited sample sizes when estimating the actual risk via CV approaches, e.g. large error bars, and provide (ii) an alternative to family-wise error (FWE) corrected p-value maps in inferential statistics for hypothesis testing. In this sense, we propose a novel framework in neuroimaging based on concentration inequalities, which results in (i) a rigorous development for model validation with a small sample/dimension ratio, and (ii) a less conservative procedure than FWE p-value correction, to determine the brain significance maps from the inferences made using small upper bounds of the actual risk.
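To illustrate the "small upper bound on the actual risk" idea, here is a minimal sketch (not the paper's actual SAM procedure) of a Hoeffding-style concentration bound applied to an empirical error estimate: for a fixed classifier evaluated on n held-out samples, with probability at least 1 - delta the actual risk exceeds the empirical error by at most sqrt(ln(1/delta) / (2n)). The sample size, error, and chance-level threshold below are assumptions for illustration.

```python
# Minimal sketch (not the paper's SAM procedure): a Hoeffding-style upper
# bound on the actual risk of a fixed classifier estimated on n held-out samples.
#     actual_risk <= empirical_error + sqrt(ln(1/delta) / (2 * n))   w.p. >= 1 - delta
import numpy as np

def hoeffding_upper_bound(empirical_error: float, n: int, delta: float = 0.05) -> float:
    """Upper bound on the true error from an empirical error over n test samples."""
    return empirical_error + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# Hypothetical numbers: 0.30 held-out error estimated from 80 subjects.
print(hoeffding_upper_bound(0.30, n=80, delta=0.05))   # ~0.44

# If the bound stays below chance level (0.5 for a balanced two-class problem),
# the region can be flagged as carrying signal without a parametric FWE correction.
```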


Classical Statistics and Statistical Learning in Imaging Neuroscience

#artificialintelligence

All dimensions in the brain data (i.e., voxel variables) are tested at once; this is where random field theory comes to the rescue when drawing inferences about signals from "brain regions".


Classical Statistics and Statistical Learning in Imaging Neuroscience

arXiv.org Machine Learning

Neuroimaging research has predominantly drawn conclusions based on classical statistics, including null-hypothesis testing, t-tests, and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, including cross-validation, pattern classification, and sparsity-inducing regression. These two methodological families used for neuroimaging data analysis can be viewed as two extremes of a continuum. Yet they originated from different historical contexts, build on different theories, rest on different assumptions, evaluate different outcome metrics, and permit different conclusions. This paper portrays commonalities and differences between classical statistics and statistical learning and their relation to neuroimaging research. The conceptual implications are illustrated in three common analysis scenarios. The aim is thus to resolve possible confusion between classical hypothesis testing and data-guided model estimation by discussing their ramifications for the neuroimaging access to neurobiology.
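A minimal sketch of the two extremes on simulated data (assumed variable names and group sizes, not taken from the paper): a two-sample t-test asks whether the group means differ in the sample at hand, while a cross-validated classifier asks how well group membership can be predicted in unseen data.

```python
# Minimal sketch contrasting the two methodological families on simulated data
# (illustrative only; not the paper's analysis).
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
patients = rng.normal(0.3, 1.0, size=(50, 1))   # hypothetical voxel signal, group 1
controls = rng.normal(0.0, 1.0, size=(50, 1))   # hypothetical voxel signal, group 2

# Classical statistics: null-hypothesis test on the group means.
t_stat, p_value = ttest_ind(patients.ravel(), controls.ravel())
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Statistical learning: out-of-sample prediction accuracy via cross-validation.
X = np.vstack([patients, controls])
y = np.array([1] * 50 + [0] * 50)
acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"5-fold CV accuracy = {acc:.2f}")
```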


A Gentle Introduction to Statistical Hypothesis Tests

#artificialintelligence

Data must be interpreted in order to add meaning. We can interpret data by assuming a specific structure of the outcome and using statistical methods to confirm or reject the assumption. The assumption is called a hypothesis, and the statistical tests used for this purpose are called statistical hypothesis tests. Whenever we want to make claims about the distribution of data, or about whether one set of results is different from another set of results in applied machine learning, we must rely on statistical hypothesis tests. In this tutorial, you will discover statistical hypothesis testing and how to interpret and carefully state the results from statistical tests.
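As a minimal sketch of the pattern the tutorial describes (hypothetical scores and an assumed significance level of 0.05): state a null hypothesis, compute a test statistic and p-value, and state the conclusion carefully.

```python
# Minimal sketch: comparing two sets of model results with a hypothesis test.
# The scores below are hypothetical; alpha = 0.05 is an assumed threshold.
from scipy.stats import ttest_ind

model_a_scores = [0.78, 0.81, 0.79, 0.83, 0.80]   # e.g. CV accuracies of model A
model_b_scores = [0.74, 0.76, 0.75, 0.77, 0.73]   # e.g. CV accuracies of model B

# Null hypothesis: the two sets of results have the same mean.
stat, p = ttest_ind(model_a_scores, model_b_scores)
alpha = 0.05
if p <= alpha:
    print(f"p = {p:.4f} <= {alpha}: reject the null (evidence of a difference).")
else:
    print(f"p = {p:.4f} > {alpha}: fail to reject the null (no evidence of a difference).")
```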