Frequentists Fight Back

@machinelearnbot

Frequentist-leaning statisticians have numerous responses to Bayesian criticisms that may not be widely known. Broadly speaking, these rebuttals assert that Bayesian criticisms of frequentist approaches rely on circular arguments, are self-refuting, rest mostly on semantics, or are mainly of interest to academics and irrelevant in practice. Below, I've briefly summarized the ones I'm aware of, from memory and in my own words. A first rebuttal concerns the term "Bayesian" itself, whose meaning is often unclear: is it objective Bayes, subjective Bayes, approximate Bayes, empirical Bayes, or all of the above?


Bayesian Learning for Machine Learning: Part 1 - Introduction to Bayesian Learning - DZone AI

#artificialintelligence

In this article, I will provide a basic introduction to Bayesian learning and explore topics such as frequentist statistics, the drawbacks of the frequentist method, Bayes's theorem (introduced with an example), and the differences between the frequentist and Bayesian methods, using the coin flip experiment as the running example. To begin, let's try to answer this question: what is the frequentist method? When we flip a coin, there are two possible outcomes: heads or tails. (There is a third, rare possibility, where the coin balances on its edge without falling onto either side, which we assume is not a possible outcome for our discussion.) We conduct a series of coin flips and record our observations, i.e., the number of heads (or tails) observed for a certain number of coin flips. In this experiment, we are trying to determine the fairness of the coin using the number of heads (or tails) that we observe.
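As a minimal sketch of the setup described above, the snippet below simulates a series of coin flips and computes the frequentist (maximum-likelihood) estimate of the coin's fairness as the relative frequency of heads. The true bias, random seed, and number of flips are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(42)  # assumed seed, for reproducibility

true_p = 0.5   # hypothetical true bias, unknown to the experimenter
n_flips = 100  # assumed number of coin flips

# Simulate the coin flips: 1 = heads, 0 = tails.
flips = rng.binomial(1, true_p, size=n_flips)
heads = int(flips.sum())

# Frequentist estimate of the coin's fairness: the maximum-likelihood
# estimate is simply the relative frequency of heads.
p_hat = heads / n_flips
print(f"Observed {heads} heads in {n_flips} flips")
print(f"Frequentist estimate of P(heads): {p_hat:.3f}")
```

With more flips, this estimate tends to settle near the true bias, which is exactly the long-run-frequency notion of probability the article contrasts with the Bayesian view.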


How Bayesian Machine Learning Works

#artificialintelligence

Classical statistics is said to follow the frequentist approach because it interprets probability as the relative frequency of an event over the long run, that is, after observing many trials. In the context of probabilities, an event is a combination of one or more elementary outcomes of an experiment, such as rolling doubles with two dice (any of the six equal results) or an asset price dropping by 10 percent or more on a given day.
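To make the long-run-frequency interpretation concrete, here is a minimal sketch that estimates the probability of the "doubles" event by its relative frequency over many simulated trials; the seed and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed, for reproducibility

# Simulate rolls of two dice; rng.integers draws from [1, 7), i.e., 1..6.
n_trials = 100_000
die_1 = rng.integers(1, 7, size=n_trials)
die_2 = rng.integers(1, 7, size=n_trials)

# The event "doubles" is a combination of six elementary outcomes:
# (1,1), (2,2), ..., (6,6). Its relative frequency over many trials
# should approach 6/36 = 1/6, about 0.167.
doubles = die_1 == die_2
print(f"Relative frequency of doubles: {doubles.mean():.4f}")
```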


Vital Statistics You Never Learned… Because They're Never Taught

@machinelearnbot

KG: Starting from the beginning, what is statistics and how did it come about? Could you give us a short definition and history of the discipline? In a nutshell, statistics began as a way to understand the workings of states (productivity, life expectancy, agricultural yields, etc.) and to make estimates of things from samples (a statistical example of the latter dates back to the 5th century BCE in Athens). As for a definition, statistics is a science unto itself that benefits all other fields and everyday life. What is unique about statistics is its proven tools for decision making in the face of uncertainty, for understanding sources of variation and bias, and, most importantly, statistical thinking.


The Future of Data Analysis in the Neurosciences

arXiv.org Machine Learning

Neuroscience is undergoing faster changes than ever before. For over 100 years, our field qualitatively described and invasively manipulated single or a few organisms to gain anatomical, physiological, and pharmacological insights. In the last 10 years, neuroscience has spawned quantitative, big-sample datasets on microanatomy, synaptic connections, optogenetic brain-behavior assays, and high-level cognition. While growing data availability and information granularity have been amply discussed, we direct attention to a routinely neglected question: how will this unprecedented data richness shape data analysis practices? Statistical reasoning is becoming more central to distilling neurobiological knowledge from healthy and pathological brain recordings. We believe that large-scale data analysis will use more models that are non-parametric, generative, mix frequentist and Bayesian aspects, and are grounded in different statistical inferences.