generating function


Moment Generating Function for Probability Distribution with Python

#artificialintelligence

This tutorial's code is available on Github, and its full implementation is also on Google Colab. We generally use moments in statistics, machine learning, mathematics, and other fields to describe the characteristics of a distribution. If the variable of interest is X, then its moments are the expected values of powers of X. We are all familiar with the first moment (the mean) and the second central moment (the variance).
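To make this concrete, here is a minimal sketch (not the tutorial's actual Github/Colab code) of how moments fall out of the moment generating function M_X(t) = E[e^{tX}]: its n-th derivative evaluated at t = 0 is the n-th moment E[X^n]. The standard normal, whose MGF is exp(t^2/2), is used as an illustrative choice.

```python
# Minimal sketch: moments from the moment generating function.
# The n-th derivative of M_X(t) = E[e^{tX}] at t = 0 equals E[X^n].
import sympy as sp

t = sp.symbols('t')
mgf = sp.exp(t**2 / 2)  # MGF of a standard normal random variable

# n-th moment = d^n/dt^n M(t) evaluated at t = 0
moments = [sp.diff(mgf, t, n).subs(t, 0) for n in range(1, 5)]
print(moments)  # [0, 1, 0, 3] -> mean 0, variance 1, E[X^3] = 0, E[X^4] = 3
```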


Exact marginal inference in Latent Dirichlet Allocation

arXiv.org Machine Learning

Assume we have potential "causes" $z\in Z$, which produce "events" $w$ with known probabilities $\beta(w|z)$. Having observed $w_1, w_2, \ldots, w_n$, what can we say about the distribution of the causes? A Bayesian estimate will assume a prior on distributions on $Z$ (we assume a Dirichlet prior) and calculate a posterior. An average over that posterior then gives a distribution on $Z$, which estimates how much each cause $z$ contributed to our observations. This is the setting of Latent Dirichlet Allocation, which can be applied, e.g., to topics "producing" words in a document. In this setting the number of observed words is usually large, but the number of potential topics is small. We are here interested in applications with many potential "causes" (e.g. locations on the globe), but only a few observations. We show that the exact Bayesian estimate can be computed in linear time (and constant space) in $|Z|$ for a given upper bound on $n$ with a surprisingly simple formula. We generalize this algorithm to the case of sparse probabilities $\beta(w|z)$, in which we only need to assume that the tree width of an "interaction graph" on the observations is limited. On the other hand, we also show that without such a limitation the problem is NP-hard.
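As a concrete picture of the setting the abstract describes, here is an illustrative sketch of the generative model only (not the paper's linear-time inference algorithm); all sizes and the Dirichlet concentration alpha are arbitrary choices.

```python
# Sketch of the generative setting: a Dirichlet prior over causes Z,
# known emission probabilities beta(w|z), and a few observed events w.
import numpy as np

rng = np.random.default_rng(0)
num_causes, num_events, n_obs, alpha = 5, 8, 3, 0.5  # illustrative sizes

beta = rng.dirichlet(np.ones(num_events), size=num_causes)  # beta[z, w] = P(w | z)

theta = rng.dirichlet(alpha * np.ones(num_causes))  # distribution over causes
z = rng.choice(num_causes, size=n_obs, p=theta)     # latent cause per observation
w = np.array([rng.choice(num_events, p=beta[zi]) for zi in z])  # observed events
print("observations:", w)
```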


Sharp Concentration Results for Heavy-Tailed Distributions

arXiv.org Machine Learning

The concentration of measure inequalities have received substantial attention in high-dimensional statistics and machine learning [1]. While concentration inequalities are well understood for sub-Gaussian and sub-exponential random variables, in many application areas, such as signal processing [2] and machine learning [3], we need concentration results for sums of random variables with heavier tails. The standard technique, i.e. finding upper bounds for the moment generating function (MGF), clearly fails for heavy-tailed distributions whose moment generating functions do not exist. Furthermore, other techniques, such as Chebyshev's inequality, are incapable of obtaining sharp results. The goal of this paper is to show that, under quite general conditions on the tail, a simple truncation argument not only lets us apply the standard MGF technique to heavy-tailed random variables, but also yields sharp concentration results.
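A small numeric illustration of the truncation idea (the Pareto shape, truncation level M, and argument t below are arbitrary choices, not the paper's conditions): a classical Pareto variable has no MGF for any t > 0, but the truncated variable min(X, M) does, so the standard MGF machinery applies to it.

```python
# Heavy-tailed samples have no MGF, but their truncation does.
import numpy as np

rng = np.random.default_rng(0)
x = rng.pareto(2.5, size=100_000) + 1.0  # classical Pareto samples, P(X > x) ~ x^{-2.5}
M = 10.0                                  # truncation level
x_trunc = np.minimum(x, M)

t = 0.5
print("empirical MGF of truncated variable:", np.mean(np.exp(t * x_trunc)))
# np.mean(np.exp(t * x)) is unstable and grows without bound as the sample
# size increases, since E[e^{tX}] = infinity for the untruncated Pareto.
```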


On Biased Random Walks, Corrupted Intervals, and Learning Under Adversarial Design

arXiv.org Machine Learning

We tackle some fundamental problems in probability theory on corrupted random processes on the integer line. We analyze when a biased random walk is expected to reach its bottommost point and when intervals of integer points can be detected under a natural model of noise. We apply these results to problems in learning thresholds and intervals under a new model for learning under adversarial design.
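For intuition, a minimal simulation sketch (parameters are illustrative; this is not the paper's analysis): a walk on the integer line with upward bias drifts away from its minimum, so its bottommost point is attained after finitely many steps.

```python
# Simulate a biased random walk and record when it attains its minimum.
import numpy as np

rng = np.random.default_rng(0)
p_up, steps = 0.6, 1_000  # probability of a +1 step; bias is upward

increments = rng.choice([1, -1], size=steps, p=[p_up, 1 - p_up])
walk = np.concatenate([[0], np.cumsum(increments)])
print("minimum:", walk.min(), "first reached at step:", int(walk.argmin()))
```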


Probabilistic Inference with Generating Functions for Poisson Latent Variable Models

Neural Information Processing Systems

Graphical models with latent count variables arise in a number of fields. Standard exact inference techniques such as variable elimination and belief propagation do not apply to these models because the latent variables have countably infinite support. As a result, approximations such as truncation or MCMC are employed. We present the first exact inference algorithms for a class of models with latent count variables by developing a novel representation of countably infinite factors as probability generating functions, and then performing variable elimination with generating functions. Our approach is exact, runs in pseudo-polynomial time, and is much faster than existing approximate techniques.
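The representation the abstract refers to can be illustrated with standard facts about Poisson variables (this sketch is not the paper's variable-elimination algorithm): a countably infinite Poisson factor is encoded compactly by its probability generating function G(s) = E[s^X] = exp(lam * (s - 1)), from which probabilities are recovered by differentiating at s = 0.

```python
# A Poisson factor with countably infinite support, represented finitely
# by its probability generating function; P(X = k) = G^(k)(0) / k!.
import sympy as sp

s = sp.symbols('s')
lam = sp.Rational(3, 2)        # illustrative rate
G = sp.exp(lam * (s - 1))      # PGF of Poisson(lam)

probs = [sp.diff(G, s, k).subs(s, 0) / sp.factorial(k) for k in range(4)]
print(probs)  # matches exp(-lam) * lam**k / k! for k = 0..3
```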


Machine Learning Necessary for Deep Learning

#artificialintelligence

An agreed-upon definition of machine learning is: a computer program is said to have learned when its performance measure P at a task T improves with experience E. Under the definition of Supervised Learning, we get this diagram. Here the experience would be the training data required to improve the algorithm. In practice we put this data into the Design Matrix. Design Matrix [dəˈzīn ˈmātriks]: term -- if a single input can be represented as a vector, stacking all of the training examples, i.e. the vectors, into one matrix gives the entire input side of the training data. This is not all of the experience.
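A minimal sketch of the design matrix idea (the numbers are illustrative): each training example is a feature vector, and stacking the examples row-wise gives one matrix holding the input side of the experience.

```python
# Build a design matrix: one row per training example, one column per feature.
import numpy as np

x1 = np.array([5.1, 3.5, 1.4])   # one training example as a feature vector
x2 = np.array([4.9, 3.0, 1.3])
x3 = np.array([6.2, 3.4, 5.4])

X = np.stack([x1, x2, x3])       # design matrix
print(X.shape)                   # (3, 3) -> 3 examples, 3 features each
```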


Making Sense of Reinforcement Learning and Probabilistic Inference

arXiv.org Artificial Intelligence

Reinforcement learning (RL) combines a control problem with statistical estimation: the system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts 'RL as inference' and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: the exploration-exploitation tradeoff. In all but the simplest settings, the resulting inference is computationally intractable, so practical RL algorithms must resort to approximation. We demonstrate that the popular 'RL as inference' approximation can perform poorly in even very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling.
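For reference, a minimal sketch of Thompson sampling, which the abstract connects to K-learning, on a Bernoulli bandit (the arm probabilities and horizon are illustrative; this is not the paper's K-learning algorithm): keep a Beta posterior per arm, sample once from each posterior, and play the arm with the largest sample.

```python
# Thompson sampling for a 3-armed Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])     # unknown to the agent
alpha = np.ones(3)                     # Beta(1, 1) prior per arm
beta = np.ones(3)

for _ in range(2_000):
    sampled = rng.beta(alpha, beta)    # one posterior sample per arm
    arm = int(np.argmax(sampled))      # act greedily w.r.t. the sample
    reward = rng.random() < true_p[arm]
    alpha[arm] += reward               # Bayesian update of the posterior
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))  # concentrate near true_p
```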


Leveraging Reinforcement Learning Techniques for Effective Policy Adoption and Validation

arXiv.org Artificial Intelligence

Rewards and punishments in different forms are pervasive and present in a wide variety of decision-making scenarios. By observing the outcome of a sufficient number of repeated trials, one would gradually learn the value and usefulness of a particular policy or strategy. However, in a given environment, the outcomes resulting from different trials are subject to chance influence and variations. In learning about the usefulness of a given policy, significant costs are involved in systematically undertaking the sequential trials; therefore, in most learning episodes, one would wish to keep the cost within bounds by adopting learning stopping rules. In this paper, we examine the deployment of different stopping strategies in given learning environments, which vary from highly stringent for mission-critical operations to highly tolerant for non-mission-critical operations; emphasis is placed on the former, with particular application to aviation safety. In policy evaluation, two sequential phases of learning are identified, and we describe the variations in outcomes using a probabilistic model, with closed-form expressions obtained for the key measures of performance. Decision rules that map the trial observations to policy choices are also formulated. In addition, simulation experiments are performed, which corroborate the validity of the theoretical results.
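As a toy illustration only (this is not the paper's probabilistic model or its closed-form expressions): a stringent stopping rule of the kind suited to mission-critical settings might accept a policy only after k consecutive successful trials and stop learning at the first failure. All names and parameters below are hypothetical.

```python
# Hypothetical stringent stopping rule: accept after k consecutive
# successes, reject at the first failure or when the budget runs out.
import numpy as np

rng = np.random.default_rng(0)

def evaluate_policy(success_prob, k, max_trials):
    """Return ('accept' | 'reject', trials used)."""
    streak = 0
    for trial in range(1, max_trials + 1):
        if rng.random() < success_prob:   # one trial of the policy
            streak += 1
            if streak == k:
                return "accept", trial
        else:
            return "reject", trial        # stringent: any failure stops learning
    return "reject", max_trials

print(evaluate_policy(success_prob=0.95, k=20, max_trials=100))
```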


Maximum Likelihood Estimation for Learning Populations of Parameters

arXiv.org Machine Learning

Consider a setting with $N$ independent individuals, each with an unknown parameter, $p_i \in [0, 1]$ drawn from some unknown distribution $P^\star$. After observing the outcomes of $t$ independent Bernoulli trials, i.e., $X_i \sim \text{Binomial}(t, p_i)$ per individual, our objective is to accurately estimate $P^\star$. This problem arises in numerous domains, including the social sciences, psychology, healthcare, and biology, where the size of the population under study is usually large while the number of observations per individual is often limited. Our main result shows that, in the regime where $t \ll N$, the maximum likelihood estimator (MLE) is both statistically minimax optimal and efficiently computable. Precisely, for sufficiently large $N$, the MLE achieves the information-theoretic optimal error bound of $\mathcal{O}(\frac{1}{t})$ for $t < c\log{N}$, with regard to the earth mover's distance (between the estimated and true distributions). More generally, in an exponentially large interval of $t$ beyond $c \log{N}$, the MLE achieves the minimax error bound of $\mathcal{O}(\frac{1}{\sqrt{t\log N}})$. In contrast, regardless of how large $N$ is, the naive "plug-in" estimator for this problem only achieves the sub-optimal error of $\Theta(\frac{1}{\sqrt{t}})$.
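A minimal sketch of the setting and of the naive plug-in estimator the abstract contrasts against (the MLE itself is the paper's contribution and is not reproduced here; the choice of $P^\star$ is illustrative): each of the $N$ individuals contributes $X_i/t$, and the plug-in estimate is simply the empirical distribution of these values.

```python
# Setting: p_i ~ P*, observe X_i ~ Binomial(t, p_i); the plug-in estimator
# takes X_i / t, and we measure its earth mover's distance from the truth.
import numpy as np
from scipy.stats import wasserstein_distance  # earth mover's distance on R

rng = np.random.default_rng(0)
N, t = 10_000, 10
p = rng.beta(2, 5, size=N)       # illustrative choice of P*
X = rng.binomial(t, p)           # t Bernoulli trials per individual

plug_in = X / t                  # naive estimate of each p_i
print("EMD(plug-in, true):", wasserstein_distance(plug_in, p))
```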


Generalized Bregman and Jensen divergences which include some f-divergences

arXiv.org Machine Learning

In this paper, we introduce new classes of divergences by extending the definitions of the Bregman divergence and the skew Jensen divergence. These new divergence classes (the g-Bregman divergence and the skew g-Jensen divergence) satisfy some properties similar to those of the Bregman or skew Jensen divergence. We show that these g-divergences include divergences belonging to the class of f-divergences (the Hellinger distance, the chi-square divergence, and the alpha-divergence, in addition to the Kullback-Leibler divergence). Moreover, we derive an inequality between the g-Bregman divergence and the skew g-Jensen divergence and show that this inequality is a generalization of Lin's inequality.
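For orientation, a minimal sketch of the classical Bregman divergence that the paper generalizes: D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>; choosing F as the negative entropy recovers the Kullback-Leibler divergence on probability vectors.

```python
# Bregman divergence with F = negative entropy equals KL(p || q)
# on probability vectors.
import numpy as np

def bregman_kl(p, q):
    F = lambda x: np.sum(x * np.log(x))   # negative entropy
    grad_F = lambda x: np.log(x) + 1.0    # its gradient
    return F(p) - F(q) - grad_F(q) @ (p - q)

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
print(bregman_kl(p, q), np.sum(p * np.log(p / q)))  # both equal KL(p || q)
```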