Maximum Entropy
Can we use the gradient descent method to train a maximum entropy model?
I see a lot of implementations that use GIS or IIS to train the maximum entropy model. Can we use the gradient descent method instead? If we can, why do most tutorials go straight to the GIS or IIS methods and never show the simpler gradient descent approach for training a maximum entropy model? As we know, softmax regression is equivalent to the maxent model, but I have never heard of GIS or IIS being used for softmax. Is there toy code that uses plain gradient descent to train a maxent model?
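Yes, gradient-based training works: the maxent log-likelihood is concave, and with per-class features the model is exactly softmax regression, so plain gradient descent on the negative log-likelihood is a valid trainer (GIS/IIS are older special-purpose alternatives). Below is a minimal toy sketch in NumPy; the data, dimensions, and learning rate are made up purely for illustration.

import numpy as np

# Toy multiclass maximum entropy model (equivalent to softmax regression),
# trained by plain batch gradient descent on the negative log-likelihood.
# Random data, for illustration only.
rng = np.random.default_rng(0)
n, d, k = 200, 5, 3                     # samples, features, classes
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)

W = np.zeros((d, k))                    # one weight vector per class
lr, n_iters = 0.1, 500

for _ in range(n_iters):
    scores = X @ W                                  # unnormalized log-probabilities
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)               # model distribution p(y|x)

    # Gradient of the NLL = expected features under the model minus empirical features
    Y = np.eye(k)[y]                                # one-hot empirical labels
    grad = X.T @ (P - Y) / n
    W -= lr * grad

pred = (X @ W).argmax(axis=1)
print("training accuracy:", (pred == y).mean())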
Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods
The well-known maximum-entropy principle due to Jaynes, which states that given mean parameters, the maximum entropy distribution matching them is in an exponential family, has been very popular in machine learning due to its "Occam's razor" interpretation. Unfortunately, calculating the potentials in the maximum-entropy distribution is intractable \cite{bresler2014hardness}. We provide computationally efficient versions of this principle when the mean parameters are pairwise moments: we design distributions that approximately match given pairwise moments, while having entropy which is comparable to the maximum entropy distribution matching those moments. We additionally provide surprising applications of the approximate maximum entropy principle to designing provable variational methods for partition function calculations for Ising models without any assumptions on the potentials of the model. More precisely, we show that at every temperature, we can get approximation guarantees for the log-partition function comparable to those in the low-temperature limit, which is the setting of optimization of quadratic forms over the hypercube. \cite{alon2006approximating}
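For concreteness, the principle in question can be stated as follows (standard textbook form, not quoted from the paper): over $x \in \{-1,+1\}^n$ with pairwise moment constraints, the entropy maximizer is an Ising model,

\[
\max_{P} H(P) \ \ \text{s.t.}\ \ \mathbb{E}_{P}[x_i x_j] = \mu_{ij}
\ \ \Longrightarrow\ \
P(x) \propto \exp\Big(\sum_{i<j} J_{ij}\, x_i x_j\Big),
\]

where the potentials $J_{ij}$ are the Lagrange multipliers of the moment constraints; computing them exactly is the intractable step the paper works around.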
Propositional Probabilistic Reasoning at Maximum Entropy Modulo Theories
Wilhelm, Marco (Technische Universität Dortmund) | Kern-Isberner, Gabriele (Technische Universität Dortmund) | Ecke, Andreas (Technische Universität Dortmund)
The principle of maximum entropy (MaxEnt principle) provides a valuable methodology for reasoning with probabilistic conditional knowledge bases realizing an idea of information economy in the sense of adding a minimal amount of assumed information. The conditional structure of such a knowledge base allows for classifying possible worlds regarding their influence on the MaxEnt distribution. In this paper, we present an algorithm that determines these equivalence classes and computes their cardinality by performing satisfiability tests of propositional formulas built upon the premises and conclusions of the conditionals. An example illustrates how the output of our algorithm can be used to simplify calculations when drawing nonmonotonic inferences under maximum entropy. For this, we use a characterization of the MaxEnt distribution in terms of conditional structure that completely abstracts from the propositional logic underlying the conditionals.
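In the usual formulation (sketched here in generic notation rather than the paper's), a knowledge base consists of probabilistic conditionals $(B_i \mid A_i)[p_i]$, and the MaxEnt distribution is

\[
P^{\ast} \;=\; \arg\max_{P}\; -\sum_{\omega} P(\omega)\log P(\omega)
\quad \text{s.t.} \quad P(A_i \wedge B_i) \;=\; p_i \, P(A_i), \qquad i = 1,\dots,n.
\]

The equivalence classes referred to above group possible worlds $\omega$ that verify and falsify the same conditionals; such worlds receive identical probability under $P^{\ast}$, which is why counting the classes, rather than enumerating all worlds, suffices for inference.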
Regression, Logistic Regression and Maximum Entropy
One of the most important tasks in Machine Learning is classification. Classification is used to make an accurate prediction of the class of entries in the test set (a dataset whose entries have not been labelled yet) with the model that was constructed from a training set. You could think of classifying crime in the field of Pre-Policing, classifying patients in the Health sector, or classifying houses in the Real-Estate sector. Another field in which classification is big is Natural Language Processing (NLP). This is the field of science whose goal is to make machines (computers) understand (written) human language.
Confidence-Constrained Maximum Entropy Framework for Learning from Multi-Instance Data
Behmardi, Behrouz, Briggs, Forrest, Fern, Xiaoli Z., Raich, Raviv
Multi-instance data, in which each object (bag) contains a collection of instances, are widespread in machine learning, computer vision, bioinformatics, signal processing, and social sciences. We present a maximum entropy (ME) framework for learning from multi-instance data. In this approach each bag is represented as a distribution using the principle of ME. We introduce the concept of confidence-constrained ME (CME) to simultaneously learn the structure of distribution space and infer each distribution. The shared structure underlying each density is used to learn from instances inside each bag. The proposed CME is free of tuning parameters. We devise a fast optimization algorithm capable of handling large scale multi-instance data. In the experimental section, we evaluate the performance of the proposed approach in terms of exact rank recovery in the space of distributions and compare it with the regularized ME approach. Moreover, we compare the performance of CME with Multi-Instance Learning (MIL) state-of-the-art algorithms and show a comparable performance in terms of accuracy with reduced computational complexity.
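As a hedged sketch of the bag representation (generic notation, assumptions mine): a bag $b$ with instances $x_1,\dots,x_m$ and feature map $\phi$ is summarized by its empirical feature averages, and its ME distribution is the exponential family member matching them,

\[
p_b \;=\; \arg\max_{p} H(p) \ \ \text{s.t.}\ \ \mathbb{E}_{p}[\phi(x)] \;=\; \frac{1}{m}\sum_{j=1}^{m}\phi(x_j),
\qquad\text{so}\qquad
p_b(x) \;\propto\; \exp\big(\lambda_b^{\top}\phi(x)\big).
\]

Each bag is thus encoded by a parameter vector $\lambda_b$, and the shared (e.g., low-rank) structure across these vectors is what the confidence constraint is meant to recover.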
Maximum Entropy Kernels for System Identification
Carli, Francesca Paola, Chen, Tianshi, Ljung, Lennart
A new nonparametric approach for system identification has been recently proposed where the impulse response is modeled as the realization of a zero-mean Gaussian process whose covariance (kernel) has to be estimated from data. In this scheme, quality of the estimates crucially depends on the parametrization of the covariance of the Gaussian process. A family of kernels that have been shown to be particularly effective in the system identification framework is the family of Diagonal/Correlated (DC) kernels. Maximum entropy properties of a related family of kernels, the Tuned/Correlated (TC) kernels, have been recently pointed out in the literature. In this paper we show that maximum entropy properties indeed extend to the whole family of DC kernels. The maximum entropy interpretation can be exploited in conjunction with results on matrix completion problems in the graphical models literature to shed light on the structure of the DC kernel. In particular, we prove that the DC kernel admits a closed-form factorization, inverse and determinant. These results can be exploited both to improve the numerical stability and to reduce the computational complexity associated with the computation of the DC estimator.
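For reference, these kernels are commonly written as follows (standard forms from the system identification literature, restated here rather than quoted from the paper): for lags $i,j \ge 1$,

\[
k_{\mathrm{DC}}(i,j) \;=\; c\,\lambda^{(i+j)/2}\rho^{\,|i-j|},
\qquad
k_{\mathrm{TC}}(i,j) \;=\; c\,\lambda^{\max(i,j)},
\qquad c \ge 0,\ 0 \le \lambda < 1,\ |\rho| \le 1,
\]

and the TC kernel is the special case of DC obtained by setting $\rho = \sqrt{\lambda}$, which is why maximum entropy results for the TC family are a natural starting point for the DC family.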
Unification of field theory and maximum entropy methods for learning probability densities
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory is thus seen to provide a natural test of the validity of the maximum entropy null hypothesis. Bayesian field theory also returns a lower entropy density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided. Based on these results, I argue that Bayesian field theory is poised to provide a definitive solution to the density estimation problem in one dimension.
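A rough sketch of the two objects being unified (generic notation, not the paper's): a maximum entropy density estimate constrained to match the first $K$ sample moments has the form

\[
Q_{\mathrm{ME}}(x) \;\propto\; \exp\Big(-\sum_{k=1}^{K}\lambda_k\, x^{k}\Big),
\]

whereas Bayesian field theory places a smoothness prior on the field $\phi(x) = -\log Q(x)$ and reports the posterior-mode density. The unification claimed above is that sending the assumed smoothness scale to infinity collapses the field theory estimate onto $Q_{\mathrm{ME}}$.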
On the Maximum Entropy Property of the First-Order Stable Spline Kernel and its Implications
A new nonparametric approach for system identification has been recently proposed where the impulse response is seen as the realization of a zero-mean Gaussian process whose covariance, the so-called stable spline kernel, guarantees that the impulse response is almost surely stable. Maximum entropy properties of the stable spline kernel have been pointed out in the literature. In this paper we provide an independent proof that relies on the theory of matrix extension problems in the graphical model literature and leads to a closed form expression for the inverse of the first order stable spline kernel as well as to a new factorization in the form $UWU^\top$ with $U$ upper triangular and $W$ diagonal. Interestingly, all first-order stable spline kernels share the same factor $U$ and $W$ admits a closed form representation in terms of the kernel hyperparameter, making the factorization computationally inexpensive. Maximum likelihood properties of the stable spline kernel are also highlighted. These results can be applied both to improve the stability and to reduce the computational complexity associated with the computation of stable spline estimators.
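For context, the first-order stable spline kernel (equivalently, the TC kernel) is usually written, for a hyperparameter $0 < \alpha < 1$, as

\[
K(i,j) \;=\; \alpha^{\max(i,j)}, \qquad i,j = 1,\dots,n.
\]

One choice consistent with the abstract's description of the factorization $K = U W U^\top$ (sketched here, conventions may differ from the paper) takes $U$ to be the upper triangular matrix of all ones, hence the same for every $\alpha$, and $W$ diagonal with $w_k = \alpha^{k}(1-\alpha)$ for $k < n$ and $w_n = \alpha^{n}$; one can check that the tail sums $\sum_{k \ge \max(i,j)} w_k$ reproduce $K(i,j)$, and this structure is what makes the inverse banded and the determinant cheap to evaluate.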
A Novel Methodology for Processing Probabilistic Knowledge Bases Under Maximum Entropy
Kern-Isberner, Gabriele (TU Dortmund University) | Wilhelm, Marco (TU Dortmund University) | Beierle, Christoph (University of Hagen)
Probabilistic reasoning under the so-called principle of maximum entropy is a viable and convenient alternative to Bayesian networks, relieving the user from providing complete (local) probabilistic information and observing rigorous conditional independence assumptions. In this paper, we present a novel approach to performing computational MaxEnt reasoning that makes use of symbolic computations instead of graph-based techniques. Given a probabilistic knowledge base, we encode the MaxEnt optimization problem into a system of polynomial equations, and then apply Gröbner basis theory to find MaxEnt inferences as solutions to the polynomials. We illustrate our approach with an example of a knowledge base that represents findings on fraud detection in enterprises.
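The symbolic machinery involved can be illustrated with a tiny, purely hypothetical polynomial system solved via Gröbner bases in SymPy. This is not the paper's encoding of a MaxEnt problem, just the kind of computation such an approach relies on.

from sympy import symbols, groebner, solve

# Hypothetical polynomial system standing in for MaxEnt stationarity conditions:
# two unknowns x, y coupled by polynomial constraints (illustrative only).
x, y = symbols('x y')
polys = [x**2 + y**2 - 1,      # a normalization-like constraint
         x - y**2]             # a coupling between the unknowns

# A Groebner basis with lexicographic order triangularizes the system
# (x is eliminated first), so solutions can be read off by back-substitution.
G = groebner(polys, x, y, order='lex')
print(G)

# Solving the original system gives the same solution set as the basis G.
print(solve(polys, [x, y], dict=True))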
Implementation of a Transformation System for Relational Probabilistic Knowledge Bases Simplifying the Maximum Entropy Model Computation
Beierle, Christoph (University of Hagen) | Höhnerbach, Markus (University of Hagen) | Marto, Marcus (University of Hagen)
The maximum entropy (ME) model of a knowledge base R consisting of relational probabilistic conditionals can be defined referring to the set of all ground instances of the conditionals. The logic FO-PCL employs the notion of parametric uniformity for avoiding the full grounding of R. We present an implementation of a rule system transforming R into a knowledge base that is parametrically uniform and has the same ME model, simplifying the ME model computation. The implementation provides different execution and evaluation modes, including the generation of all possible solutions.