Bayesian hierarchical


Fusion of Gaussian Processes Predictions with Monte Carlo Sampling

Ajirak, Marzieh, Waxman, Daniel, Llorente, Fernando, Djuric, Petar M.

arXiv.org Machine Learning

In science and engineering, we often work with models designed for accurate prediction of variables of interest. Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes. In this paper, we operate within the Bayesian paradigm, relying on Gaussian processes as our models. These models generate predictive probability density functions (pdfs), and the objective is to integrate them systematically, employing both linear and log-linear pooling. We introduce novel approaches for log-linear pooling, determining input-dependent weights for the predictive pdfs of the Gaussian processes. The aggregation of the pdfs is realized through Monte Carlo sampling, drawing samples of weights from their posterior. The performance of these methods, as well as those based on linear pooling, is demonstrated using a synthetic dataset.
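
As a rough illustration of the two pooling rules (a sketch with assumed toy values, not the authors' implementation): a linear pool of Gaussian predictive pdfs is a mixture, while a log-linear pool of Gaussians is again Gaussian with additively combined precisions. The means, variances, and weights below are made up; in the paper the weights are input-dependent and drawn from their posterior by Monte Carlo.

```python
# Fusing Gaussian predictive pdfs from three GP models (toy values).
import numpy as np

mus = np.array([0.8, 1.1, 0.5])     # predictive means from 3 GPs (hypothetical)
var = np.array([0.20, 0.10, 0.40])  # predictive variances (hypothetical)
w = np.array([0.5, 0.3, 0.2])       # pooling weights: nonnegative, sum to 1

# Linear pooling: the fused pdf is a Gaussian mixture; its first two
# moments follow from the laws of total expectation and total variance.
lin_mean = np.sum(w * mus)
lin_var = np.sum(w * (var + mus**2)) - lin_mean**2

# Log-linear pooling: a weighted geometric mean of Gaussians is again
# Gaussian, with precisions (inverse variances) combining additively.
prec = np.sum(w / var)
log_mean = np.sum(w * mus / var) / prec
log_var = 1.0 / prec

print(f"linear pool:     mean={lin_mean:.3f}, var={lin_var:.3f}")
print(f"log-linear pool: mean={log_mean:.3f}, var={log_var:.3f}")
```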


Bayes-xG: Player and Position Correction on Expected Goals (xG) using Bayesian Hierarchical Approach

Scholtes, Alexander, Karakuş, Oktay

arXiv.org Artificial Intelligence

This study employs Bayesian methodologies to explore the influence of player or positional factors in predicting the probability of a shot resulting in a goal, measured by the expected goals (xG) metric. Utilising publicly available data from StatsBomb, Bayesian hierarchical logistic regressions are constructed, analysing approximately 10,000 shots from the English Premier League to ascertain whether positional or player-level effects impact xG. The findings reveal positional effects in a basic model that includes only distance to goal and shot angle as predictors, highlighting that strikers and attacking midfielders exhibit a higher likelihood of scoring. However, these effects diminish when more informative predictors are introduced. Nevertheless, even with additional predictors, player-level effects persist, indicating that certain players possess notable positive or negative xG adjustments, influencing their likelihood of scoring a given chance. The study extends its analysis to data from Spain's La Liga and Germany's Bundesliga, yielding comparable results. Additionally, the paper assesses the impact of prior distribution choices on outcomes, concluding that the priors employed in the models provide sound results but could be refined to improve sampling efficiency, making it feasible to construct more complex and extensive models.
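
A minimal sketch of the kind of hierarchical logistic regression described here, written with PyMC on synthetic data; the priors, variable names, and toy data are assumptions, not the study's actual model:

```python
# Hierarchical logistic regression for xG with partially pooled
# player effects; synthetic stand-in data, illustrative priors.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_shots, n_players = 500, 40                  # toy sizes
player = rng.integers(0, n_players, n_shots)  # player index per shot
dist = rng.uniform(5, 30, n_shots)            # distance to goal
angle = rng.uniform(0.1, 1.2, n_shots)        # shot angle
goal = rng.integers(0, 2, n_shots)            # 1 if goal (toy labels)

with pm.Model() as xg_model:
    beta0 = pm.Normal("beta0", 0.0, 2.0)
    b_dist = pm.Normal("b_dist", 0.0, 1.0)
    b_angle = pm.Normal("b_angle", 0.0, 1.0)
    # Partially pooled player effects: each player's xG adjustment is
    # drawn from a shared population distribution.
    sigma_p = pm.HalfNormal("sigma_p", 1.0)
    u = pm.Normal("u", 0.0, sigma_p, shape=n_players)
    logit_p = beta0 + b_dist * dist + b_angle * angle + u[player]
    pm.Bernoulli("goal", logit_p=logit_p, observed=goal)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```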


Bayesian hierarchical stacking: Some models are (somewhere) useful

Statistical Modeling, Causal Inference, and Social Science

Stacking is a widely used model averaging technique that asymptotically yields optimal predictions among linear averages. We show that stacking is most effective when model predictive performance is heterogeneous in inputs, and we can further improve the stacked mixture with a hierarchical model. We generalize stacking to Bayesian hierarchical stacking. The model weights vary as a function of the data, are partially pooled, and are inferred using Bayesian inference. We further incorporate discrete and continuous inputs, other structured priors, and time series and longitudinal data.
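
A schematic of input-dependent stacking weights (an assumption-laden sketch, not the post's code): each model's weight is a softmax of a linear function of the input, so different models can dominate in different regions; the Bayesian hierarchical part would place partially pooled priors on the intercepts and slopes.

```python
# Input-dependent stacking weights via a softmax of linear scores.
import numpy as np

def stacking_weights(x, alpha, beta):
    """Softmax weights over K models at scalar inputs x (returns n x K)."""
    scores = alpha + np.outer(x, beta)           # linear score per model
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)

x = np.linspace(-2, 2, 5)
alpha = np.array([0.0, 0.5, -0.5])  # per-model intercepts (hypothetical)
beta = np.array([1.0, -1.0, 0.0])   # per-model slopes (hypothetical)
w = stacking_weights(x, alpha, beta)

# Stacked prediction: a weighted average of per-model predictions p_k(x).
preds = np.column_stack([x, x**2, np.sin(x)])  # three toy model outputs
pooled = np.sum(w * preds, axis=1)
print(pooled)
```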


Tree-Guided MCMC Inference for Normalized Random Measure Mixture Models

Lee, Juho, Choi, Seungjin

Neural Information Processing Systems

Normalized random measures (NRMs) provide a broad class of discrete random measures that are often used as priors for Bayesian nonparametric models; the Dirichlet process (DP) is a well-known example of an NRM. Most posterior inference methods for NRM mixture (NRMM) models rely on MCMC, since MCMC methods are easy to implement and their convergence is well studied. However, MCMC often suffers from slow convergence when the acceptance rate is low. Tree-based inference is an alternative deterministic posterior inference method, in which Bayesian hierarchical clustering (BHC) and incremental Bayesian hierarchical clustering (IBHC) have been developed for DP and NRMM models, respectively.
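
For concreteness, the DP that the abstract cites as the canonical NRM can be sampled by truncated stick-breaking; a minimal sketch, with the truncation level, concentration, and base measure chosen arbitrarily (this is standard background, not the paper's tree-guided sampler):

```python
# Truncated stick-breaking draw from a Dirichlet process.
import numpy as np

rng = np.random.default_rng(1)
alpha, K = 2.0, 50                       # concentration, truncation level
v = rng.beta(1.0, alpha, K)              # stick-breaking proportions
pi = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))  # atom weights
atoms = rng.normal(0.0, 1.0, K)          # atom locations from a N(0,1) base measure
# pi sums to ~1; (pi, atoms) is a truncated draw of a discrete random measure.
print(pi.sum(), pi[:5])
```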


Statistical comparison of classifiers through Bayesian hierarchical modelling

Corani, Giorgio, Benavoli, Alessio, Demšar, Janez, Mangili, Francesca, Zaffalon, Marco

arXiv.org Machine Learning

Usually one compares the accuracy of two competing classifiers via null hypothesis significance tests (NHST). Yet NHST suffers from important shortcomings, which can be overcome by switching to Bayesian hypothesis testing. We propose a Bayesian hierarchical model that jointly analyzes the cross-validation results obtained by two classifiers on multiple data sets. It returns the posterior probability that the accuracies of the two classifiers are practically equivalent or significantly different. A further strength of the hierarchical model is that, by jointly analyzing the results obtained on all data sets, it reduces the estimation error compared to the usual approach of averaging the cross-validation results obtained on each data set separately.
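
The decision such a model supports can be illustrated with a region of practical equivalence (ROPE) over posterior samples of the mean accuracy difference; a toy sketch with fabricated samples (the 0.01 ROPE and the Gaussian stand-in posterior are assumptions, not necessarily the paper's setting):

```python
# Posterior mass of "equivalent" vs "A better" vs "B better".
import numpy as np

rng = np.random.default_rng(2)
delta = rng.normal(0.004, 0.01, 10_000)  # stand-in posterior samples of the difference
rope = 0.01                              # region of practical equivalence

p_equiv = np.mean(np.abs(delta) < rope)  # practically equivalent
p_a = np.mean(delta >= rope)             # classifier A practically better
p_b = np.mean(delta <= -rope)            # classifier B practically better
print(f"P(equivalent)={p_equiv:.3f}, P(A better)={p_a:.3f}, P(B better)={p_b:.3f}")
```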


Efficient hierarchical clustering for continuous data

Henao, Ricardo, Lucas, Joseph E.

arXiv.org Machine Learning

Learning hierarchical structures from observed data is a common practice in many knowledge domains. Examples include phylogenies and signaling pathways in biology, language models in linguistics, etc. Agglomerative clustering is still the most popular approach to hierarchical clustering due to its efficiency, ease of implementation and a wide range of possible distance metrics. However, because it is algorithmic in nature, there is no principled way in which agglomerative clustering can be used as a building block in more complex models. Bayesian priors for structure learning, on the other hand, are perfectly suited to be employed in larger models. As an example, several authors have proposed using hierarchical structure priors to model correlation in factor models (Rai and Daume III, 2009; Henao et al., 2012; Zhang et al., 2011).
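
For reference, the conventional agglomerative baseline the abstract contrasts with takes only a few lines of SciPy (an illustration on synthetic data, not the paper's Bayesian model):

```python
# Classical agglomerative clustering: build a dendrogram, then cut it.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 2)),   # two well-separated
               rng.normal(5, 1, (20, 2))])  # synthetic clusters

Z = linkage(X, method="average", metric="euclidean")  # merge history
labels = fcluster(Z, t=2, criterion="maxclust")       # cut into 2 clusters
print(labels)
```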


Bayesian Rose Trees

Blundell, Charles, Teh, Yee Whye, Heller, Katherine A.

arXiv.org Machine Learning

Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms.
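
A skeleton of the greedy agglomerative search described above, with a placeholder scoring function (the paper scores candidate merges by marginal likelihood and also considers non-binary "rose" joins; this sketch only performs binary merges):

```python
# Greedy agglomeration: repeatedly merge the best-scoring pair of subtrees.
import itertools

def size(t):
    """Number of leaves in a subtree represented as nested tuples."""
    return 1 if not isinstance(t, tuple) else sum(size(c) for c in t)

def merge_score(a, b):
    # Placeholder: the paper would evaluate the marginal likelihood of
    # the merged subtree here; we simply prefer merging small subtrees.
    return -(size(a) + size(b))

def greedy_tree(points):
    forest = list(points)                     # each item is a subtree
    while len(forest) > 1:
        i, j = max(itertools.combinations(range(len(forest)), 2),
                   key=lambda ij: merge_score(forest[ij[0]], forest[ij[1]]))
        merged = (forest[i], forest[j])       # binary join; rose trees
                                              # would also try wider joins
        forest = [t for k, t in enumerate(forest) if k not in (i, j)]
        forest.append(merged)
    return forest[0]

print(greedy_tree(["a", "b", "c", "d"]))
```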


Bayesian Agglomerative Clustering with Coalescents

Teh, Yee Whye, Daumé III, Hal, Roy, Daniel

arXiv.org Machine Learning

We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over others, and demonstrate our approach in document clustering and phylolinguistics.


Bayesian Agglomerative Clustering with Coalescents

Teh, Yee W., Daumé III, Hal, Roy, Daniel M.

Neural Information Processing Systems

We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over the state-of-the-art, and demonstrate our approach in document clustering and phylolinguistics.
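
A quick simulation of Kingman's coalescent as a prior over trees (the textbook generative process with its standard rates; a sketch, not the paper's greedy or sequential Monte Carlo inference):

```python
# Kingman's coalescent: with k lineages, wait Exp(k*(k-1)/2) time,
# then merge a uniformly random pair; repeat until one tree remains.
import random

rng = random.Random(4)

def kingman_tree(n):
    lineages = [(i, 0.0) for i in range(n)]    # (subtree, merge time)
    t = 0.0
    while len(lineages) > 1:
        k = len(lineages)
        t += rng.expovariate(k * (k - 1) / 2)  # coalescence waiting time
        i, j = rng.sample(range(k), 2)         # uniform random pair merges
        a, b = lineages[i], lineages[j]
        lineages = [l for m, l in enumerate(lineages) if m not in (i, j)]
        lineages.append(((a[0], b[0]), t))
    return lineages[0]

tree, depth = kingman_tree(5)
print(tree, depth)
```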