Fisher kernel



W-kernel and essential subspace for frequencist's evaluation of Bayesian estimators

Iba, Yukito

arXiv.org Machine Learning

The posterior covariance matrix W defined by the log-likelihood of each observation plays important roles both in sensitivity analysis and in the frequencist's evaluation of Bayesian estimators. This study focuses on the matrix W and its principal space, which we term the essential subspace. First, we show that they appear in various statistical settings, such as the evaluation of posterior sensitivity, assessment of frequencist's uncertainty from posterior samples, and stochastic expansion of the loss; a key tool for treating frequencist's properties is the recently proposed Bayesian infinitesimal jackknife approximation (Giordano and Broderick (2023)). We then show that the matrix W can be interpreted as a reproducing kernel, which we call the W-kernel. Using the W-kernel, the essential subspace is expressed as the principal space given by kernel PCA. A relation to the Fisher kernel and the neural tangent kernel is established, which elucidates the connection to classical asymptotic theory and leads to a form of Bayesian-frequencist duality. Finally, two applications are discussed: selection of a representative set of observations, and dimensionality reduction in the approximate bootstrap. In the former, incomplete Cholesky decomposition is introduced as an efficient method for computing the essential subspace. In the latter, different implementations of the approximate bootstrap for posterior means are compared.
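To make the construction concrete, here is a minimal sketch (all data and variable names are hypothetical, not from the paper) of forming a W-kernel-style matrix from per-observation log-likelihoods evaluated at posterior draws, then extracting its principal space via an eigendecomposition, as in kernel PCA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: S posterior draws, n observations.
# ll[s, i] = log-likelihood of observation i under posterior draw s.
S, n = 500, 8
ll = rng.normal(size=(S, n)) @ rng.normal(size=(n, n)) * 0.1  # correlated toy values

# W-kernel matrix: posterior covariance of the per-observation log-likelihoods.
centered = ll - ll.mean(axis=0)
W = centered.T @ centered / S  # n x n, symmetric positive semi-definite

# Essential subspace: principal space of W, as in kernel PCA.
eigvals, eigvecs = np.linalg.eigh(W)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the leading components explaining, say, 99% of the trace.
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.99)) + 1
```

The incomplete Cholesky route mentioned in the abstract computes this subspace without a full eigendecomposition; the dense version above only shows what is being approximated.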


A New Discriminative Kernel From Probabilistic Models

Neural Information Processing Systems

Recently, Jaakkola and Haussler proposed a method for constructing kernel functions from probabilistic models. Their so-called "Fisher kernel" has been combined with discriminative classifiers such as the SVM and applied successfully in several domains. Whereas the Fisher kernel (FK) is calculated from the marginal log-likelihood, we propose the TOP kernel derived from Tangent vectors Of Posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it to analyze FK and TOP. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
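For intuition, the Fisher kernel evaluates the gradient of the model's log-likelihood at a fitted parameter and takes an inner product weighted by the inverse Fisher information. A minimal sketch for a hypothetical toy model, a univariate Gaussian N(mu, 1) (the papers use far richer models, e.g. HMMs), where the score is x - mu and the Fisher information is 1:

```python
# Fisher kernel for the toy model p(x | mu) = N(x; mu, 1):
# score U_x = d/dmu log p(x | mu) = x - mu, Fisher information I = 1,
# so K(x, x') = U_x * I^{-1} * U_{x'}.

def fisher_score(x, mu):
    return x - mu

def fisher_kernel(x, xp, mu, fisher_info=1.0):
    return fisher_score(x, mu) * (1.0 / fisher_info) * fisher_score(xp, mu)

mu_hat = 0.5
K = fisher_kernel(2.0, -1.0, mu_hat)  # (2.0 - 0.5) * (-1.0 - 0.5) = -2.25
```

The TOP kernel replaces the marginal log-likelihood score with tangent vectors of the posterior log-odds, but the inner-product structure is the same.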


String Kernels, Fisher Kernels and Finite State Automata

Neural Information Processing Systems

In this paper we show how the generation of documents can be thought of as a k-stage Markov process, which leads to a Fisher kernel from which the n-gram and string kernels can be re-constructed. The Fisher kernel view gives a more flexible insight into the string kernel and suggests how it can be parametrised in a way that reflects the statistics of the training corpus. Furthermore, the probabilistic modelling approach suggests extending the Markov process to consider sub-sequences of varying length, rather than the standard fixed-length approach used in the string kernel. We give a procedure for determining which sub-sequences are informative features and hence generate a Finite State Machine model, which can again be used to obtain a Fisher kernel. By adjusting the parametrisation we can also influence the weighting received by the features. In this way we are able to obtain a logarithmic weighting in a Fisher kernel.
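The n-gram (spectrum) kernel that the Fisher-kernel view recovers is just an inner product of sub-sequence counts. A small count-based sketch (the derivation from the k-stage Markov model is in the paper; this only shows the resulting kernel):

```python
from collections import Counter

def ngram_counts(s, n):
    """Count the contiguous sub-sequences of length n in string s."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def spectrum_kernel(s, t, n):
    """K(s, t) = sum over n-grams g of count_s(g) * count_t(g)."""
    cs, ct = ngram_counts(s, n), ngram_counts(t, n)
    return sum(cs[g] * ct[g] for g in cs)

K = spectrum_kernel("abab", "babab", 2)  # "ab": 2*2 + "ba": 1*2 = 6
```

Reweighting the counts (e.g. logarithmically, as in the abstract) amounts to changing the parametrisation of the underlying Markov model.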


A Kullback-Leibler Divergence Based Kernel for SVM Classification in Multimedia Applications

Neural Information Processing Systems

Over the last years significant efforts have been made to develop kernels that can be applied to sequence data such as DNA, text, speech, video and images. The Fisher kernel and similar variants have been suggested as good ways to combine an underlying generative model in the feature space with discriminant classifiers such as SVMs. In this paper we suggest an alternative procedure to the Fisher kernel for systematically finding kernel functions that naturally handle variable-length sequence data in multimedia domains. In particular, for domains such as speech and images we explore the use of kernel functions that take full advantage of well-known probabilistic models such as Gaussian mixtures and single full-covariance Gaussian models. We derive a kernel distance based on the Kullback-Leibler (KL) divergence between generative models.
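A common way to turn such a divergence into a kernel is to exponentiate a symmetrised KL. A univariate-Gaussian sketch (a hypothetical simplification; the paper works with Gaussian mixtures and full-covariance Gaussians):

```python
import math

def kl_gauss(m0, s0, m1, s1):
    """KL(N(m0, s0^2) || N(m1, s1^2)) for univariate Gaussians."""
    return math.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def kl_kernel(m0, s0, m1, s1, a=1.0):
    """Exponentiated symmetric KL: K = exp(-a * (KL(p||q) + KL(q||p)))."""
    sym = kl_gauss(m0, s0, m1, s1) + kl_gauss(m1, s1, m0, s0)
    return math.exp(-a * sym)

k_same = kl_kernel(0.0, 1.0, 0.0, 1.0)  # identical models -> kernel = 1
k_far = kl_kernel(0.0, 1.0, 5.0, 1.0)   # distant models -> kernel near 0
```

Each variable-length sequence is first summarised by a fitted generative model, so the kernel compares models rather than raw sequences.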


Model-agnostic out-of-distribution detection using combined statistical tests

Bergamin, Federico, Mattei, Pierre-Alexandre, Havtorn, Jakob D., Senetaire, Hugo, Schmutz, Hugo, Maaløe, Lars, Hauberg, Søren, Frellsen, Jes

arXiv.org Machine Learning

We present simple methods for out-of-distribution detection using a trained generative model. These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model. The idea is to combine a classical parametric test (Rao's score test) with the recently introduced typicality test. These two test statistics are both theoretically well-founded and exploit different sources of information: the likelihood for the typicality test, and its gradient for the score test. We show that combining them using Fisher's method leads to an overall more accurate out-of-distribution test. We also discuss the benefits of casting out-of-distribution detection as a statistical testing problem, noting in particular that false positive rate control can be valuable for practical out-of-distribution detection. Despite their simplicity and generality, these methods can be competitive with model-specific out-of-distribution detection algorithms without any assumptions on the out-distribution.
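Fisher's method, used here to combine the score-test and typicality-test p-values, reduces to a chi-squared tail computation. A self-contained sketch (the p-values below are illustrative; for even degrees of freedom 2k the chi-squared survival function has a closed form):

```python
import math

def fishers_method(pvalues):
    """Combine independent p-values: -2 * sum(log p_i) ~ chi^2 with 2k dof under H0."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    k = len(pvalues)
    # chi^2 survival function for even dof 2k: exp(-x/2) * sum_{j<k} (x/2)^j / j!
    half = stat / 2.0
    sf = math.exp(-half) * sum(half**j / math.factorial(j) for j in range(k))
    return stat, sf

# e.g. score test p = 0.04, typicality test p = 0.10 (illustrative values)
stat, p_combined = fishers_method([0.04, 0.10])
```

Note that combining a marginally significant and a non-significant p-value can still yield a significant combined test, which is the point of pooling the two information sources.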


Discriminative Viewer Identification using Generative Models of Eye Gaze

Makowski, Silvia, Jäger, Lena A., Schwetlick, Lisa, Trukenbrod, Hans, Engbert, Ralf, Scheffer, Tobias

arXiv.org Machine Learning

We study the problem of identifying viewers of arbitrary images based on their eye gaze. Psychological research has derived generative stochastic models of eye movements. In order to exploit this background knowledge within a discriminatively trained classification model, we derive Fisher kernels from different generative models of eye gaze. Experimentally, we find that the performance of the classifier strongly depends on the underlying generative model. Using an SVM with a Fisher kernel improves the classification performance over that of the underlying generative model.


Learning Concave Conditional Likelihood Models for Improved Analysis of Tandem Mass Spectra

Halloran, John T., Rocke, David M.

arXiv.org Machine Learning

The most widely used technology to identify the proteins present in a complex biological sample is tandem mass spectrometry, which quickly produces a large collection of spectra representative of the peptides (i.e., protein subsequences) present in the original sample. In this work, we greatly expand the parameter learning capabilities of a dynamic Bayesian network (DBN) peptide-scoring algorithm, Didea, by deriving emission distributions for which its conditional log-likelihood scoring function remains concave. We show that this class of emission distributions, called Convex Virtual Emissions (CVEs), naturally generalizes the log-sum-exp function while rendering both maximum likelihood estimation and conditional maximum likelihood estimation concave for a wide range of Bayesian networks. Utilizing CVEs in Didea allows efficient learning of a large number of parameters while ensuring global convergence, in stark contrast to Didea's previous parameter learning framework (which could only learn a single parameter using a costly grid search) and other trainable models (which only ensure convergence to local optima). The newly trained scoring function substantially outperforms the state-of-the-art in both scoring function accuracy and downstream Fisher kernel analysis. Furthermore, we significantly improve Didea's runtime performance through successive optimizations to its message passing schedule and derive explicit connections between Didea's new concave score and related MS/MS scoring functions.
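Since the CVEs generalize log-sum-exp, a quick numerical illustration of the property behind the concavity result, namely that log-sum-exp is convex (so the relevant negated terms in the conditional log-likelihood are concave), checked via the midpoint inequality:

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Midpoint convexity check: f((a + b) / 2) <= (f(a) + f(b)) / 2.
a = [0.0, 1.0, -2.0]
b = [3.0, -1.0, 0.5]
mid = [(x + y) / 2 for x, y in zip(a, b)]

lhs = logsumexp(mid)
rhs = (logsumexp(a) + logsumexp(b)) / 2
```

This is only a numerical sanity check on one pair of points, not a proof; the paper establishes concavity analytically for the whole CVE class.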


Learning Concave Conditional Likelihood Models for Improved Analysis of Tandem Mass Spectra

Halloran, John T., Rocke, David M.

Neural Information Processing Systems

The most widely used technology to identify the proteins present in a complex biological sample is tandem mass spectrometry, which quickly produces a large collection of spectra representative of the peptides (i.e., protein subsequences) present in the original sample. In this work, we greatly expand the parameter learning capabilities of a dynamic Bayesian network (DBN) peptide-scoring algorithm, Didea [25], by deriving emission distributions for which its conditional log-likelihood scoring function remains concave. We show that this class of emission distributions, called Convex Virtual Emissions (CVEs), naturally generalizes the log-sum-exp function while rendering both maximum likelihood estimation and conditional maximum likelihood estimation concave for a wide range of Bayesian networks. Utilizing CVEs in Didea allows efficient learning of a large number of parameters while ensuring global convergence, in stark contrast to Didea's previous parameter learning framework (which could only learn a single parameter using a costly grid search) and other trainable models [12, 13, 14] (which only ensure convergence to local optima). The newly trained scoring function substantially outperforms the state-of-the-art in both scoring function accuracy and downstream Fisher kernel analysis. Furthermore, we significantly improve Didea's runtime performance through successive optimizations to its message passing schedule and derive explicit connections between Didea's new concave score and related MS/MS scoring functions.

