The value of prior knowledge in discovering motifs with MEME

AAAI Conferences

The E-step of EM calculates the expected value of the missing information: the probability that a motif occurrence starts in position j of sequence X_i. The formulas used by MEME for the three types of model are given below. Derivations are given elsewhere (Bailey & Elkan 1995b).
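The E-step described above can be sketched in code. The following is a minimal illustration for an OOPS-style model (exactly one motif occurrence per sequence), not MEME's actual implementation; the position weight matrix `pwm`, the background distribution `bg`, and the hardcoded DNA alphabet are assumptions for the sketch.

```python
import numpy as np

def e_step(seq, pwm, bg):
    """Posterior probability that a motif occurrence starts at each
    position of one sequence, assuming exactly one occurrence (OOPS).

    seq : string over the alphabet "ACGT" (assumed for this sketch)
    pwm : (W, 4) array of motif letter probabilities
    bg  : (4,) background letter probabilities
    """
    W = pwm.shape[0]
    idx = {c: k for k, c in enumerate("ACGT")}
    x = np.array([idx[c] for c in seq])
    n = len(x)
    # log-likelihood of the whole sequence with the motif starting at j:
    # background everywhere, except the W motif positions use the PWM
    bg_ll = np.log(bg[x]).sum()
    loglik = np.empty(n - W + 1)
    for j in range(n - W + 1):
        window = x[j:j + W]
        loglik[j] = (bg_ll
                     - np.log(bg[window]).sum()
                     + np.log(pwm[np.arange(W), window]).sum())
    # normalize in log space for numerical stability
    z = np.exp(loglik - loglik.max())
    return z / z.sum()
```

Because the posteriors are normalized over start positions, the returned vector sums to one, and the most likely motif location is simply its argmax.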


Fitting a mixture model by expectation maximization to discover motifs in biopolymers

AAAI Conferences

The motif model used by MM says that each position in a subsequence which is an occurrence of the motif is generated by an independent random variable describing a multinomial trial with parameter f_i = (f_i1, ..., f_iL). That is, the probability of letter a_j appearing in position i in the motif is f_ij.
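Under this model, the probability of a width-W subsequence is the product of the position-specific letter probabilities. A small sketch, where the frequency matrix `f` and the DNA alphabet are assumptions for illustration:

```python
import numpy as np

def motif_probability(subseq, f, alphabet="ACGT"):
    """P(subsequence | motif) under a position-specific multinomial model:
    the letter at motif position i is drawn independently with
    probability f[i, j] for letter a_j."""
    idx = {c: k for k, c in enumerate(alphabet)}
    return float(np.prod([f[i, idx[c]] for i, c in enumerate(subseq)]))
```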


Clustering sequence sets for motif discovery

Neural Information Processing Systems

Most existing methods for DNA motif discovery consider only a single set of sequences when searching for an over-represented motif. In contrast, we consider multiple sets of sequences, grouping sets associated with the same motif into a cluster, under the assumption that each set involves a single motif. Clustering sets of sequences yields clusters of coherent motifs, improving the signal-to-noise ratio and enabling us to identify multiple motifs. We present a probabilistic model for DNA motif discovery that identifies multiple motifs by searching for patterns shared across multiple sets of sequences. Our model infers cluster-indicating latent variables and learns motifs simultaneously, and these two tasks interact with each other. We show that our model can handle various motif discovery problems, depending on how the multiple sets of sequences are constructed. Experiments on three different DNA motif discovery problems demonstrate the model's useful behavior and confirm substantial gains over existing methods that consider only a single set of sequences.


Using Dirichlet Mixture Priors to Derive Hidden Markov Models for Protein Families

AAAI Conferences

A Bayesian method for estimating the amino acid distributions in the states of a hidden Markov model (HMM) for a protein family, or the columns of a multiple alignment of that family, is introduced. This method uses Dirichlet mixture densities as priors over amino acid distributions. These mixture densities are determined from examination of previously constructed HMMs or multiple alignments. It is shown that this Bayesian method can improve the quality of HMMs produced from small training sets. Specific experiments on the EF-hand motif are reported, for which these priors are shown to produce HMMs with higher likelihood on unseen data, and fewer false positives and false negatives in a database search task.
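The standard way such a prior is used is to form the posterior mean of the letter distribution given observed counts: weight each Dirichlet component by its marginal likelihood for the counts, then mix the component-wise posterior means. A sketch of that computation, where the mixture weights `q` and component parameters `alphas` are assumed inputs (the paper's priors are estimated from alignment data):

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_mixture_posterior_mean(counts, q, alphas):
    """Posterior mean distribution given observed letter counts and a
    Dirichlet mixture prior.

    counts : (A,) observed letter counts in one state/column
    q      : (K,) mixture weights of the Dirichlet components
    alphas : (K, A) Dirichlet parameters of each component
    """
    counts = np.asarray(counts, float)
    n = counts.sum()
    # log marginal likelihood of the counts under each component
    # (the multinomial coefficient cancels across components)
    logml = (np.log(q)
             + gammaln(alphas.sum(axis=1))
             - gammaln(n + alphas.sum(axis=1))
             + (gammaln(counts + alphas) - gammaln(alphas)).sum(axis=1))
    resp = np.exp(logml - logml.max())
    resp /= resp.sum()                    # component responsibilities
    # posterior mean of each component: counts plus pseudocounts, normalized
    comp_means = (counts + alphas) / (n + alphas.sum(axis=1, keepdims=True))
    return resp @ comp_means
```

With small counts the estimate is pulled toward whichever component best explains the data, which is what lets the method work well on small training sets.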


A Hierarchical Bayesian Markovian Model for Motifs in Biopolymer Sequences

Neural Information Processing Systems

We propose a dynamic Bayesian model for motifs in biopolymer sequences which captures rich biological prior knowledge and positional dependencies in motif structure in a principled way. Our model posits that the position-specific multinomial parameters for monomer distribution are distributed as a latent Dirichlet-mixture random variable, and the position-specific Dirichlet component is determined by a hidden Markov process. Model parameters can be fit on training motifs using a variational EM algorithm within an empirical Bayesian framework. Variational inference is also used for detecting hidden motifs. Our model improves over previous models that ignore biological priors and positional dependence. It has much higher sensitivity to motifs during detection and a notable ability to distinguish genuine motifs from false recurring patterns.
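The generative story in this abstract can be sketched directly: a hidden Markov chain moves between Dirichlet components across motif positions, each position's multinomial is drawn from its component, and a letter is drawn from that multinomial. All parameter names below (`trans`, `start`, `alphas`) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sample_motif(rng, trans, start, alphas, W):
    """Generative sketch of the hierarchical model.

    trans  : (K, K) transition matrix of the hidden Markov process
    start  : (K,) initial distribution over Dirichlet components
    alphas : (K, A) Dirichlet parameters of each component
    W      : motif width
    """
    K = len(start)
    states, letters = [], []
    s = rng.choice(K, p=start)
    for _ in range(W):
        # position-specific multinomial drawn from the current component
        theta = rng.dirichlet(alphas[s])
        letters.append(int(rng.choice(len(theta), p=theta)))
        states.append(s)
        s = rng.choice(K, p=trans[s])     # Markov step between components
    return states, letters
```

Coupling adjacent positions through the Markov chain over components is what encodes positional dependence; a plain position weight matrix corresponds to the special case of independent positions.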