Learning Graphical Models


Top 5 Skills Needed to Become a Machine Learning Engineer - Analytics Insight

#artificialintelligence

We are living in an era of humans and machines. Humans have been learning from their experience for millennia, while the era of machines and intelligent robots has only just begun. The future of machines is vast and beyond our imagination, and much of that responsibility rests on the shoulders of one particular professional: the Machine Learning Engineer.


Posterior Probability

#artificialintelligence

In statistics, the posterior probability expresses how likely a hypothesis is given a particular set of data. This contrasts with the likelihood function, which is written P(D|H). The distinction is more one of interpretation than of mathematics, since both take the form of a conditional probability. To calculate the posterior probability, we use Bayes' theorem, which is discussed below. Bayes' theorem computes the probability of a hypothesis given observed data by combining the likelihood P(D|H) with the prior P(H) and the marginal likelihood P(D) to obtain the posterior P(H|D).
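As a minimal sketch of the formula above (not taken from the original article), the following Python snippet computes a posterior over two hypothetical hypotheses from assumed priors and likelihoods; all numbers are invented for illustration.

```python
# Minimal Bayes' theorem sketch: posterior P(H|D) from prior P(H) and likelihood P(D|H).
# The hypotheses and numbers below are hypothetical, chosen only for illustration.

priors = {"H1": 0.7, "H2": 0.3}        # prior P(H)
likelihoods = {"H1": 0.2, "H2": 0.9}   # likelihood P(D|H) of the observed data

# Marginal likelihood P(D) = sum over H of P(D|H) * P(H)
marginal = sum(likelihoods[h] * priors[h] for h in priors)

# Posterior P(H|D) = P(D|H) * P(H) / P(D)
posteriors = {h: likelihoods[h] * priors[h] / marginal for h in priors}
print(posteriors)  # roughly {'H1': 0.34, 'H2': 0.66}
```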


Dealing with the Lack of Data in Machine Learning

#artificialintelligence

In many projects, I have realized that companies have fantastic business AI ideas but slowly become frustrated when they realize that they don't have enough data… However, solutions do exist! My goal in this article is to briefly introduce you to some of them (the ones I have used the most) rather than listing all existing solutions. This problem of data scarcity really matters, since data is at the core of any AI project. Dataset size is often responsible for poor performance in ML projects, and most of the time data-related issues are the main reason why great AI projects cannot be completed.


Probability and Statistics explained in the context of deep learning

#artificialintelligence

This article is intended for beginners in deep learning who wish to gain knowledge of probability and statistics, and also as a reference for practitioners. In my previous article, I wrote about the concepts of linear algebra for deep learning in a top-down approach (link to the article); if you are not yet comfortable with linear algebra, please read that first. The same top-down approach is used here: use cases are described first and the concepts follow. All the example code uses Python and NumPy, and formulas are provided as images for reuse. Probability is the science of quantifying uncertain things. Most machine learning and deep learning systems use large amounts of data to learn about patterns in the data. Whenever data, rather than pure logic, drives a system, uncertainty arises, and whenever uncertainty arises, probability becomes relevant. By introducing probability into a deep learning system, we introduce a kind of common sense; otherwise the system would be brittle and of little use. In deep learning, models such as Bayesian models, probabilistic graphical models and hidden Markov models are used, and they depend entirely on probability concepts.
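The article's own examples are not reproduced here, but as a hedged NumPy sketch in the same spirit, the snippet below turns a model's raw scores into a probability distribution with a softmax and measures the prediction's uncertainty via entropy; the logit values are invented for illustration.

```python
import numpy as np

# Hypothetical raw scores (logits) that a classifier might output for three classes.
logits = np.array([2.0, 0.5, -1.0])

# Softmax turns arbitrary real scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Entropy (in nats) quantifies how uncertain the prediction is:
# 0 means certain, log(n_classes) means maximally uncertain.
entropy = -np.sum(probs * np.log(probs))

print(probs)    # roughly [0.79, 0.18, 0.04]
print(entropy)
```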


Dynamic Infinite Relational Model for Time-varying Relational Data Analysis

Neural Information Processing Systems

We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions, and splits & merges of relation clusters like communities in social networks. Our proposed model abstracts observed time-varying object-object relationships into relationships between object clusters. We extend the infinite hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate the number of clusters simultaneously. We show the usefulness of the model through experiments with synthetic and real-world data sets.


Inter-time segment information sharing for non-homogeneous dynamic Bayesian networks

Neural Information Processing Systems

Conventional dynamic Bayesian networks (DBNs) are based on the homogeneous Markov assumption, which is too restrictive in many practical applications. Various approaches to relaxing the homogeneity assumption have therefore been proposed in the last few years. The present paper aims to improve the flexibility of two recent versions of non-homogeneous DBNs, which either (i) suffer from the need for data discretization, or (ii) assume a time-invariant network structure. However, allowing the network structure to be fully flexible leads to a risk of overfitting and inflated inference uncertainty, especially in the highly topical field of systems biology, where independent measurements tend to be sparse. In the present paper we investigate three conceptually different regularization schemes based on inter-segment information sharing.


Latent Variable Models for Predicting File Dependencies in Large-Scale Software Development

Neural Information Processing Systems

When software developers modify one or more files in a large code base, they must also identify and update other related files. Many file dependencies can be detected by mining the development history of the code base: in essence, groups of related files are revealed by the logs of previous workflows. From data of this form, we show how to detect dependent files by solving a problem in binary matrix completion. We explore different latent variable models (LVMs) for this problem, including Bernoulli mixture models, exponential family PCA, restricted Boltzmann machines, and fully Bayesian approaches. We evaluate these models on the development histories of three large, open-source software systems: Mozilla Firefox, Eclipse Subversive, and Gimp.
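The paper's models are not spelled out in this summary, but as a rough, hedged sketch of one of the named approaches, the NumPy snippet below fits a small Bernoulli mixture model by EM to a toy binary file-change matrix and then scores how likely an unobserved file is to need changing; the matrix and all parameters are invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: rows are past commits, columns are files (1 = file touched).
# This matrix is made up; the paper mines real development histories instead.
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
], dtype=float)

n, d, K = X.shape[0], X.shape[1], 2   # K mixture components

pi = np.full(K, 1.0 / K)              # mixing weights
mu = rng.uniform(0.25, 0.75, (K, d))  # per-component Bernoulli means

for _ in range(50):                   # EM iterations
    # E-step: responsibilities r[i, k] proportional to pi_k * prod_j mu_kj^x_ij * (1-mu_kj)^(1-x_ij)
    log_r = np.log(pi) + X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: update weights and means (light smoothing keeps means away from 0 and 1)
    pi = r.mean(axis=0)
    mu = (r.T @ X + 1e-2) / (r.sum(axis=0)[:, None] + 2e-2)

# "Completion": given a new commit that touched files 0 and 1, how likely is file 3?
partial = np.array([1.0, 1.0, 0.0, np.nan])
obs = ~np.isnan(partial)
log_post = np.log(pi) + (partial[obs] @ np.log(mu[:, obs]).T
                         + (1 - partial[obs]) @ np.log(1 - mu[:, obs]).T)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("P(file 3 also needs changing) ~", float(post @ mu[:, 3]))
```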


Near-Optimal Bayesian Active Learning with Noisy Observations

Neural Information Processing Systems

We tackle the fundamental problem of Bayesian active learning with noise, where we need to adaptively select from a number of expensive tests in order to identify an unknown hypothesis sampled from a known prior distribution. In the case of noise-free observations, a greedy algorithm called generalized binary search (GBS) is known to perform near-optimally. We show that if the observations are noisy, perhaps surprisingly, GBS can perform very poorly. We develop EC2, a novel, greedy active learning algorithm and prove that it is competitive with the optimal policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations. Our bounds rely on a recently discovered diminishing returns property called adaptive submodularity, generalizing the classical notion of submodular set functions to adaptive policies.
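The abstract names generalized binary search (GBS) as the noise-free baseline; as a hedged, minimal sketch (not the paper's EC2 algorithm), the Python snippet below greedily picks the test whose outcome most evenly splits the prior mass of the remaining hypotheses, with a toy prior and test table invented for illustration.

```python
import numpy as np

# Toy setup (invented): 4 hypotheses with a prior, and 3 binary tests whose
# noise-free outcome under each hypothesis is given by this table.
prior = np.array([0.4, 0.3, 0.2, 0.1])          # P(h)
outcomes = np.array([                            # outcomes[t, h] in {0, 1}
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
])

def gbs(prior, outcomes, true_h):
    """Greedy generalized binary search, assuming noise-free observations."""
    alive = np.ones(len(prior), dtype=bool)      # hypotheses still consistent
    unused = list(range(outcomes.shape[0]))
    while alive.sum() > 1 and unused:
        # Pick the test that splits the remaining prior mass most evenly.
        mass = prior * alive
        best = min(unused,
                   key=lambda t: abs(mass[outcomes[t] == 1].sum()
                                     - mass[outcomes[t] == 0].sum()))
        unused.remove(best)
        observed = outcomes[best, true_h]        # noise-free observation
        alive &= (outcomes[best] == observed)    # discard inconsistent hypotheses
    return np.flatnonzero(alive)

print(gbs(prior, outcomes, true_h=2))            # identifies hypothesis 2
```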


Learning Efficient Markov Networks

Neural Information Processing Systems

We present an algorithm for learning high-treewidth Markov networks where inference is still tractable. This is made possible by exploiting context specific independence and determinism in the domain. The class of models our algorithm can learn has the same desirable properties as thin junction trees: polynomial inference, closed form weight learning, etc., but is much broader. Our algorithm searches for a feature that divides the state space into subspaces where the remaining variables decompose into independent subsets (conditioned on the feature or its negation) and recurses on each subspace/subset of variables until no useful new features can be found. We provide probabilistic performance guarantees for our algorithm under the assumption that the maximum feature length is k (the treewidth can be much larger) and dependences are of bounded strength.


Bayesian models in R

#artificialintelligence

If there was one thing that always frustrated me, it was not fully understanding Bayesian inference. Sometime last year, I came across an article about a TensorFlow-supported R package for Bayesian analysis, called greta. Back then, I searched for greta tutorials and stumbled on a blog post that praised a textbook called Statistical Rethinking: A Bayesian Course with Examples in R and Stan by Richard McElreath. I had found a solution to my lingering frustration, so I bought a copy straight away. I spent the last few months reading it cover to cover and solving the proposed exercises, which are heavily based on the rethinking package. I cannot recommend it highly enough to whoever seeks a solid grip on Bayesian statistics, both in theory and application. This post ought to be my most gratifying blogging experience so far, in that I am essentially reporting my own recent learning, and I am convinced this will make the storytelling all the more effective. As a demonstration, the female cuckoo reproductive output data recently analysed by Riehl et al., 2019 [1] will be modelled using greta. In the process, we will conduct the MCMC sampling, visualise posterior distributions, generate predictions and ultimately assess the influence of social parasitism on female reproductive output. You should have some familiarity with standard statistical models.