Directed Networks


Machine Learning using Python : Learn Hands-On

#artificialintelligence

Learn to use Python, the ideal programming language for Machine Learning, with this comprehensive course from Hands-On System. Python plays an important role in the adoption of Machine Learning (ML) in the business environment. Nowadays, Machine Learning is one of the most sought-after skills in industry. After completing this course, students will understand and apply the concepts of machine learning and applied statistics to real-world problems. The topics we will be covering in this course are: Python libraries for data manipulation and visualization, such as numpy, matplotlib and pandas.
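
The stack named above is easy to demo. Below is a minimal sketch, not taken from the course itself, showing numpy, pandas and matplotlib working together; the column names and values are invented for illustration.

# A minimal, self-contained sketch of the kind of exercise such a course
# covers: build a small dataset with numpy/pandas and plot it with matplotlib.
# The column names and values here are invented for illustration.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "hours_studied": rng.uniform(0, 10, size=50),
})
df["exam_score"] = 40 + 5 * df["hours_studied"] + rng.normal(0, 5, size=50)

print(df.describe())                                # numeric summary via pandas
df.plot.scatter(x="hours_studied", y="exam_score")  # visualization via matplotlib
plt.show()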


Adventures With Artificial Intelligence and Machine Learning

#artificialintelligence

Since October of last year I have had the opportunity to work with a startup working on automated machine learning, and I thought I would share some thoughts on the experience and on what one might want to consider at the start of a journey with a "data scientist in a box". I'll start by saying that machine learning and artificial intelligence have almost forced themselves into my work several times in the past eighteen months, all in slightly different ways. The first brush was back in June 2018, when one of the developers I was working with wanted to demonstrate a scoring model for loan applications, based on the analysis of transactional data describing loans that had previously been granted. The model came with no explanation and no details, other than that it let you stitch together a transactional dataset which it assessed using a naïve Bayes algorithm. We had a run at showing this to a wider audience, but the appetite for examination seemed low, and I suspect the real reason was that we didn't have real data, only a conceptual problem to be solved.
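
For readers unfamiliar with the technique, here is a hedged sketch of what such a naïve Bayes loan scorer could look like. The article gives no details of the actual model, so the features, labels and scikit-learn usage below are entirely invented for illustration.

# Hedged sketch only: the article gives no details of the model it describes,
# so this shows a generic naive Bayes loan-approval classifier on invented
# transactional features using scikit-learn (feature names are hypothetical).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
n = 500
X = np.column_stack([
    rng.normal(3000, 800, n),   # hypothetical monthly income
    rng.normal(0.4, 0.15, n),   # hypothetical debt-to-income ratio
])
# Synthetic labels: approvals loosely favor higher income, lower debt ratio.
y = (X[:, 0] / 8000 - X[:, 1] + rng.normal(0, 0.2, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GaussianNB().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("P(approve) for one applicant:", model.predict_proba([[3500, 0.3]])[0, 1])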


Ricky Costa, CEO of Quantum Stat – Interview Series

#artificialintelligence

What initially got you interested in artificial intelligence? I was reading a book on probability when I came across a famous theorem. At the time, I naively wondered if I could apply this theorem to a natural language problem I was attempting to solve at work. As it turns out, the algorithm already existed, unbeknownst to me: it was called Naïve Bayes, a very famous and simple generative model used in classical machine learning. That theorem was Bayes' theorem.
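
Since the anecdote turns on Bayes' theorem, a short worked example may help; the probabilities below are invented.

# Worked example of Bayes' theorem with invented numbers:
# P(spam | word) = P(word | spam) * P(spam) / P(word).
p_spam = 0.2             # prior: 20% of messages are spam
p_word_given_spam = 0.6  # likelihood of the word appearing in spam
p_word_given_ham = 0.05  # likelihood of the word appearing in non-spam

p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | word) = {p_spam_given_word:.3f}")  # prints 0.750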


Mathematics Behind AI & Machine Learning

#artificialintelligence

Let's face reality: mathematics is far from enjoyable for most of us. To learn it, we often lack time and, most importantly, motivation. Why do we need all these symbols and figures? It turns out they make a lot of sense, especially if you have anything to do with machine learning.


Optimal models of sound localization by barn owls

Neural Information Processing Systems

Sound localization by barn owls is commonly modeled as a matching procedure where localization cues derived from auditory inputs are compared to stored templates. While the matching models can explain properties of neural responses, no model explains how the owl resolves spatial ambiguity in the localization cues to produce accurate localization near the center of gaze. Here, we examine two models for the barn owl's sound localization behavior. First, we consider a maximum likelihood estimator in order to further evaluate the cue matching model. Second, we consider a maximum a posteriori estimator to test if a Bayesian model with a prior that emphasizes directions near the center of gaze can reproduce the owl's localization behavior.
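
To make the contrast concrete, here is a toy numerical sketch, not from the paper, of the two estimators the abstract compares: a maximum likelihood estimate that follows the noisy cue, and a maximum a posteriori estimate whose Gaussian prior (an assumption made here) pulls the answer toward the center of gaze. All numbers are invented.

# Toy illustration of ML vs. MAP direction estimates, under invented
# assumptions: one noisy cue with Gaussian likelihood, and a Gaussian
# prior on source direction centered at the gaze direction (0 deg).
import numpy as np

directions = np.linspace(-90, 90, 721)  # candidate azimuths in degrees
cue = 30.0                              # observed (noisy) localization cue
sigma_cue = 10.0                        # assumed cue noise, degrees
sigma_prior = 20.0                      # assumed prior width around gaze

log_lik = -0.5 * ((cue - directions) / sigma_cue) ** 2
log_prior = -0.5 * (directions / sigma_prior) ** 2

ml_est = directions[np.argmax(log_lik)]               # maximum likelihood
map_est = directions[np.argmax(log_lik + log_prior)]  # maximum a posteriori
print(f"ML estimate:  {ml_est:.1f} deg")   # 30.0: follows the cue exactly
print(f"MAP estimate: {map_est:.1f} deg")  # 24.0: pulled toward the gaze center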


MAP Estimation for Graphical Models by Likelihood Maximization

Neural Information Processing Systems

Computing a maximum a posteriori (MAP) assignment in graphical models is a crucial inference problem for many practical applications. Several provably convergent approaches have been successfully developed using linear programming (LP) relaxation of the MAP problem. We present an alternative approach, which transforms the MAP problem into that of inference in a finite mixture of simple Bayes nets. We then derive the Expectation Maximization (EM) algorithm for this mixture that also monotonically increases a lower bound on the MAP assignment until convergence. The update equations for the EM algorithm are remarkably simple, both conceptually and computationally, and can be implemented using a graph-based message passing paradigm similar to max-product computation.
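
For reference, the max-product computation the abstract compares its updates to can be sketched directly. The following is standard max-product (max-sum in log space) on a tiny chain MRF with invented potentials, not the paper's EM-based method.

# Not the paper's EM-based method: this is the standard max-product dynamic
# program it compares against, computing a MAP assignment on a tiny chain
# MRF with invented unary and pairwise log-potentials.
import numpy as np

# Three binary variables x0 - x1 - x2; higher log-potential = more preferred.
unary = np.array([[0.0, 1.0],     # node 0 slightly prefers state 1
                  [0.5, 0.0],     # node 1 slightly prefers state 0
                  [0.0, 2.0]])    # node 2 strongly prefers state 1
pairwise = np.array([[1.0, 0.0],  # neighboring nodes prefer to agree
                     [0.0, 1.0]])

n = unary.shape[0]
msg = np.zeros((n, 2))             # msg[i]: max-product message into node i
back = np.zeros((n, 2), dtype=int) # backpointers for decoding
for i in range(1, n):              # forward pass (log space, so max-sum)
    scores = unary[i - 1][:, None] + msg[i - 1][:, None] + pairwise
    msg[i] = scores.max(axis=0)
    back[i] = scores.argmax(axis=0)

# Backtrack the MAP assignment.
x = np.zeros(n, dtype=int)
x[-1] = np.argmax(unary[-1] + msg[-1])
for i in range(n - 1, 0, -1):
    x[i - 1] = back[i, x[i]]
print("MAP assignment:", x)        # [1, 1, 1] with these potentials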