PyData Carolinas 2016 Presentation: Deep Finch? A Continued Comparison of Machine Learning Models to Label Birdsong Syllables

#artificialintelligence

Songbirds provide a model system that neuroscientists use to understand how the brain learns and controls speech and similar skills. Much like infants learning to speak from their parents, songbirds learn their song from a tutor and practice it millions of times before reaching maturity. Also like humans, songbirds have evolved special brain regions for learning and producing their vocalizations. These newly-evolved brain regions in songbirds, known as the song system, are found within broader brain areas shared by birds and humans across evolution. So by studying how the song system works, we can learn about our own brains.


Which is your favorite Machine Learning Algorithm?

#artificialintelligence

Most elegant: the Perceptron. Developed back in the 1950s by Rosenblatt and colleagues, this extremely simple algorithm can be viewed as the foundation for some of today's most successful classifiers, including support vector machines and logistic regression solved via stochastic gradient descent. The convergence proof for the Perceptron algorithm is one of the most elegant pieces of math I've seen in ML. Most useful: Boosting, especially boosted decision trees. This intuitive approach lets you build highly accurate ML models by combining many simple ones. Boosting is one of the most practical methods in ML: it is widely used in industry, handles a wide variety of data types, and can be implemented at scale.
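The Perceptron update rule described above is simple enough to sketch in a few lines. The following is a minimal illustration (function and variable names are my own, and the toy data is made up): whenever a training point is misclassified, the weights are nudged toward that point.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """X: (n_samples, n_features) array; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # A point is misclassified when its signed margin is non-positive.
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data: the class is the sign of the first feature.
X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
preds = np.sign(X @ w + b)  # matches y on this separable toy set
```

The convergence proof mentioned in the blurb guarantees that, for linearly separable data, this loop makes only a bounded number of mistakes before finding a separating hyperplane.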




Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence

arXiv.org Machine Learning

Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both rely on approximate sampling from the model distribution. As a side effect, these approximations yield significantly different biases and variances for the stochastic gradient estimates of individual data points. It is well known that CD yields a biased gradient estimate. In this paper, however, we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the mean of subsequent PCD estimates has a higher variance than exact sampling. These results offer one explanation for the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
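To make the CD gradient estimate concrete, here is a minimal sketch of a CD-1 update for a tiny binary RBM (numpy only; all names, sizes, and the omission of bias terms are simplifications of mine, not the paper's setup). The positive phase uses the data vector; the negative phase replaces exact model sampling with a single Gibbs step started at the data, which is the source of the bias and the altered variance the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_grad(W, v0, rng):
    """One CD-1 stochastic estimate of the log-likelihood gradient w.r.t. W
    for a single data vector v0 (biases omitted for brevity)."""
    # Positive phase: hidden activations conditioned on the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # One Gibbs step: reconstruct the visibles, then recompute hidden probs.
    pv1 = sigmoid(h0 @ W.T)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    # CD approximates <v h>_data - <v h>_model using the 1-step reconstruction.
    return np.outer(v0, ph0) - np.outer(v1, ph1)

W = rng.normal(scale=0.1, size=(6, 4))      # 6 visible, 4 hidden units
v = rng.integers(0, 2, size=6).astype(float)
g = cd1_grad(W, v, rng)                     # shape (6, 4), one noisy estimate
```

PCD differs only in the negative phase: instead of restarting the Gibbs chain at the data vector each time, it continues a persistent chain across updates, which changes the bias/variance trade-off discussed in the abstract.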