Which is your favorite Machine Learning Algorithm?

#artificialintelligence

Developed back in the 50s by Rosenblatt and colleagues, this extremely simple algorithm can be viewed as the foundation for some of the most successful classifiers today, including support vector machines and logistic regression solved using stochastic gradient descent. The convergence proof for the Perceptron algorithm is one of the most elegant pieces of math I've seen in ML. Most useful: boosting, especially boosted decision trees. This intuitive approach allows you to build highly accurate ML models by combining many simple ones. Boosting is one of the most practical methods in ML: it's widely used in industry, it can handle a wide variety of data types, and it can be implemented at scale.
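
The perceptron's mistake-driven update is simple enough to state in a few lines. Below is a minimal NumPy sketch (the function name, learning-rate parameter, and toy data are illustrative assumptions, not code from the article): the weights are nudged toward any example the current hyperplane misclassifies, which is exactly the step the convergence proof bounds.

```python
# Minimal sketch of the classic perceptron update rule (illustrative names,
# not from the article). Labels are assumed to be +1 / -1.
import numpy as np

def perceptron_train(X, y, epochs=10, lr=1.0):
    """Train a linear classifier with the mistake-driven perceptron update."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only when the current weights misclassify the example.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy usage on a linearly separable set.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
print(np.sign(X @ w + b))  # expected: [ 1.  1. -1. -1.]
```

For the boosting side of the answer, off-the-shelf boosted-tree implementations (for example scikit-learn's GradientBoostingClassifier or similar libraries) package the same "combine many simple models" idea behind a standard fit/predict interface.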


GridGain Professional Edition 2.4 Introduces Integrated Machine Learning and Deep Learning in New Continuous Learning Framework, Adds Support for Apache Spark DataFrames - EconoTimes

#artificialintelligence

FOSTER CITY, Calif., March 27, 2018 -- GridGain Systems, provider of enterprise-grade in-memory computing solutions based on Apache Ignite, today announced the immediate availability of GridGain Professional Edition 2.4, a fully supported version of Apache Ignite 2.4. GridGain Professional Edition 2.4 now includes a Continuous Learning Framework with machine learning algorithms and a multilayer perceptron (MLP) neural network, enabling companies to run machine and deep learning algorithms against their petabyte-scale operational datasets in real time. Companies can now build and continuously update models at in-memory speeds and with massive horizontal scalability. GridGain Professional Edition 2.4 also enhances the performance of Apache Spark by introducing an API for Apache Spark DataFrames, adding to the existing support for Spark RDDs.

GridGain Continuous Learning Framework: GridGain Professional Edition 2.4 now includes the first fully supported release of the Apache Ignite integrated machine learning and multilayer perceptron features, making continuous learning using machine learning and deep learning available directly in GridGain.
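
For readers curious what the new Spark DataFrames support looks like in practice, here is a hedged PySpark sketch. The "ignite" format name, the "config" and "table" option keys, the configuration file path, and the PERSON table are assumptions about the Apache Ignite Spark integration, not code taken from the announcement; the Ignite Spark jars and a running Ignite cluster are also assumed.

```python
# Hypothetical sketch: reading an Ignite SQL table as a Spark DataFrame.
# Assumes the Apache Ignite Spark integration jars are on the Spark classpath
# and that "ignite-config.xml" points at a running Ignite cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ignite-dataframe-example").getOrCreate()

# Load the (hypothetical) PERSON table stored in Ignite as a DataFrame.
persons = (
    spark.read.format("ignite")
    .option("config", "ignite-config.xml")   # Ignite client configuration file
    .option("table", "PERSON")               # SQL table name in the Ignite cluster
    .load()
)

# Regular Spark DataFrame operations then apply as usual.
persons.filter(persons["AGE"] > 30).show()
```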


The Next AI Milestone: Bridging the Semantic Gap – Intuition Machine – Medium

#artificialintelligence

John Launchbury of DARPA has an excellent video that I recommend everyone watch (viewing just the slides will give one a wrong impression of the content). He describes three waves of AI: Handcrafted Knowledge, where engineers encode domain knowledge as rules for well-defined problems; Statistical Learning, where programmers create statistical models for specific problem domains and train them on big data; and Contextual Adaptation, where systems construct contextual explanatory models for classes of real-world phenomena. It's a bit of a simplified presentation because it lumps all of machine learning, Bayesian methods, and Deep Learning into a single category. There are many more approaches to AI that don't fit within DARPA's three waves.


Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint

arXiv.org Machine Learning

Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize its descriptive power, recent advances have focused on inducing sparse constraints at each layer of the DBN. In this paper we present a theoretical approach for sparse constraints in the DBN using the mixed norm for both non-overlapping and overlapping groups. We explore how these constraints affect the classification accuracy for digit recognition in three different datasets (MNIST, USPS, RIMES) and provide initial estimates of their usefulness by altering different parameters such as the group size and overlap percentage.
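
The mixed norm referred to here is the usual group-sparsity (L1/L2) penalty: hidden units are partitioned into groups, an L2 norm is taken within each group, and the group norms are summed, which encourages entire groups of units to switch off together. A minimal NumPy sketch for the non-overlapping case (the group size and variable names are illustrative assumptions, not values from the paper):

```python
# Minimal sketch of a non-overlapping mixed-norm (L1/L2 group sparsity) penalty.
# Hidden activations are split into equal-size groups; the penalty is the sum
# of the per-group L2 norms, which drives whole groups of units toward zero.
import numpy as np

def mixed_norm_penalty(h, group_size):
    """L1/L2 mixed norm of a hidden-activation vector with non-overlapping groups."""
    assert h.size % group_size == 0, "group_size must divide the number of units"
    groups = h.reshape(-1, group_size)                    # one row per group
    return np.sum(np.sqrt(np.sum(groups ** 2, axis=1)))   # sum of per-group L2 norms

# Toy example: 8 hidden units, groups of 4.
h = np.array([0.5, -0.1, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0])
print(mixed_norm_penalty(h, group_size=4))
```

The overlapping-group variant replaces the disjoint reshape with explicit index sets per group, so a single unit can contribute to several group norms at once.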