Which is your favorite Machine Learning Algorithm?

#artificialintelligence

Most elegant: the Perceptron. Developed back in the 1950s by Rosenblatt and colleagues, this extremely simple algorithm can be viewed as the foundation for some of today's most successful classifiers, including support vector machines and logistic regression solved via stochastic gradient descent. The convergence proof for the Perceptron algorithm is one of the most elegant pieces of math I've seen in ML. Most useful: Boosting, especially boosted decision trees. This intuitive approach lets you build highly accurate ML models by combining many simple ones. Boosting is one of the most practical methods in ML: it is widely used in industry, handles a wide variety of data types, and can be implemented at scale.
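As a quick refresher, here is a minimal sketch of the classic Perceptron training rule in Python with NumPy; the toy data and epoch count are illustrative choices, not anything from the article.

import numpy as np

def perceptron_train(X, y, epochs=100):
    """Classic Perceptron; labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update weights only on misclassified points.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += yi * xi
                b += yi
    return w, b

# Toy linearly separable data (illustrative only).
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
print(np.sign(X @ w + b))  # should match y

The convergence proof mentioned above guarantees that, for linearly separable data like this, the update loop makes only a finite number of mistakes before finding a separating hyperplane.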


OMG - Emotion Challenge Solution

arXiv.org Machine Learning

This short paper describes our solution to the 2018 IEEE World Congress on Computational Intelligence One-Minute Gradual-Emotional Behavior Challenge, whose goal was to estimate continuous arousal and valence values from short videos. We designed four base regression models using visual and audio features, and then used a spectral approach to fuse them to obtain improved performance.
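The summary does not spell out the paper's spectral fusion, but as a hedged illustration of one well-known spectral ensemble idea, fusion weights can be read off the leading eigenvector of the covariance of the base models' predictions. Everything below (the synthetic data and the four-regressor setup) is a hypothetical stand-in, not the paper's actual method.

import numpy as np

# Hypothetical stand-in data: 4 base regressors predicting the same 200 targets.
rng = np.random.default_rng(0)
truth = rng.normal(size=200)                        # unobserved ground truth
preds = np.stack([truth + 0.3 * (i + 1) * rng.normal(size=200)
                  for i in range(4)])               # 4 increasingly noisy regressors

C = np.cov(preds)                                   # 4x4 covariance of predictions
eigvals, eigvecs = np.linalg.eigh(C)                # eigh returns ascending eigenvalues
w = eigvecs[:, -1]                                  # leading eigenvector
w = np.abs(w) / np.abs(w).sum()                     # normalize into fusion weights
fused = w @ preds                                   # weighted combination of the 4 models

The intuition: the shared signal dominates the top eigenvector of the prediction covariance, so its components weight each base model roughly by how much signal it carries.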


Coding Deep Learning for Beginners -- Linear Regression (Part 3): Training with Gradient Descent

#artificialintelligence

This is the 5th article in the "Coding Deep Learning for Beginners" series. Links to all articles, the agenda, and general information about the estimated release dates of upcoming articles can be found at the bottom of the 1st article. They are also available in my open-source portfolio, MyRoadToAI, along with some mini-projects, presentations, tutorials, and links. In this article, I will explain the concept of training Machine Learning algorithms with Gradient Descent. The majority of supervised algorithms take advantage of it -- especially all Neural Networks.
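To make the idea concrete before diving into the article, here is a minimal sketch of fitting a one-variable linear regression with batch gradient descent on the mean-squared error; the synthetic data and hyperparameters are illustrative assumptions, not the article's own code.

import numpy as np

# Synthetic data from the line y = 3x + 0.5 plus noise (illustrative).
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0
lr = 0.1                                   # learning rate
for _ in range(500):
    err = (w * x + b) - y                  # residuals of current fit
    grad_w = 2 * np.mean(err * x)          # d(MSE)/dw
    grad_b = 2 * np.mean(err)              # d(MSE)/db
    w -= lr * grad_w                       # step against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))            # should approach 3.0 and 0.5

Each iteration computes the gradient of the loss over the whole batch and takes a small step downhill; the same loop, with gradients computed by backpropagation, is what trains neural networks.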


Extending Machine Learning Algorithms Udemy

@machinelearnbot

Complex statistics in Machine Learning worry a lot of developers. Knowing statistics helps you build strong Machine Learning models that are optimized for a given problem statement. Understand the real-world examples that discuss the statistical side of Machine Learning and familiarize yourself with it. We will use libraries such as scikit-learn, e1071, randomForest, C50, and xgboost. We will discuss the application of frequently used algorithms to various domain problems, using both Python and R programming. The course focuses on the various tree-based machine learning models used by industry practitioners, and also covers k-nearest neighbors, Naive Bayes, Support Vector Machines, and recommendation engines. By the end of the course, you will have mastered the statistics required for Machine Learning algorithms and will be able to apply your new skills to any sort of industry problem. Pratap Dangeti develops machine learning and deep learning solutions for structured, image, and text data at TCS, in its research and innovation lab in Bangalore.
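For a flavor of the tooling the course mentions, a minimal scikit-learn comparison of several of the listed classifiers on a built-in dataset might look like the sketch below; it is illustrative only and not taken from the course materials.

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
models = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f}")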