Developed back in the 1950s by Rosenblatt and colleagues, this extremely simple algorithm can be viewed as the foundation for some of the most successful classifiers today, including support vector machines and logistic regression, solved using stochastic gradient descent. The convergence proof for the Perceptron algorithm is one of the most elegant pieces of math I've seen in ML. Most useful: Boosting, especially boosted decision trees. This intuitive approach allows you to build highly accurate ML models by combining many simple ones. Boosting is one of the most practical methods in ML: it's widely used in industry, can handle a wide variety of data types, and can be implemented at scale.
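The perceptron's simplicity is easy to see in code. Here is a minimal sketch of Rosenblatt's update rule (the labels in {-1, +1}, the bias handling, and the toy data are my own illustrative choices, not from the original write-up):

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Classic perceptron: on each misclassified point, add y_i * x_i to w.
    Assumes labels y are in {-1, +1}; a bias column is appended to X."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append constant bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi         # Rosenblatt's update rule
                errors += 1
        if errors == 0:              # converged: data is linearly separated
            break
    return w

# Toy linearly separable data: class is the sign of x0 + x1 - 1
X = np.array([[0., 0.], [0., 2.], [2., 0.], [2., 2.]])
y = np.array([-1., 1., 1., 1.])
w = train_perceptron(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

On linearly separable data like this, the convergence theorem mentioned above guarantees the loop terminates after a bounded number of mistakes, regardless of the order the points are visited.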

Quantum computing has received significant attention as a next-generation computing technology due to its potential speed and its ability to solve problems considered too difficult for classical computers, as reflected in the recent discussion on Quantum Supremacy. Grid sees quantum computing not only as a tool for solving optimization and quantum chemistry problems, but also as a tool for AI (Machine Learning, Deep Learning, etc.) calculations, such as feature extraction. Previous work has announced the successful implementation of machine-learning-related algorithms, such as principal component analysis and auto-encoders, on quantum computers. This work announces the development of a gradient descent (backpropagation) algorithm, a method commonly used in machine learning for neural network parameter optimization, for use on NISQ quantum computers. Due to the non-linear nature of quantum bits (qubits), Grid proposes that this algorithm can be used to perform the feature extraction and representation calculations that deep learning methods employ.
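For readers unfamiliar with the classical baseline being ported, here is a minimal sketch of vanilla gradient descent (the function name, learning rate, and toy quadratic loss are my own illustrative choices; the NISQ version described above would replace the analytic gradient evaluation with quantum circuit measurements):

```python
import numpy as np

def gradient_descent(grad, theta0, lr=0.1, steps=100):
    """Vanilla gradient descent: repeatedly step against the gradient,
    theta <- theta - lr * grad(theta)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Toy loss L(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3);
# the unique minimizer is theta = 3.
theta = gradient_descent(lambda t: 2 * (t - 3), theta0=[0.0])
```

In a neural network, `grad` is supplied by backpropagation rather than a hand-derived formula, but the parameter-update loop is exactly this.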

In this longish post, I have tried to explain Deep Learning starting from familiar ideas like machine learning. This approach forms a part of my forthcoming book. You can connect with me on LinkedIn to know more about the book. I have used this approach in my teaching. It is based on 'learning by exception,' i.e. understanding one concept and its limitations, and then understanding how the subsequent concept overcomes that limitation.