The objective of this course is to give you a holistic understanding of machine learning, covering the theory, application, and inner workings of supervised, unsupervised, and deep learning algorithms. In this series, we'll cover linear regression, K-Nearest Neighbors, Support Vector Machines (SVM), flat clustering, hierarchical clustering, and neural networks. For each major algorithm, we will first discuss the high-level intuition behind it and how it is logically meant to work. Next, we'll apply the algorithms in code on real-world data sets using a module such as Scikit-Learn. Finally, we'll dive into the inner workings of each algorithm by recreating it in code, from scratch, ourselves, including all of the math involved.
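As a small taste of the two approaches described above, here is an illustrative sketch (toy data, not the course's actual datasets) that fits a linear regression both with Scikit-Learn and from scratch via least squares:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data following y = 2x + 1 exactly, so both fits recover the same line.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# High-level API: scikit-learn handles the fitting for us.
model = LinearRegression().fit(X, y)

# From scratch: append a bias column and solve the least-squares problem,
# which is the math the library is doing under the hood.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # w = [slope, intercept]
```

Both routes recover a slope of 2 and an intercept of 1, which is the point of rebuilding the algorithm ourselves: the library call and the linear algebra agree.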

The popular Data Science competition website Kaggle has an ongoing competition to solve the problem of earthquake prediction. Given a dataset of seismographic activity from a laboratory simulation, participants are asked to create a predictive model for earthquakes. In this video, I'll attempt the challenge as a way to teach three concepts: the Data Science mindset, Categorical Boosting, and Support Vector Regression models. I'll be coding this in Python from start to finish in the online Google Colab environment. That's what keeps me going.
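To show the shape of a Support Vector Regression model like the one used in the video, here is a minimal sketch on synthetic data (a sine wave standing in for the Kaggle seismic signal, which is an assumption for illustration only):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in data: learn y = sin(x) from samples on [0, 10].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(300, 1))
y = np.sin(X).ravel()

# RBF-kernel SVR: fits a smooth function within an epsilon-wide tube.
model = SVR(kernel="rbf", C=100.0, epsilon=0.05)
model.fit(X, y)

# Predictions at known points of the sine curve.
pred = model.predict(np.array([[np.pi / 2], [np.pi]]))
```

On the real competition data the features would be statistics computed over windows of the acoustic signal, but the fit/predict workflow is the same.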

Deep learning is well known to be very amenable to GPU acceleration. Accelerating "traditional" machine learning methods like logistic regression, linear regression, and support vector machines with GPUs at scale has, however, been challenging. Today I am very proud to share a major breakthrough that IBM Research has made in this critical area. A team out of our Zurich IBM Research lab beat a previous performance benchmark, set by Google for a machine learning workload, by a factor of 46. The research team trained a logistic regression classifier to predict clicks on advertisements using a terabyte-scale dataset of online advertising click-through data, containing 4.2 billion training examples and 1 million features.
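The workload itself is conceptually simple, which is what makes the scale the hard part. A minimal sketch of the same kind of task (a logistic regression click predictor), shrunk to synthetic data with made-up coefficients for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "click" data: 1,000 examples, 5 features (the real benchmark
# used 4.2 billion examples and 1 million features).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))

# Labels drawn from a logistic model with arbitrary illustrative weights.
logits = X @ np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

clf = LogisticRegression()
clf.fit(X, y)
acc = clf.score(X, y)  # training accuracy on the synthetic clicks
```

The model is just a weighted sum pushed through a sigmoid; the engineering feat in the benchmark is fitting it efficiently when the data no longer fits on one machine.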

Speaker: Christopher Fonnesbeck. This intermediate-level tutorial will provide students with hands-on experience applying practical statistical modeling methods to real data. Unlike many introductory statistics courses, we will not be applying "cookbook" methods that are easy to teach but often inapplicable; instead, we will learn some foundational statistical methods that can be applied generally to a wide variety of problems: maximum likelihood, bootstrapping, linear regression, and other modern techniques. The tutorial will start with a short introduction to data manipulation and cleaning using [pandas](http://pandas.pydata.org/). Slightly more advanced topics include bootstrapping (for estimating uncertainty around estimates) and flexible linear regression using Bayesian methods. By using and modifying hand-coded implementations of these techniques, students will gain an understanding of how each method works.
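To illustrate the bootstrapping topic mentioned above, here is a hand-coded sketch in the spirit of the tutorial (synthetic data; the sample size and replicate count are arbitrary choices): resample the data with replacement many times, recompute the statistic each time, and read a confidence interval off the resulting distribution.

```python
import numpy as np

# Synthetic sample whose mean we want an uncertainty estimate for.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)

# Bootstrap: draw resamples with replacement, recompute the mean each time.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])

# A 95% percentile confidence interval for the mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

The appeal of the method is that the same recipe works for statistics (medians, ratios, model coefficients) whose sampling distributions have no convenient closed form.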

Machine Learning is no longer just a buzzword; it is all around us: from protecting your email, to automatically tagging friends in pictures, to predicting what movies you like. Computer vision is one of today's most exciting application fields of Machine Learning, with Deep Learning driving innovative systems such as self-driving cars and Google's DeepMind. OpenCV lies at the intersection of these topics, providing a comprehensive open-source library for classic as well as state-of-the-art computer vision and Machine Learning algorithms. In combination with the Python Anaconda distribution, you will have access to all the open-source computing libraries you could possibly ask for. Machine Learning for OpenCV begins by introducing you to the essential concepts of statistical learning, such as classification and regression.
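As a taste of the classification concept the book opens with, here is a tiny from-scratch sketch of a k-nearest-neighbors classifier in plain NumPy (illustrative toy points, not an example from the book, and not using OpenCV's own ml module):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify point x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]         # labels of k closest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                 # majority label

# Two toy clusters: class 0 near the origin, class 1 near (5, 5).
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y_train = np.array([0, 0, 1, 1])
```

A query point near the origin gets label 0 and one near (5, 5) gets label 1; the same idea, with more machinery, underlies the classifiers the book builds with OpenCV.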