### K-Nearest Neighbors explained

Here on Codementor I often see students and developers trying to get into Machine Learning who are confused by the complicated topics they face at the very beginning of their journey. I want to give a deep yet understandable introduction to an algorithm that is so simple and elegant you are bound to like it. Even if you are a Machine Learning engineer, this can be useful reading if your understanding of this particular algorithm is limited. I worked as a software developer for years while everyone around me was talking about this brand-new data science and machine learning thing (only later did I understand that there is nothing new on this planet), so I decided to enroll in a master's program at university to get acquainted with it. Our first module was a general introductory course on Data Science, and I remember sitting there trying to understand what was going on.
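To show just how simple the algorithm is, here is a from-scratch sketch: to classify a query point, find the k training points closest to it and take a majority vote among their labels. The function name and toy data below are illustrative assumptions, not from any library:

```python
from collections import Counter
import math

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training points by Euclidean distance to the query.
    by_distance = sorted(
        zip(train_points, train_labels),
        key=lambda pair: math.dist(pair[0], query),
    )
    # Take the labels of the k closest points and vote.
    k_labels = [label for _, label in by_distance[:k]]
    return Counter(k_labels).most_common(1)[0][0]

# Toy data: two well-separated clusters in 2D.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(points, labels, (2, 2)))  # a query near the first cluster -> "a"
```

That really is the whole algorithm: no training phase beyond storing the data, and one hyperparameter, k.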

### Creating Your First Machine Learning Classifier Model with Sklearn

Perhaps you don't know where to start, or you have read some theory but don't know how to implement what you have learned. This tutorial will help you break the ice and walk you through the complete process, from importing and analysing a dataset to implementing and training a few well-known classification algorithms and assessing their performance. I'll use a minimal amount of discrete mathematics and aim to express the details through intuition and concrete examples rather than dense mathematical formulas. You can read why here. We will classify flower species based on their sepal and petal characteristics using the Iris flower dataset, which you can download from Kaggle here. Kaggle, if you haven't heard of it, hosts a ton of cool open datasets and is a place where data scientists share their work, which can be a valuable resource when learning.
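As a sketch of where such a tutorial is headed, here is the whole pipeline in miniature. It uses scikit-learn's bundled copy of the Iris dataset rather than the Kaggle download, and a k-nearest-neighbors classifier stands in for whichever algorithms the tutorial covers; the split ratio and random seed are arbitrary choices:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load the bundled Iris dataset: sepal/petal measurements, 3 species.
X, y = load_iris(return_X_y=True)

# Hold out a test set so performance is measured on unseen flowers.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Train a classifier and assess its accuracy on the held-out data.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

Swapping in a different classifier is a one-line change, which is what makes sklearn convenient for comparing several algorithms on the same split.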

### Fitting a Neural Network Using Randomized Optimization in Python

Python's mlrose package provides functionality for implementing some of the most popular randomization and search algorithms, and applying them to a range of different optimization problem domains. In this tutorial, we will discuss how mlrose can be used to find the optimal weights for machine learning models, such as neural networks and regression models. That is, to solve the machine learning weight optimization problem. This is the third in a series of three tutorials about using mlrose to solve randomized optimization problems. Part 1 can be found here and Part 2 can be found here.
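To illustrate the weight optimization problem itself, here is a bare-bones sketch of randomized hill climbing fitting a toy linear model. This is meant to convey the idea, not to reproduce the mlrose API; the toy data, step size, and iteration count are all assumptions:

```python
import random

# Toy regression data generated from y = 2*x + 1, so the optimal
# weights are w = 2, b = 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

def loss(w, b):
    """Mean squared error of the linear model w*x + b on the toy data."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def random_hill_climb(steps=20000, step_size=0.1, seed=0):
    """Randomized optimization: perturb the weights, keep only improvements."""
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    best = loss(w, b)
    for _ in range(steps):
        cand_w = w + rng.uniform(-step_size, step_size)
        cand_b = b + rng.uniform(-step_size, step_size)
        cand_loss = loss(cand_w, cand_b)
        if cand_loss < best:  # accept only moves that reduce the loss
            w, b, best = cand_w, cand_b, cand_loss
    return w, b, best

w, b, final_loss = random_hill_climb()
print(f"w~{w:.2f}, b~{b:.2f}, loss~{final_loss:.4f}")
```

The same accept-if-better loop generalizes from two weights to the full weight vector of a neural network, which is exactly the problem mlrose's optimizers are applied to.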

### Predictive modeling, supervised machine learning, and pattern classification

A Support Vector Machine (SVM) is a classification method that samples hyperplanes separating two or more classes. Eventually, the hyperplane with the largest margin is retained, where "margin" is defined as the minimum distance from the sample points to the hyperplane. The sample point(s) that form the margin are called support vectors and establish the final SVM model.

Bayes classifiers are based on a statistical model (i.e., Bayes' theorem: calculating posterior probabilities from the prior probability and the so-called likelihood). A Naive Bayes classifier assumes that all attributes are conditionally independent, so computing the likelihood simplifies to the product of the conditional probabilities of observing the individual attributes given a particular class label.

Artificial Neural Networks (ANNs) are graph-like classifiers that mimic the structure of a human or animal brain, where the interconnected nodes represent the neurons.

Decision tree classifiers are tree-like graphs, where nodes test certain conditions on a particular set of features and branches split the decision towards the leaf nodes. Leaves represent the lowest level of the graph and determine the class labels. Optimal trees are trained by minimizing Gini impurity or maximizing information gain.
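The last point can be made concrete: Gini impurity measures how mixed the class labels at a node are, and a candidate split is scored by how much it reduces the parent node's impurity. A minimal sketch, where the helper name and toy labels are illustrative assumptions:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: the probability that two samples drawn at random
    (with replacement) from `labels` carry different class labels."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A pure node has impurity 0; a 50/50 node has the 2-class maximum of 0.5.
print(gini(["a", "a", "a", "a"]))  # 0.0
print(gini(["a", "a", "b", "b"]))  # 0.5

# A split is scored by the weighted impurity decrease it achieves.
parent = ["a", "a", "a", "b", "b", "b"]
left, right = ["a", "a", "a"], ["b", "b", "b"]
weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(parent)
print(gini(parent) - weighted)  # 0.5: a perfect split
```

Tree training greedily picks, at each node, the feature threshold whose split maximizes this impurity decrease (information gain works the same way with entropy in place of Gini).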