Statistical Learning


Introduction to TensorFlow - Towards Data Science

#artificialintelligence

In my experience, learning to do anything useful in computer science falls at the strange intersection of theory and practice. It's pretty easy to ignore the depth that lies under some of the things we code. Machine learning takes that to an extreme, and everyone wants to be a Machine Learning Engineer these days. Elements of Statistical Learning is a fantastic book. If you can get through it you'll know quite a bit, but that doesn't mean much if you're unable to put any of it into practice.


Customer Churn Modeling using Machine Learning with parsnip

#artificialintelligence

This article comes from Diego Usai, a student at Business Science University. Diego has completed both the 101 (Data Science Foundations) and 201 (Advanced Machine Learning & Business Consulting) courses, and he shows off his progress in this customer churn tutorial using machine learning with parsnip. Diego originally posted the article on his personal website, diegousai.io: "Recently I have completed the online course Business Analysis With R, focused on applied data and business science with R, which introduced me to a couple of new modelling concepts and approaches."


Exploring the Characterization and Classification of EEG Signals for a Computer-Aided Epilepsy Diagnosis System

#artificialintelligence

Epilepsy occurs when the localized electrical activity of neurons suffers from an imbalance. One of the most suitable methods for diagnosing and monitoring the condition is the analysis of electroencephalographic (EEG) signals. Although there is a wide range of alternatives for characterizing and classifying EEG signals for epilepsy analysis, many key aspects related to accuracy and physiological interpretation remain open issues. This work performs an exploratory study to identify the most adequate of the frequently-used methods for characterizing and classifying epileptic seizures. To that end, a comparative study is carried out on several subsets of features using four representative classifiers: Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM).
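The four classifiers named above are all standard scikit-learn estimators, so a comparison of the kind the paper describes can be sketched in a few lines. The data below is synthetic and stands in for the paper's EEG feature subsets, which are not reproduced here:

```python
# Sketch: comparing LDA, QDA, KNN, and SVM on synthetic two-class
# feature vectors (placeholders for the paper's actual EEG features).
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two well-separated classes standing in for "seizure" vs. "non-seizure".
X = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(3, 1, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
# 5-fold cross-validated accuracy per classifier.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in classifiers.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

On real EEG features the ranking between these four models is exactly what such a comparative study measures; here, with cleanly separable synthetic data, all four score near 100%.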



Machine Learning for a Low-cost Air Pollution Network

#artificialintelligence

We consider the example of a deployment of an air pollution monitoring network in Kampala, an East African city. Air pollution contributes to over three million deaths globally each year (Lelieveld and others, 2015). Kampala has one of the highest concentrations of fine particulate matter (PM2.5) of any African city (Mead, 2017), yet we know little about its distribution or extent. Lower-cost devices do exist, but these do not, on their own, provide the accuracy required by decision makers. In our case study, the Kampala network of sensors consists largely of low-cost optical particle counters (OPCs) that give estimates of the PM2.5 particulate concentration.
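The accuracy gap between low-cost OPCs and reference instruments is typically closed by calibrating one against the other. A minimal sketch of that idea, using a simple linear fit on synthetic numbers (the actual network described here uses more sophisticated machine-learning calibration):

```python
# Sketch: calibrating a biased low-cost OPC reading against a reference
# instrument with ordinary least squares. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
true_pm25 = rng.uniform(20, 120, 200)                    # reference PM2.5, ug/m3
opc_raw = 1.8 * true_pm25 + 10 + rng.normal(0, 3, 200)   # biased low-cost reading

# Fit opc_raw -> true_pm25:  pm25 ~ a * raw + b
A = np.column_stack([opc_raw, np.ones_like(opc_raw)])
(a, b), *_ = np.linalg.lstsq(A, true_pm25, rcond=None)

calibrated = a * opc_raw + b
rmse_before = np.sqrt(np.mean((opc_raw - true_pm25) ** 2))
rmse_after = np.sqrt(np.mean((calibrated - true_pm25) ** 2))
print(f"RMSE before: {rmse_before:.1f} ug/m3, after: {rmse_after:.1f} ug/m3")
```

The same pattern scales up: co-locate a few low-cost sensors with a reference device, learn the mapping, then apply it across the network.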


A Quasi-Newton Method Based Vertical Federated Learning Framework for Logistic Regression

#artificialintelligence

Data privacy and security have become major concerns when building machine learning models from different data providers. Federated learning shows promise by leaving data with providers locally and exchanging only encrypted information. This paper studies the vertical federated learning setting for logistic regression, where the data sets at two parties share the same sample IDs but own disjoint subsets of features. Existing frameworks adopt first-order stochastic gradient descent, which requires a large number of communication rounds. To address this communication challenge, we propose a quasi-Newton-method-based vertical federated learning framework for logistic regression under an additively homomorphic encryption scheme.
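The communication saving comes from second-order methods needing far fewer iterations than first-order ones. A plain (non-federated, non-encrypted) sketch of that contrast for logistic regression, comparing many gradient-descent steps against a handful of full Newton steps:

```python
# Sketch: first-order gradient descent vs. Newton's method for logistic
# regression on synthetic data. In the vertical FL setting each iteration
# costs a communication round, so fewer iterations means fewer rounds.
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(0, 1, (200, 2))])
w_true = np.array([0.5, 2.0, -1.0])
y = (1 / (1 + np.exp(-X @ w_true)) > rng.uniform(size=200)).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(w):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grad(w):
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# First-order: many small steps.
w_gd = np.zeros(3)
for _ in range(50):
    w_gd -= 0.5 * grad(w_gd)

# Newton: curvature-aware steps, essentially converged in a few iterations.
w_nt = np.zeros(3)
for _ in range(5):
    p = sigmoid(X @ w_nt)
    H = X.T @ (X * (p * (1 - p))[:, None]) / len(y)   # Hessian of the loss
    w_nt -= np.linalg.solve(H, grad(w_nt))

print(f"loss after 50 GD steps: {loss(w_gd):.4f}, after 5 Newton steps: {loss(w_nt):.4f}")
```

The paper's contribution is doing a quasi-Newton variant of this while the features are split across two parties and the exchanged quantities stay homomorphically encrypted; that machinery is not sketched here.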


Tensorflow 2.0: Solving Classification and Regression Problems

#artificialintelligence

By the end of the 50th epoch, we have a training accuracy of 100% and a validation accuracy of 98.56%, which is impressive. Let's finally evaluate the performance of our classification model on the test set: our model achieves an accuracy of 97.39%. Though slightly less than the training accuracy of 100%, this is still very good given that we chose the number of layers and nodes arbitrarily. You can add more layers with more nodes and see whether you get better results on the validation and test sets. In a regression problem, the goal is to predict a continuous value. In this section, you will see how to solve a regression problem with TensorFlow 2.0. The dataset for this problem can be downloaded freely from this link.
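A minimal sketch of the kind of tf.keras binary classifier described above; the layer sizes and the synthetic dataset here are placeholders, not the article's:

```python
# Sketch: a small tf.keras Sequential classifier trained on synthetic
# two-class data (stand-in for the article's dataset and architecture).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 4)),
               rng.normal(3, 1, (100, 4))]).astype("float32")
y = np.array([0] * 100 + [1] * 100, dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary output
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50, verbose=0)

loss_val, acc = model.evaluate(X, y, verbose=0)
print(f"training accuracy: {acc:.2f}")
```

For the regression variant, the last layer becomes a single linear unit and the loss switches to something like mean squared error; everything else stays the same shape.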


AI for Oceans #CSforGood

#artificialintelligence

Levels 2-4 use a pretrained model provided by the TensorFlow MobileNet project. A MobileNet model is a convolutional neural network that has been trained on ImageNet, a dataset of over 14 million images hand-annotated with words such as "balloon" or "strawberry". In order to customize this model with the labeled training data the student generates in this activity, we use a technique called Transfer Learning. Each image in the training dataset is fed to MobileNet, as pixels, to obtain a list of annotations that are most likely to apply to it. Then, for a new image, we feed it to MobileNet and compare its resulting list of annotations to those from the training dataset.
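The comparison step described above can be sketched with toy numbers: each image is reduced to a vector of annotation scores, and a new image takes the label of its most similar training example. The vectors and labels below are illustrative placeholders, not MobileNet output:

```python
# Sketch: nearest-neighbor transfer learning over annotation-score
# vectors. Toy vectors stand in for MobileNet's per-image annotations.
import numpy as np

# "Annotation score" vectors for the student's labeled training images.
train_vecs = {
    "fish": np.array([0.9, 0.1, 0.0]),
    "trash": np.array([0.1, 0.8, 0.3]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(new_vec):
    # Label of the training vector most similar to the new image's vector.
    return max(train_vecs, key=lambda label: cosine(new_vec, train_vecs[label]))

print(classify(np.array([0.8, 0.2, 0.1])))  # closest to "fish"
```

This is the essence of the transfer-learning trick: MobileNet does the hard perceptual work once, and the student's labels only have to steer a cheap comparison on top.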


Calculate Similarity -- the most relevant Metrics in a Nutshell

#artificialintelligence

Many data science techniques are based on measuring similarity and dissimilarity between objects. For example, K-Nearest-Neighbors uses similarity to classify new data objects. In unsupervised learning, K-Means is a clustering method which uses Euclidean distance to compute the distance between the cluster centroids and their assigned data points. Recommendation engines use neighborhood-based collaborative filtering methods which identify an individual's neighbors based on their similarity or dissimilarity to other users. In this blog post I will take a look at the similarity metrics most relevant in practice.
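Three of the measures most commonly meant by "similarity" in this context can be written in a few lines of NumPy each:

```python
# Reference implementations of three common similarity/distance measures.
import numpy as np

def euclidean(a, b):
    # Straight-line distance; the metric K-Means uses for assignment.
    return np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))

def cosine_similarity(a, b):
    # Angle-based similarity in [-1, 1]; popular in recommender systems.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def jaccard(a, b):
    # Overlap of two sets: |A & B| / |A | B|.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

print(euclidean([0, 0], [3, 4]))          # 5.0
print(cosine_similarity([1, 0], [0, 1]))  # 0.0
print(jaccard("abc", "bcd"))              # 0.5
```

Which one is "right" depends on the data: Euclidean distance cares about magnitude, cosine similarity only about direction, and Jaccard about set membership.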


NLP Picks Bestsellers – A Lesson in Using NLP for Hidden Feature Extraction

#artificialintelligence

Summary: 99% of our use of NLP involves chatbots or translation. This is a very interesting story about expanding the bounds of NLP and feature creation to predict bestselling novels. The authors created over 20,000 NLP features, about 2,700 of which proved predictive, achieving a 90% accuracy rate in predicting NYT bestsellers. It's a pretty rare individual who hasn't had a personal experience with NLP (Natural Language Processing). About 99% of those experiences come in the form of chatbots or translators, either text or speech in, and text or speech out.
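To make "NLP feature creation" concrete, here is a sketch of the kind of surface-level text features such a pipeline might start from. The feature names are illustrative, not the authors'; their 20,000+ features are far richer:

```python
# Sketch: extracting simple surface features from raw text, the starting
# point of NLP-based feature creation. Feature names are illustrative.
import re

def text_features(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Vocabulary richness: unique words over total words.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

feats = text_features("It was the best of times. It was the worst of times.")
print(feats)
```

Feed thousands of such per-manuscript numbers into a classifier and you have, in miniature, the hidden-feature-extraction approach the article describes.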