K-Nearest Neighbors, Naive Bayes, and Decision Tree in 10 Minutes

#artificialintelligence

Unlike linear models and SVM (see Part 1), some machine learning models are hard to understand from their mathematical formulation alone. Fortunately, they can be understood by following, step by step, the process they execute on a small dummy dataset. This way, you can look at machine learning models under the hood without the "math bottleneck". After Part 1, you will learn three more models in this story: K-Nearest Neighbors (KNN), Naive Bayes, and Decision Tree. KNN is a non-generalizing machine learning model, since it simply "remembers" all of its training data.
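
To make the point about "remembering" the training data concrete, here is a minimal scikit-learn sketch on a made-up toy dataset (the toy points and k=3 are illustrative assumptions, not taken from the article): fitting KNN essentially just stores the training points, and prediction is a majority vote among the k nearest stored neighbors.

```python
# Minimal KNN sketch on a tiny invented 2-D dataset (illustrative only).
from sklearn.neighbors import KNeighborsClassifier

X_train = [[1, 1], [1, 2], [2, 1],   # class 0 cluster
           [8, 8], [8, 9], [9, 8]]   # class 1 cluster
y_train = [0, 0, 0, 1, 1, 1]

# "Fitting" mostly just stores the training data; no explicit parameters are learned.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Prediction: find the 3 nearest stored points and take a majority vote.
print(knn.predict([[2, 2], [9, 9]]))  # -> [0 1]
```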


Giuliano Liguori on LinkedIn: #BigData #Analytics #DataScience

#artificialintelligence

The variable you want to predict is called the dependent variable. The variable you use to predict the other variable's value is called the independent variable. K-NN is a non-parametric algorithm, which means it makes no assumptions about the underlying data. It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs its computation at classification time. The Naive Bayes classification algorithm is a probabilistic classifier.
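
As a quick illustration of what "probabilistic classifier" means in practice, the sketch below uses scikit-learn's Gaussian Naive Bayes on a tiny invented dataset (the numbers are placeholders, not from the post): the model returns class probabilities via Bayes' rule under a feature-independence assumption, rather than only hard labels.

```python
# Gaussian Naive Bayes on a tiny invented dataset.
from sklearn.naive_bayes import GaussianNB

X = [[170, 65], [175, 70], [180, 80],   # class "A" examples (independent variables)
     [150, 45], [155, 50], [160, 55]]   # class "B" examples
y = ["A", "A", "A", "B", "B", "B"]      # dependent variable

nb = GaussianNB()
nb.fit(X, y)

# predict_proba exposes P(class | features), which is what makes the model probabilistic.
print(nb.classes_)                     # -> ['A' 'B']
print(nb.predict_proba([[172, 68]]))   # high probability for class 'A' on this toy data
```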


Development and internal validation of a machine-learning-developed model for predicting 1-year mortality after fragility hip fracture - BMC Geriatrics

#artificialintelligence

Fragility hip fracture increases morbidity and mortality in older adult patients, especially within the first year. Identification of patients at high risk of death facilitates modification of associated perioperative factors that can reduce mortality. Various machine learning algorithms have been developed and are widely used in healthcare research, particularly for mortality prediction. This study aimed to develop and internally validate 7 machine learning models to predict 1-year mortality after fragility hip fracture. This retrospective study included patients with fragility hip fractures from a single center (Siriraj Hospital, Bangkok, Thailand) from July 2016 to October 2018. A total of 492 patients were enrolled. They were randomly categorized into a training group (344 cases, 70%) or a testing group (148 cases, 30%). Various machine learning techniques were used: the Gradient Boosting Classifier (GB), Random Forests Classifier (RF), Artificial Neural Network Classifier (ANN), Logistic Regression Classifier (LR), Naive Bayes Classifier (NB), Support Vector Machine Classifier (SVM), and K-Nearest Neighbors Classifier (KNN). All models were internally validated by evaluating their performance and the area under the receiver operating characteristic curve (AUC). For the testing dataset, the accuracies were GB model = 0.93, RF model = 0.95, ANN model = 0.94, LR model = 0.91, NB model = 0.89, SVM model = 0.90, and KNN model = 0.90. All models achieved high AUCs, ranging from 0.81 to 0.99. The RF model also provided a negative predictive value of 0.96, a positive predictive value of 0.93, a specificity of 0.99, and a sensitivity of 0.68. Our machine learning approach facilitated the successful development of an accurate model to predict 1-year mortality after fragility hip fracture. Several machine learning algorithms (e.g., Gradient Boosting and Random Forest) had the potential to provide high predictive performance based on the clinical parameters of each patient. The web application is available at www.hipprediction.com. External validation in a larger group of patients or in different hospital settings is warranted to evaluate the clinical utility of this tool. Thai Clinical Trials Registry (22 February 2021; reg. no. TCTR20210222003).
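
The modelling pipeline described above (a 70/30 split, seven scikit-learn classifiers, AUC on the held-out set) can be sketched roughly as follows; the feature matrix, labels, and hyperparameters are placeholders, not the study's actual data or settings.

```python
# Rough sketch of the evaluation setup: train several classifiers on a 70/30
# split and compare test-set AUC. X and y below are random placeholders,
# not the Siriraj Hospital data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(492, 10))        # placeholder clinical features
y = rng.integers(0, 2, size=492)      # placeholder 1-year mortality labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "GB": GradientBoostingClassifier(),
    "RF": RandomForestClassifier(),
    "ANN": MLPClassifier(max_iter=1000),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "SVM": SVC(probability=True),
    "KNN": KNeighborsClassifier(),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```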


Interview resources : ML/Data Science/AI Research Engineer

#artificialintelligence

Interviewing is a grueling process, especially during COVID. I recently interviewed with Microsoft (Data Scientist II), Amazon (Applied AI Scientist) and Apple (Software Development : Machine…


The Application of Machine Learning Techniques for Predicting Match Results in Team Sport: A Review

Journal of Artificial Intelligence Research

Predicting the results of matches in sport is a challenging and interesting task. In this paper, we review a selection of studies from 1996 to 2019 that used machine learning for predicting match results in team sport. Considering both invasion sports and striking/fielding sports, we discuss commonly applied machine learning algorithms, as well as common approaches related to data and evaluation. Our study considers accuracies that have been achieved across different sports, and explores whether evidence exists to support the notion that outcomes of some sports may be inherently more difficult to predict. We also uncover common themes of future research directions and propose recommendations for future researchers. Although there remains a lack of benchmark datasets (apart from in soccer), and the differences between sports, datasets, and features make between-study comparisons difficult, it is possible, as we discuss, to evaluate accuracy performance in other ways. Artificial Neural Networks were commonly applied in early studies; however, our findings suggest that a range of models should instead be compared. Selecting and engineering an appropriate feature set appears to be more important than having a large number of instances. For feature selection, we see potential for greater inter-disciplinary collaboration between sport performance analysis, a sub-discipline of sport science, and machine learning.


A Hybrid Feature Extraction Method for Nepali COVID-19-Related Tweets Classification

#artificialintelligence

COVID-19 is one of the deadliest viruses, having killed millions of people around the world to date. People's deaths are linked not only to the infection itself but also to the mental states and sentiments triggered by fear of the virus. People's sentiments, which are predominantly available in the form of posts/tweets on social media, can be interpreted using two kinds of information: syntactic and semantic. Herein, we propose to analyze people's sentiment using both kinds of information (syntactic and semantic) on a COVID-19-related Twitter dataset available in the Nepali language. For this, we first use two widely used text representation methods, TF-IDF and FastText, and then combine them into hybrid features to capture highly discriminating features. Second, we implement nine widely used machine learning classifiers (Logistic Regression, Support Vector Machine, Naive Bayes, K-Nearest Neighbor, Decision Trees, Random Forest, Extreme Tree classifier, AdaBoost, and Multilayer Perceptron), based on the three feature representation methods: TF-IDF, FastText, and Hybrid. To evaluate our methods, we use a publicly available Nepali COVID-19 tweets dataset, NepCOV19Tweets, which consists of Nepali tweets categorized into three classes (Positive, Negative, and Neutral). The evaluation results on NepCOV19Tweets show that the hybrid feature extraction method not only outperforms the other two individual feature extraction methods across nine different machine learning algorithms but also provides excellent performance when compared with state-of-the-art methods. Natural language processing (NLP) techniques have been developed to assess people's sentiments on various topics.
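
A minimal sketch of the hybrid-feature idea (sparse TF-IDF vectors concatenated with dense embedding vectors before classification) is shown below. The tweets, labels, and FastText embeddings are faked with placeholders, since the NepCOV19Tweets data and the trained embeddings are not part of this summary.

```python
# Hybrid features: concatenate sparse TF-IDF vectors with dense (here: faked)
# FastText-style sentence embeddings, then train any classifier on top.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["placeholder tweet one", "placeholder tweet two", "placeholder tweet three"]
labels = ["Positive", "Negative", "Neutral"]

# Syntactic features: TF-IDF.
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(tweets)

# Semantic features: stand-in for 300-dimensional FastText sentence embeddings.
X_fasttext = np.random.default_rng(0).normal(size=(len(tweets), 300))

# Hybrid representation: horizontal concatenation of both feature blocks.
X_hybrid = hstack([X_tfidf, csr_matrix(X_fasttext)])

clf = LogisticRegression(max_iter=1000).fit(X_hybrid, labels)
print(clf.predict(X_hybrid))  # predictions on the (tiny) training set itself
```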


Machine Learning Classification Bootcamp in Python

#artificialintelligence

What you'll learn:

- Apply advanced machine learning models to perform sentiment analysis and classify customer reviews, such as Amazon Alexa product reviews
- Understand the theory and intuition behind several machine learning algorithms: K-Nearest Neighbors, Support Vector Machines (SVM), Decision Trees, Random Forest, Naive Bayes, and Logistic Regression
- Implement classification algorithms in Scikit-Learn for K-Nearest Neighbors, Support Vector Machines (SVM), Decision Trees, Random Forest, Naive Bayes, and Logistic Regression
- Build an e-mail spam classifier using the Naive Bayes classification technique (a rough sketch appears below)
- Apply machine learning models to healthcare applications such as cancer and kyphosis disease classification
- Develop models to predict customer behavior towards targeted Facebook ads
- Classify data using K-Nearest Neighbors, Support Vector Machines (SVM), Decision Trees, Random Forest, Naive Bayes, and Logistic Regression
- Build an in-store feature to predict a customer's size using their features
- Develop a fraud detection classifier using machine learning techniques
- Master the Python Seaborn library for statistical plots
- Understand the difference between Machine Learning, Deep Learning, and Artificial Intelligence
- Perform feature engineering and clean your training and testing data to remove outliers
- Master Python and Scikit-Learn for Data Science and Machine Learning
- Learn to use the Python Matplotlib library for data plotting

Are you ready to master machine learning techniques and kick off your career as a Data Scientist?! You came to the right place! Machine learning is one of the top skills to acquire in 2019, with an average salary of over $114,000 in the United States according to PayScale! The total number of ML jobs has grown by around 600 percent over the past two years and is expected to grow even more by 2020. In this course, we are going to provide students with knowledge of key aspects of state-of-the-art classification techniques.
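
One of the listed projects, the e-mail spam classifier built with Naive Bayes, boils down to a pipeline like the sketch below; the toy e-mails are invented for illustration, and the course's own dataset and exact code are not shown here.

```python
# Toy spam classifier: bag-of-words counts + Multinomial Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "cheap meds limited offer",   # spam
    "meeting rescheduled to friday", "lunch tomorrow?",   # ham
]
labels = ["spam", "spam", "ham", "ham"]

spam_clf = make_pipeline(CountVectorizer(), MultinomialNB())
spam_clf.fit(emails, labels)

print(spam_clf.predict(["free offer just for you", "see you at the meeting"]))
# -> ['spam' 'ham'] on this toy data
```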


Inference and FDR Control for Simulated Ising Models in High-dimension

arXiv.org Machine Learning

The (probabilistic) graphical model consists of a collection of probability distributions that factorize according to the structure of an underlying graph [52]. Graphical models capture complex dependencies among random variables and allow large-scale multivariate statistical models to be built; they have been used in many research areas, such as hierarchical Bayesian models [27], contingency table analysis [20, 53] in categorical data analysis [1, 23, 37], constraint satisfaction [16, 15], language and speech processing [11, 31], image processing [17, 24, 28], and spatial statistics more generally [8]. In our work, we focus on undirected graphical models, where the probability distribution factorizes according to functions defined on the cliques of the graph. Undirected graphical models have a variety of applications, including statistical physics [32], natural language processing [38], image analysis [54], and spatial statistics [43]. Specifically, we pay attention to undirected graphical models that can be described as exponential families, a broad class of probability distributions studied extensively in the statistical literature [4, 21, 13]. The properties of exponential families provide connections between inference methods and convex analysis [12, 29]. There are many well-known examples of undirected graphical models that can be viewed as exponential families, such as the Ising model [32, 5], the Gaussian MRF [46], and latent Dirichlet allocation [11].
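
For reference, the Ising model mentioned above is an undirected graphical model in exponential-family form; a standard parameterization (generic notation, not necessarily the paper's) is:

```latex
% Ising model on a graph G = (V, E) with spins x \in \{-1, +1\}^{|V|}:
% node parameters \theta_j, edge parameters \theta_{jk}, log-partition function A(\theta).
p_\theta(x) = \exp\Big( \sum_{j \in V} \theta_j x_j
            + \sum_{(j,k) \in E} \theta_{jk}\, x_j x_k - A(\theta) \Big),
\qquad
A(\theta) = \log \sum_{x \in \{-1,+1\}^{|V|}}
            \exp\Big( \sum_{j \in V} \theta_j x_j + \sum_{(j,k) \in E} \theta_{jk}\, x_j x_k \Big).
```

The sufficient statistics are the single spins x_j and the pairwise products x_j x_k, which is exactly what places the Ising model in the exponential family; an edge (j, k) is absent precisely when the corresponding \theta_{jk} is zero.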


Mental Stress Detection using Data from Wearable and Non-wearable Sensors: A Review

arXiv.org Artificial Intelligence

This paper presents a comprehensive review of significant subjective and objective human stress detection techniques available in the literature. Methods for measuring human stress responses include subjective questionnaires (developed by psychologists) and objective markers observed in data from wearable and non-wearable sensors. In particular, wearable sensor-based methods commonly use data from electroencephalography, electrocardiogram, galvanic skin response, electromyography, electrodermal activity, heart rate, heart rate variability, and photoplethysmography, both individually and in multimodal fusion strategies. Methods based on non-wearable sensors, by contrast, include strategies such as analyzing pupil dilation and speech, smartphone data, eye movement, body posture, and thermal imaging. Whenever an individual encounters a stressful situation, physiological, physical, or behavioral changes are induced that help in coping with the challenge at hand. A wide range of studies has attempted to establish a relationship between these stressful situations and the responses of human beings using different kinds of psychological, physiological, physical, and behavioral measures. Motivated by the lack of a definitive verdict on the relationship between human stress and these different kinds of markers, a detailed survey of human stress detection methods is conducted in this paper. In particular, we explore how stress detection methods can benefit from artificial intelligence utilizing relevant data from various sources. This review is intended to serve as a reference document providing guidelines for future research that enables effective detection of human stress conditions.


Identifying Self-Admitted Technical Debt in Issue Tracking Systems using Machine Learning

arXiv.org Artificial Intelligence

Technical debt is a metaphor for sub-optimal solutions implemented for short-term benefits at the expense of the long-term maintainability and evolvability of software. A special type of technical debt is explicitly admitted by software engineers (e.g., using a TODO comment); this is called Self-Admitted Technical Debt, or SATD. Most work on automatically identifying SATD focuses on source code comments. In addition to source code comments, issue tracking systems have been shown to be another rich source of SATD, but there are no approaches specifically for automatically identifying SATD in issues. In this paper, we first create a training dataset by collecting and manually analyzing 4,200 issues (comprising 23,180 issue sections) from seven open-source projects (i.e., Camel, Chromium, Gerrit, Hadoop, HBase, Impala, and Thrift) that use two popular issue tracking systems (i.e., Jira and Google Monorail). We then propose and optimize an approach for automatically identifying SATD in issue tracking systems using machine learning. Our findings indicate that: 1) our approach outperforms baseline approaches by a wide margin with regard to the F1-score; 2) transferring knowledge from suitable datasets can improve the predictive performance of our approach; 3) extracted SATD keywords are intuitive and potentially indicate types and indicators of SATD; 4) projects using different issue tracking systems have fewer common SATD keywords compared to projects using the same issue tracking system; 5) a small amount of training data is needed to achieve good accuracy.
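
The paper's exact model and features are not reproduced in this summary; as a purely illustrative baseline for this kind of SATD text classification, the sketch below (hypothetical issue-section texts and labels) shows one common way a linear model over TF-IDF features can also surface candidate keywords from its weights.

```python
# Hypothetical baseline: TF-IDF + logistic regression over issue sections,
# reading the largest positive weights off as candidate SATD keywords.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sections = [
    "TODO refactor this hack later",
    "we should clean up this workaround eventually",
    "release notes for version two",
    "update documentation for the new api",
]
labels = [1, 1, 0, 0]  # 1 = SATD, 0 = not SATD (toy labels, not the paper's data)

vec = TfidfVectorizer()
X = vec.fit_transform(sections)
clf = LogisticRegression().fit(X, labels)

# Terms with the largest positive coefficients act as candidate SATD keywords.
top = np.argsort(clf.coef_[0])[::-1][:5]
print(vec.get_feature_names_out()[top])
```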