Nearest Neighbor Methods
Emotion Recognition from Speech based on Relevant Feature and Majority Voting
Sarker, Md. Kamruzzaman, Alam, Kazi Md. Rokibul, Arifuzzaman, Md.
This paper proposes an approach to detect emotion from human speech by employing a majority voting technique over several machine learning techniques. The contribution of this work is twofold: first, it selects those features of speech which are most promising for classification, and second, it uses the majority voting technique to select the exact class of emotion. Here, majority voting has been applied over a Neural Network (NN), Decision Tree (DT), Support Vector Machine (SVM) and K-Nearest Neighbor (KNN). The input vector of NN, DT, SVM and KNN consists of various acoustic and prosodic features such as pitch, Mel-Frequency Cepstral Coefficients (MFCCs), etc. Many features have been extracted from the speech signal, and only the promising ones have been selected. To consider a feature as promising, the Fast Correlation-Based Feature selection (FCBF) and Fisher score algorithms have been used, and only those features that are highly ranked by both of them are selected. The proposed approach has been tested on the Berlin dataset of emotional speech [3] and the Electromagnetic Articulography (EMA) dataset [4]. The experimental results show that the majority voting technique attains better accuracy than the individual machine learning techniques. The proposed approach can effectively recognize human emotion in applications such as social robots, intelligent chat clients, and company call centers.
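As a hedged illustration of the voting scheme described above (not the authors' implementation), the following Python sketch combines NN, DT, SVM and KNN with scikit-learn's hard-voting ensemble; SelectKBest with an ANOVA F-score stands in for the FCBF/Fisher-score ranking, and the feature matrix is a random placeholder for the extracted acoustic/prosodic features.

```python
# A minimal sketch of the majority-voting idea, assuming the acoustic/prosodic
# features (pitch, MFCCs, ...) have already been extracted into X and the
# emotion labels into y. SelectKBest with an ANOVA F-score is only a stand-in
# for the paper's FCBF + Fisher-score ranking.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))          # placeholder feature matrix
y = rng.integers(0, 4, size=200)        # placeholder emotion labels

voter = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),       # keep only highly ranked features
    VotingClassifier(
        estimators=[
            ("nn", MLPClassifier(max_iter=500)),
            ("dt", DecisionTreeClassifier()),
            ("svm", SVC()),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="hard",                  # majority vote over the four classifiers
    ),
)
voter.fit(X, y)
print(voter.predict(X[:5]))
```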
Towards Non-Parametric Learning to Rank
Liu, Ao, Wu, Qiong, Liu, Zhenming, Xia, Lirong
This paper studies a stylized, yet natural, learning-to-rank problem and points out the critical incorrectness of a widely used nearest neighbor algorithm. We consider a model with $n$ agents (users) $\{x_i\}_{i \in [n]}$ and $m$ alternatives (items) $\{y_j\}_{j \in [m]}$, each of which is associated with a latent feature vector. Agents rank items nondeterministically according to the Plackett-Luce model, where the higher the utility of an item to the agent, the more likely this item will be ranked high by the agent. Our goal is to find neighbors of an arbitrary agent or alternative in the latent space. We first show that the Kendall-tau distance based kNN produces incorrect results in our model. Next, we fix the problem by introducing a new algorithm with features constructed from "global information" of the data matrix. Our approach is in sharp contrast to most existing feature engineering methods. Finally, we design another new algorithm identifying similar alternatives. The construction of alternative features can be done using "local information," highlighting the algorithmic difference between finding similar agents and similar alternatives.
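For concreteness, here is a minimal Python sketch of the Kendall-tau-distance kNN baseline that the paper critiques, applied to synthetic rankings; the latent Plackett-Luce features and the paper's corrected "global information" algorithm are not reproduced here.

```python
# A minimal sketch of Kendall-tau kNN over agents' rankings: each agent is
# represented by the ranks it assigns to the m alternatives, and its neighbors
# are the agents whose rankings have the smallest Kendall-tau distance.
# The data below is purely synthetic.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n_agents, m_items = 50, 10
# each row: a random permutation standing in for one agent's ranking
rankings = np.array([rng.permutation(m_items) for _ in range(n_agents)])

def kendall_tau_distance(a, b):
    """Distance derived from the Kendall tau correlation (0 = identical, 1 = reversed)."""
    tau, _ = kendalltau(a, b)
    return (1.0 - tau) / 2.0

def knn_agents(query_idx, k=5):
    dists = [kendall_tau_distance(rankings[query_idx], r) for r in rankings]
    order = np.argsort(dists)
    return [i for i in order if i != query_idx][:k]

print(knn_agents(0, k=5))
```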
An Unsupervised Learning Classifier with Competitive Error Performance
An unsupervised learning classification model is described. It achieves a classification error probability competitive with that of popular supervised learning classifiers such as SVM or kNN. The model is based on the incremental execution of small-step shift and rotation operations upon selected discriminative hyperplanes at the arrival of input samples. When applied, in conjunction with a selected feature extractor, to a subset of the ImageNet dataset benchmark, it yields a 6.2% Top-3 probability of error; this exceeds the result achieved by (supervised) k-Nearest Neighbor by merely about 2%, both using the same feature extractor. This result may also be contrasted with popular unsupervised learning schemes such as k-Means, which is shown to be practically useless on the same dataset.
Breast Cancer Diagnosis via Classification Algorithms
In this paper, we analyze the Wisconsin Diagnostic Breast Cancer data using machine learning classification techniques, namely SVM, Bayesian Logistic Regression (with a variational approximation), and K-Nearest Neighbors. We describe each model and compare their performance through different measures. We conclude that SVM has the best performance among the classifiers considered, while it competes closely with Bayesian Logistic Regression, which ranks as the second best method for this dataset.
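A minimal sketch of this kind of comparison, using scikit-learn's copy of the Wisconsin Diagnostic Breast Cancer data; plain logistic regression stands in for the variational Bayesian logistic regression, so the numbers are illustrative only.

```python
# Cross-validated comparison of SVM, logistic regression, and kNN on the
# Wisconsin Diagnostic Breast Cancer data shipped with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)   # scale features before fitting
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```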
Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate
Belkin, Mikhail, Hsu, Daniel, Mitra, Partha
Many modern machine learning models are trained to achieve zero or near-zero training error in order to obtain near-optimal (but non-zero) test error. This phenomenon of strong generalization performance for "overfitted" / interpolated classifiers appears to be ubiquitous in high-dimensional data, having been observed in deep networks, kernel machines, boosting and random forests. Their performance is robust even when the data contain large amounts of label noise. Very little theory is available to explain these observations. The vast majority of theoretical analyses of generalization allow for interpolation only when there is little or no label noise. This paper takes a step toward a theoretical foundation for interpolated classifiers by analyzing local interpolating schemes, including a geometric simplicial interpolation algorithm and weighted $k$-nearest neighbor schemes. Consistency or near-consistency is proved for these schemes in classification and regression problems. These schemes have an inductive bias that benefits from higher dimension, a kind of "blessing of dimensionality". Finally, connections to kernel machines, random forests, and adversarial examples in the interpolated regime are discussed.
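To illustrate what an interpolating weighted $k$-nearest neighbor scheme looks like in practice (not the paper's specific weighting), the following sketch fits an inverse-distance-weighted kNN regressor to noisy data and checks that it reproduces every training label exactly.

```python
# A minimal sketch of an interpolating weighted kNN regressor: with inverse-
# distance weights the prediction passes exactly through every (noisy) training
# label, yet the fit can still track the underlying function reasonably well.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 200)).reshape(-1, 1)
y = np.sin(2 * np.pi * X.ravel()) + rng.normal(scale=0.3, size=200)  # label noise

knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
knn.fit(X, y)

train_pred = knn.predict(X)
print("interpolates training labels:", np.allclose(train_pred, y))  # True
x_grid = np.linspace(0, 1, 5).reshape(-1, 1)
print(knn.predict(x_grid))                                          # smoothed away from data
```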
Employee Attrition Prediction
Yedida, Rahul, Reddy, Rahul, Vahi, Rakshit, Jana, Rahul, GV, Abhilash, Kulkarni, Deepti
We aim to predict whether an employee of a company will leave or not, using the k-Nearest Neighbors algorithm. We use evaluation of employee performance, average monthly hours at work and number of years spent in the company, among others, as our features. Other approaches to this problem include the use of ANNs, decision trees and logistic regression. The dataset was split, using 70% for training the algorithm and 30% for testing it, achieving an accuracy of 94.32%.
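As an illustration of the stated setup (kNN with a 70/30 split) on a synthetic stand-in for the HR data, the sketch below uses scikit-learn; the reported 94.32% accuracy comes from the paper's dataset, not from this placeholder data.

```python
# A minimal sketch of the attrition-prediction setup with synthetic features
# (evaluation score, monthly hours, years at the company) and random labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 1, n),        # evaluation score
    rng.uniform(96, 310, n),     # average monthly hours
    rng.integers(1, 10, n),      # years at the company
])
y = rng.integers(0, 2, n)        # 1 = left, 0 = stayed (random placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)  # 70% train / 30% test as in the paper

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```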
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Wang, Yizhen, Jha, Somesh, Chaudhuri, Kamalika
Motivated by safety-critical applications, test-time attacks on classifiers via adversarial examples have recently received a great deal of attention. However, there is a general lack of understanding of why adversarial examples arise; whether they originate due to inherent properties of the data or due to lack of training samples remains ill-understood. In this work, we introduce a theoretical framework analogous to bias-variance theory for understanding these effects. We use our framework to analyze the robustness of a canonical non-parametric classifier - the k-nearest neighbors. Our analysis shows that its robustness properties depend critically on the value of k - the classifier may be inherently non-robust for small k, but its robustness approaches that of the Bayes Optimal classifier for fast-growing k. We propose a novel modified 1-nearest neighbor classifier, and guarantee its robustness in the large sample limit. Our experiments suggest that this classifier may have good robustness properties even for reasonable data set sizes.
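To make the geometric intuition concrete, the following sketch computes, for a plain 1-nearest-neighbor classifier on toy data, a certified L2 radius within which its prediction cannot change; this illustrates the kind of robustness the paper analyzes, not its modified 1-NN algorithm.

```python
# For 1-NN under the L2 metric, a test point's prediction cannot change under
# any perturbation smaller than half the gap between its distance to the
# nearest opposite-label training point and its distance to the nearest
# same-label training point (triangle inequality).
import numpy as np

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)
X_test = rng.normal(0, 1.5, (20, 2))

def robust_radius(x):
    dists = np.linalg.norm(X_train - x, axis=1)
    pred = y_train[np.argmin(dists)]                  # 1-NN prediction
    d_same = dists[y_train == pred].min()
    d_other = dists[y_train != pred].min()
    return (d_other - d_same) / 2.0                   # certified L2 radius

radii = np.array([robust_radius(x) for x in X_test])
print("median certified radius:", np.median(radii))
```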
Neural Message Passing with Edge Updates for Predicting Properties of Molecules and Materials
Jørgensen, Peter Bjørn, Jacobsen, Karsten Wedel, Schmidt, Mikkel N.
Neural message passing on molecular graphs is one of the most promising methods for predicting formation energy and other properties of molecules and materials. In this work we extend the neural message passing model with an edge update network which allows the information exchanged between atoms to depend on the hidden state of the receiving atom. We benchmark the proposed model on three publicly available datasets (QM9, The Materials Project and OQMD) and show that the proposed model yields superior prediction of formation energies and other properties on all three datasets in comparison with the best published results. Furthermore we investigate different methods for constructing the graph used to represent crystalline structures and we find that using a graph based on K-nearest neighbors achieves better prediction accuracy than using maximum distance cutoff or the Voronoi tessellation graph.
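For readers curious what the k-nearest-neighbor graph construction looks like in code, here is a minimal sketch on random atomic positions (with no periodic boundary conditions), using scikit-learn's kneighbors_graph; the paper's edge-updating message-passing network itself is not shown.

```python
# Build a kNN graph over atomic positions: each atom is connected to its 12
# nearest neighbors, and edge weights store interatomic distances that could
# serve as edge features for a message-passing network.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(30, 3))   # placeholder atomic coordinates

adjacency = kneighbors_graph(positions, n_neighbors=12, mode="distance")
coo = adjacency.tocoo()
print("number of edges:", coo.nnz)
print("first few edges (src, dst, distance):",
      list(zip(coo.row[:3], coo.col[:3], coo.data[:3])))
```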
Hedging with Machine Learning
There are many ways to reduce trading risk through hedging. Funds typically use futures and options to hedge each trade. Similar to insurance, this safety net comes at a price. Using an AI-based strategy, though, there is a way to protect a position at a much lower cost. Before McDonald's could introduce Chicken McNuggets, they had to hedge against the cost of chicken.
Machine Learning Classification Algorithms using MATLAB
This course is for you if you are fascinated by the field of machine learning. It is designed to cover one of the most interesting areas of machine learning: classification. I will take you step by step through this course, first covering the basics of MATLAB. Following that, we will look into the details of how to use different machine learning algorithms in MATLAB. Specifically, we will be working with the MATLAB Statistics and Machine Learning Toolbox. We will implement some of the most commonly used classification algorithms, such as K-Nearest Neighbor, Naive Bayes, Discriminant Analysis, Decision Trees, Support Vector Machines, Error-Correcting Output Codes, and Ensembles.
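The course itself is MATLAB-based; as a hedged aside for readers without MATLAB, the sketch below runs scikit-learn analogues of the listed classifiers on the iris data, purely for illustration.

```python
# scikit-learn counterparts of the classifiers named above, compared by
# 5-fold cross-validated accuracy on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.multiclass import OutputCodeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
classifiers = {
    "k-Nearest Neighbor": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Discriminant Analysis": LinearDiscriminantAnalysis(),
    "Decision Tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "Error-Correcting Output Codes": OutputCodeClassifier(SVC(), code_size=2, random_state=0),
    "Ensemble (Random Forest)": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```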