k-nearest neighbor classifier


Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams

Neural Information Processing Systems

Adversarial examples are a widely studied phenomenon in machine learning models. While most of the attention has been focused on neural networks, other practical models also suffer from this issue. In this work, we propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification, i.e., finding a minimum-norm adversarial example. Diverging from previous proposals, we propose the first geometric approach, performing a search that expands outwards from a given input point. At a high level, the search radius expands to the nearby higher-order Voronoi cells until we find a cell that classifies differently from the input point. To scale the algorithm to a large $k$, we introduce approximation steps that find perturbations with smaller norms than the baselines on a variety of datasets. Furthermore, we analyze the structural properties of datasets on which our approach outperforms the competition.
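The phrase "minimum-norm adversarial example" can be made concrete with a toy sketch. The snippet below is not the paper's Voronoi-based algorithm; it is a brute-force illustration for the k=1 case on hypothetical 2-D data, which line-searches from the input toward each opposite-class training point and returns an upper bound on the minimum perturbation norm:

```python
import math

# Hypothetical toy training set: (point, label).
train = [((0.0, 0.0), 0), ((1.0, 0.0), 1), ((0.0, 1.5), 0)]

def nn_label(x):
    # 1-nearest-neighbor prediction under Euclidean distance.
    return min(train, key=lambda p: math.dist(x, p[0]))[1]

def min_adversarial(x, steps=60):
    """Walk from x toward each opposite-class training point and
    binary-search the smallest step that flips the 1-NN label.
    Returns the smallest perturbation norm found (an upper bound
    on the true minimum, which need not lie on such a segment)."""
    y = nn_label(x)
    best = float("inf")
    for p, lab in train:
        if lab == y:
            continue
        lo, hi = 0.0, 1.0  # fraction of the way from x to p
        for _ in range(steps):
            mid = (lo + hi) / 2
            cand = tuple(a + mid * (b - a) for a, b in zip(x, p))
            if nn_label(cand) != y:
                hi = mid
            else:
                lo = mid
        best = min(best, hi * math.dist(x, p))
    return best

print(min_adversarial((0.1, 0.1)))
```

Here the label flip happens where the segment crosses the Voronoi bisector between the two nearest training points; the paper's contribution is doing this search exactly over higher-order Voronoi cells for general k.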


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

The authors provide finite-sample bounds on the excess risk of these classifiers. When taken to the limit, these bounds reproduce the known consistency results for this class. However, they are superior in two ways: (1) they apply in the finite-sample case, and (2) they apply to a broader set of metric spaces. The presentation is very clear and the intuition is well described.


Interpretable Event Diagnosis in Water Distribution Networks

Artelt, André, Vrachimis, Stelios G., Eliades, Demetrios G., Kuhl, Ulrike, Hammer, Barbara, Polycarpou, Marios M.

arXiv.org Artificial Intelligence

The increasing penetration of information and communication technologies in the design, monitoring, and control of water systems enables the use of algorithms for detecting and identifying unanticipated events (such as leakages or water contamination) using sensor measurements. However, data-driven methodologies do not always give accurate results and are often not trusted by operators, who may prefer to use their engineering judgment and experience to deal with such events. In this work, we propose a framework for interpretable event diagnosis -- an approach that assists operators in associating the results of algorithmic event diagnosis methodologies with their own intuition and experience. This is achieved by providing contrasting (i.e., counterfactual) explanations of the results of fault diagnosis algorithms; these aim to improve the operators' understanding of the algorithm's inner workings, enabling them to make a more informed decision by combining the results with their personal experience. Specifically, we propose counterfactual event fingerprints, a representation of the difference between the current event diagnosis and the closest alternative explanation, which can be presented in a graphical way. The proposed methodology is applied and evaluated on a realistic use case using the L-Town benchmark.

Introduction

When an event, such as a leakage, occurs in a Water Distribution Network (WDN), it can affect the dynamics of the system by causing changes in the pressures and flows [1]. These changes can be monitored by flow and pressure sensors installed within WDNs. Typically, a limited number of flow sensors are installed at the entrance of District Metered Areas (DMAs) to monitor the overall water inflow in the area [2], while a larger number of pressure sensors (due to reduced capital and installation costs) are installed at certain locations within the DMA to improve leakage detectability [3].


Study on spike-and-wave detection in epileptic signals using t-location-scale distribution and the K-nearest neighbors classifier

Quintero-Rincón, Antonio, Prendes, Jorge, Muro, Valeria, D'Giano, Carlos

arXiv.org Machine Learning

Pattern classification in electroencephalography (EEG) signals is an important problem in biomedical engineering, since it enables the detection of brain activity and, in particular, the early detection of epileptic seizures. In this paper, we propose a k-nearest neighbors classifier for epileptic EEG signals based on a t-location-scale statistical representation to detect spike-and-wave events. The proposed approach is demonstrated on a real dataset containing both spike-and-wave events and normal brain function signals, and its performance is evaluated in terms of classification accuracy, sensitivity, and specificity.
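The pipeline the abstract describes (fit a t location-scale distribution to each EEG window, then classify the fitted parameters with k-NN) can be sketched with scipy. The signals below are hypothetical synthetic stand-ins, not EEG data, and the feature choice is only an illustration of the idea:

```python
import numpy as np
from scipy import stats

def t_features(window):
    # Fit a t location-scale distribution to one signal window;
    # the fitted (df, loc, scale) triple is the feature vector
    # that would be fed to a k-NN classifier.
    return stats.t.fit(window)

# Hypothetical stand-ins: spike-and-wave activity is heavier-tailed
# (smaller fitted degrees of freedom) than normal background activity.
spike = stats.t.rvs(df=2.5, loc=0.0, scale=20.0, size=512, random_state=1)
normal = np.random.default_rng(1).normal(0.0, 20.0, size=512)

df_spike, _, _ = t_features(spike)
df_normal, _, _ = t_features(normal)
print(df_spike < df_normal)
```

The fitted degrees of freedom act as a tail-heaviness score, which is what makes the t-location-scale representation discriminative for spike-and-wave detection.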


Neural Net and Traditional Classifiers

Neural Information Processing Systems

Previous work on nets with continuous-valued inputs led to generative procedures to construct convex decision regions with two-layer perceptrons (one hidden layer) and arbitrary decision regions with three-layer perceptrons (two hidden layers). Here we demonstrate that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions. Such classifiers are robust, train rapidly, and provide good performance with simple decision regions. When complex decision regions are required, however, convergence time can be excessively long and performance is often no better than that of k-nearest neighbor classifiers. Three neural net classifiers are presented that provide more rapid training under such situations. Two use fixed weights in the first one or two layers and are similar to classifiers that estimate probability density functions using histograms. A third "feature map classifier" uses both unsupervised and supervised training. It provides good performance with little supervised training in situations such as speech recognition where much unlabeled training data is available. The architecture of this classifier can be used to implement a neural net k-nearest neighbor classifier.


Applications of the K Nearest Neighbor Algorithm, Part 1 (Artificial Intelligence)

#artificialintelligence

Abstract: Minimal parameter setup in machine learning models is desirable, as it avoids time-consuming optimization processes. The k-Nearest Neighbors classifier is one of the most effective and straightforward models employed in numerous problems. Despite its well-known performance, it requires choosing a value of k for each specific data distribution, which demands expensive computational effort. This paper proposes a k-Nearest Neighbors classifier that bypasses the need to define the value of k: the model computes the k value adaptively, taking into account the data distribution of the training set.
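The abstract does not spell out the adaptive rule. One plausible toy reading, sketched below on hypothetical 1-D data, is to let k grow with the local density around the query instead of fixing it in advance; this is an illustrative guess, not the paper's actual method:

```python
from collections import Counter

# Hypothetical 1-D training data: (position, label).
train = [(0.0, 'a'), (0.2, 'a'), (0.4, 'a'), (1.0, 'b'),
         (1.1, 'b'), (1.2, 'b'), (1.3, 'b'), (3.0, 'c')]

def adaptive_knn(x, radius=0.5):
    dists = sorted((abs(x - p), lab) for p, lab in train)
    # Adaptive k: count the training points within `radius` of the
    # query (at least 1), so dense regions vote with more neighbors
    # and sparse regions fall back to the single nearest point.
    k = max(1, sum(1 for d, _ in dists if d <= radius))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

print(adaptive_knn(1.15))  # dense 'b' region: four neighbors vote
print(adaptive_knn(3.1))   # sparse region: falls back to 1-NN
```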


Measuring Classifier Model Performance

#artificialintelligence

This is Day 28 of the #100DaysOfPython challenge. This post takes the work done yesterday in the blog post "First Look At Supervised Learning With Classification", introduces the concept of training/test sets, and outputs a graph for us to interpret the accuracy of the k-nearest neighbors classifier. At this stage, we need to bring across our initial code from yesterday's post. The code above was introduced previously. From here on out, we want to create a training set and a test set for our classifier.
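The post uses scikit-learn, but the train/test-split-plus-accuracy-curve idea can be shown without any dependencies. The sketch below uses hypothetical two-cluster data and a hand-rolled k-NN in place of the post's classifier:

```python
import math
import random
from collections import Counter

random.seed(0)
# Hypothetical 2-D data: two noisy clusters labelled 0 and 1.
data = [((random.gauss(c, 0.7), random.gauss(c, 0.7)), c)
        for c in (0, 1) for _ in range(50)]
random.shuffle(data)
split = int(0.8 * len(data))              # 80/20 train/test split
train_set, test_set = data[:split], data[split:]

def knn_predict(x, k):
    # Majority vote among the k nearest training points.
    nearest = sorted(train_set, key=lambda p: math.dist(x, p[0]))[:k]
    return Counter(lab for _, lab in nearest).most_common(1)[0][0]

# Held-out accuracy for a range of k values, mirroring the
# accuracy graph the post describes.
for k in (1, 3, 5, 7):
    acc = sum(knn_predict(x, k) == y for x, y in test_set) / len(test_set)
    print(k, round(acc, 2))
```

Evaluating only on the held-out 20% is the whole point: training-set accuracy for k=1 is trivially perfect, so the graph is meaningless without the split.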


Quantitative Finance & Algorithmic Trading in Python

#artificialintelligence

This course covers stock market fundamentals, the Modern Portfolio Theory, stochastic processes and the famous Black-Scholes model, Monte-Carlo simulations, and Value-at-Risk (VaR). You should have an interest in quantitative finance as well as in mathematics and programming! This course is about the fundamental basics of financial engineering. First of all, you will learn about stocks, bonds, and other derivatives. The main aim of this course is to build a better understanding of the mathematical models used in finance. The Markowitz model is the first step.
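Two of the listed topics, Monte-Carlo simulation and Value-at-Risk, combine naturally in a few lines. The sketch below estimates 1-day VaR by simulating normally distributed returns; the portfolio figures are hypothetical and the normal-returns model is a standard simplifying assumption, not market data:

```python
import random

random.seed(42)

def monte_carlo_var(mu, sigma, investment, days=1, n=100_000, alpha=0.95):
    """Estimate Value-at-Risk: the loss that is not exceeded with
    probability `alpha`, under simulated normal daily returns."""
    losses = sorted(-investment * random.gauss(mu * days, sigma * days ** 0.5)
                    for _ in range(n))
    return losses[int(alpha * n)]

# Hypothetical portfolio: $1,000 invested, 0.05% daily drift, 2% daily vol.
print(round(monte_carlo_var(0.0005, 0.02, 1000), 2))
```

With these parameters the 95% VaR lands near investment * (1.645 * sigma - mu), i.e. roughly $32, which is a quick sanity check on the simulation.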


PCA, LDA, and SVD: Model Tuning Through Feature Reduction for Transportation POI Classification

#artificialintelligence

PCA is a dimension-reduction method that takes datasets with a large number of features and reduces them to a few underlying features. The sklearn PCA package performs this process for us. In the code snippet below, we reduce the dataset's initial 75 features to 8. This snippet also serves to show how the number of components for the feature-reduction algorithm is chosen. The snippets that follow show how to use the Gaussian Naive Bayes, Decision Tree, and K-Nearest Neighbors classifiers with the reduced features.
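The 75-to-8 reduction the post performs with sklearn's PCA can also be written directly with numpy, which makes the underlying computation visible. The random matrix here is a hypothetical stand-in for the transportation POI dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 75))   # hypothetical stand-in for the POI data

def pca_reduce(X, n_components=8):
    # Center the data, then project onto the top right-singular
    # vectors; this is the same computation sklearn's PCA performs.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

X_reduced = pca_reduce(X)
print(X_reduced.shape)  # -> (200, 8)
```

The reduced matrix can then be fed to any of the classifiers the post mentions in place of the original 75 columns.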