Nearest Neighbor Methods


AI Anyone Can Understand: Part 11 -- K-Nearest Neighbors Algorithm

#artificialintelligence

Imagine you get a new toy but don't know what kind of toy it is. You could compare it with the toys your friends already have: pick the toys that look most like the new one and ask your friends what those are. Whatever most of those similar toys are, that's probably what your new toy is too. That's how k-nearest neighbors works: it looks at the things most similar to the thing you want to know about and lets them vote on the answer.
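For readers who want to see that vote happen in code, here is a minimal sketch of the idea in Python; the toy data and function names are made up purely for illustration.

```python
from collections import Counter
import math

def knn_classify(query, examples, k=3):
    """Label a query point by majority vote among its k closest examples.

    `examples` is a list of (feature_vector, label) pairs.
    """
    # Sort examples by Euclidean distance to the query point.
    by_distance = sorted(examples, key=lambda ex: math.dist(query, ex[0]))
    # Take the labels of the k nearest examples and vote.
    nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# Toy features: (size, squishiness); labels are toy types.
toys = [((1.0, 0.9), "plush"), ((1.1, 0.8), "plush"),
        ((0.3, 0.1), "block"), ((0.4, 0.2), "block")]
print(knn_classify((0.9, 0.7), toys, k=3))  # -> "plush"
```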


Segmenting thalamic nuclei from manifold projections of multi-contrast MRI

arXiv.org Artificial Intelligence

The thalamus is a subcortical gray matter structure that plays a key role in relaying sensory and motor signals within the brain. Its nuclei can atrophy or otherwise be affected by neurological disease and injuries, including mild traumatic brain injury. Segmenting both the thalamus and its nuclei is challenging because of the relatively low contrast within and around the thalamus in conventional magnetic resonance (MR) images. This paper explores imaging features to determine key tissue signatures that naturally cluster, from which we can parcellate thalamic nuclei. Tissue contrasts include T1-weighted and T2-weighted images, MR diffusion measurements including FA, mean diffusivity, Knutsson coefficients that represent fiber orientation, and synthetic multi-TI images derived from FGATIR and T1-weighted images. After registration of these contrasts and isolation of the thalamus, we use the uniform manifold approximation and projection (UMAP) method for dimensionality reduction to produce a low-dimensional representation of the data within the thalamus. Manual labeling of the thalamus provides labels for our UMAP embedding, from which a k-nearest neighbors classifier can label new, unseen voxels in that same UMAP embedding. N-fold cross-validation of the method reveals comparable performance to state-of-the-art methods for thalamic parcellation.
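As a rough illustration of the embed-then-classify pipeline described above, the sketch below uses the umap-learn and scikit-learn packages; the feature dimensions, neighbor count, and random data are placeholders, not the authors' actual configuration.

```python
import numpy as np
import umap  # umap-learn package
from sklearn.neighbors import KNeighborsClassifier

# Stand-ins for the paper's data: one row of multi-contrast
# features per thalamic voxel, plus manual nuclei labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 12))   # e.g., T1, T2, FA, MD, Knutsson, multi-TI
labels = rng.integers(0, 6, size=5000)   # manually labeled nuclei

# Reduce the multi-contrast features to a low-dimensional embedding.
reducer = umap.UMAP(n_components=3, random_state=0)
embedding = reducer.fit_transform(features)

# Label new, unseen voxels by k-nearest neighbors in the same embedding.
knn = KNeighborsClassifier(n_neighbors=15).fit(embedding, labels)
new_voxels = rng.normal(size=(100, 12))
predicted = knn.predict(reducer.transform(new_voxels))
```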


Predicting Students' Exam Scores Using Physiological Signals

arXiv.org Artificial Intelligence

While acute stress has been shown to have both positive and negative effects on performance, little is known about how stress affects students' grades during examinations. To answer this question, we examined whether a correlation could be found between physiological stress signals and exam performance. We conducted this study using multiple physiological signals of ten undergraduate students over three different exams. The study focused on three signals, i.e., skin temperature, heart rate, and electrodermal activity. We extracted statistics as features and fed them into a variety of binary classifiers to predict relatively higher or lower grades. Experimental results showed up to 0.81 ROC-AUC with the k-nearest neighbor algorithm among various machine learning algorithms.
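A hedged sketch of this kind of pipeline with scikit-learn, on placeholder data (in the real study, the features were summary statistics of skin temperature, heart rate, and electrodermal activity):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in features: summary statistics (mean, std, min, max, ...)
# of the three physiological signals, one row per student-exam session.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 12))
y = rng.integers(0, 2, size=30)  # 1 = relatively higher grade

# Scale features, then classify with kNN; score with ROC-AUC.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"mean ROC-AUC: {auc:.2f}")
```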


Machine Learning Approach and Extreme Value Theory to Correlated Stochastic Time Series with Application to Tree Ring Data

arXiv.org Artificial Intelligence

The main goal of machine learning (ML) is to study and improve mathematical models which can be trained with data provided by the environment to infer the future and to make decisions without necessarily having complete knowledge of all influencing elements. In this work, we describe how ML can be a powerful tool in studying climate modeling. Tree ring growth has been used in many applications, for example, to study the history of buildings and of the environment. Each year a tree grows a new layer of wood beneath its bark, so after many years the sequence of ring widths forms a time series. The purpose of this paper is to use ML algorithms and Extreme Value Theory to analyse a set of tree ring width data from nine trees growing in Nottinghamshire. Initially, we explore the data through a variety of descriptive statistical approaches. Transforming the data at this stage is important for uncovering problems in the modelling algorithm. We then use algorithm tuning and ensemble methods to improve the k-nearest neighbors (KNN) algorithm. The method developed in this study is compared against other methods, and the extreme values of the dataset are investigated further. The results show that the Random Forest method gives accurate results in the analysis of the tree ring width data from the nine trees, with the lowest Root Mean Square Error value. We also notice that as the assumed ARMA model parameters increased, the probability of selecting the true model also increased. In terms of Extreme Value Theory, the Weibull distribution is a good choice for modelling tree ring data.
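As a sketch of what the KNN tuning step might look like, the snippet below grid-searches k and the distance weighting for a KNN regressor on a placeholder lagged series; the grid, lag count, and data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

# Placeholder series: lagged ring widths predict the next width.
rng = np.random.default_rng(0)
widths = rng.gamma(shape=2.0, scale=0.5, size=200)
lags = 5
X = np.array([widths[i:i + lags] for i in range(len(widths) - lags)])
y = widths[lags:]

# Tune k and the distance weighting by cross-validated RMSE.
grid = GridSearchCV(
    KNeighborsRegressor(),
    {"n_neighbors": [3, 5, 7, 11], "weights": ["uniform", "distance"]},
    scoring="neg_root_mean_squared_error",
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)
```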


Why do Nearest Neighbor Language Models Work?

arXiv.org Artificial Intelligence

Language models (LMs) compute the probability of a text by sequentially computing a representation of an already-seen context and using this representation to predict the next word. Currently, most LMs calculate these representations through a neural network consuming the immediate previous context. Recently, however, retrieval-augmented LMs have been shown to improve over standard neural LMs by accessing information retrieved from a large datastore, in addition to their standard, parametric, next-word prediction. In this paper, we set out to understand why retrieval-augmented language models, and specifically k-nearest neighbor language models (kNN-LMs), perform better than standard parametric LMs, even when the k-nearest neighbor component retrieves examples from the same training set that the LM was originally trained on. To this end, we perform a careful analysis of the various dimensions over which kNN-LM diverges from standard LMs, and investigate these dimensions one by one. Empirically, we identify three main reasons why kNN-LM performs better than standard LMs: using a different input representation for predicting the next tokens, approximate kNN search, and the importance of softmax temperature for the kNN distribution. Further, we incorporate these insights into the model architecture or the training procedure of the standard parametric LM, improving its results without the need for an explicit retrieval component.
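The interpolation at the heart of a kNN-LM can be sketched in a few lines: the kNN distribution comes from a temperature-scaled softmax over negative distances to retrieved datastore entries, and is mixed with the parametric LM's distribution. The snippet below uses exact search for clarity (real systems use approximate kNN), and all names and the interpolation weight are illustrative.

```python
import numpy as np

def knn_lm_next_token(p_lm, query, keys, values, vocab_size,
                      k=16, lam=0.25, temperature=1.0):
    """Interpolate a parametric LM with a kNN distribution over a datastore.

    `keys` are stored context representations, `values` the tokens that
    followed them; `p_lm` is the LM's next-token distribution.
    """
    # Retrieve the k nearest stored contexts by L2 distance.
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Softmax over negative distances; temperature controls peakiness.
    logits = -dists[nearest] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Scatter the neighbor weights onto the tokens they predict.
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nearest], weights)

    return lam * p_knn + (1.0 - lam) * p_lm

# Tiny demo with random stand-ins.
rng = np.random.default_rng(0)
V, D, N = 100, 16, 1000
p = knn_lm_next_token(
    p_lm=np.full(V, 1.0 / V),
    query=rng.normal(size=D),
    keys=rng.normal(size=(N, D)),
    values=rng.integers(0, V, size=N),
    vocab_size=V,
)
print(p.sum())  # ~1.0
```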


Using External Off-Policy Speech-To-Text Mappings in Contextual End-To-End Automated Speech Recognition

arXiv.org Artificial Intelligence

... speech encoders, for downstream applications that often (a) have fewer labeled training examples, and (b) rapidly evolving distributions of speech data. The traditional approach to this problem is to frequently collect fresh data, which can be used to re-train and specialize models, leveraging tools such as domain-prompts [1], incremental-learning [2], knowledge distillation [3], hand-written grammars [4], or metric learning [5, 6] to reduce the impact of re-training the model for the downstream application. Unfortunately, for data that changes on a rapid basis, such as product listings or applications ... Here are the key highlights of our approach: first, we generate a key-value external knowledge store that maps an audio representation of each text element of the catalog (usually consisting of 1M-10M examples) to a semantic representation of the text. Next, we train a model that leverages this external store by attending over retrieved key/value pairs, which we retrieve through approximate k-nearest neighbors. Relying on an external, constant, and off-policy key-value store means that this store can be updated during specialization, requiring only an updated list of phrases for each new model instead of requiring additional fine-tuning.
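A toy sketch of the retrieve-and-attend step, assuming nothing beyond numpy: the keys stand in for audio representations of catalog phrases, the values for their semantic text representations, and exact search replaces the approximate kNN used at scale.

```python
import numpy as np

# Hypothetical stand-ins: embeddings for each catalog phrase.
rng = np.random.default_rng(0)
keys = rng.normal(size=(10_000, 64))    # audio representations of phrases
values = rng.normal(size=(10_000, 64))  # semantic (text) representations

def retrieve_and_attend(query, keys, values, k=8):
    """Retrieve the k nearest keys to the query and attend over their values."""
    # Exact search for clarity; the paper retrieves via approximate kNN.
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Attention weights from scaled query-key dot products (softmax).
    scores = keys[nearest] @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()

    # Context vector the ASR decoder could condition on.
    return weights @ values[nearest]

context = retrieve_and_attend(rng.normal(size=64), keys, values)
```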


IAN: Iterated Adaptive Neighborhoods for manifold learning and dimensionality estimation

arXiv.org Artificial Intelligence

Invoking the manifold assumption in machine learning requires knowledge of the manifold's geometry and dimension, and theory dictates how many samples are required. However, in applications, data are limited, sampling may not be uniform, and manifold properties are unknown and (possibly) non-pure; this implies that neighborhoods must adapt to the local structure. We introduce an algorithm for inferring adaptive neighborhoods for data given by a similarity kernel. Starting with a locally-conservative neighborhood (Gabriel) graph, we sparsify it iteratively according to a weighted counterpart. In each step, a linear program yields minimal neighborhoods globally, and a volumetric statistic reveals neighbor outliers likely to violate manifold geometry. We apply our adaptive neighborhoods to non-linear dimensionality reduction, geodesic computation, and dimension estimation. A comparison against standard algorithms using, e.g., k-nearest neighbors demonstrates their usefulness. Code for our algorithm will be available at https://github.com/dyballa/IAN
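The Gabriel graph the method starts from is easy to state: an edge (i, j) survives iff no third point k lies inside the ball whose diameter is the segment from i to j, i.e. d(i,k)^2 + d(j,k)^2 >= d(i,j)^2 for all k. Below is a small brute-force illustration of just that starting point, not the full IAN algorithm.

```python
import numpy as np

def gabriel_edges(points):
    """Return the edges of the Gabriel graph on a point set.

    Edge (i, j) survives iff no third point k lies inside the ball
    with diameter ij, i.e. d(i,k)^2 + d(j,k)^2 >= d(i,j)^2 for all k.
    """
    n = len(points)
    d2 = np.sum((points[:, None] - points[None, :]) ** 2, axis=-1)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            others = [k for k in range(n) if k not in (i, j)]
            if all(d2[i, k] + d2[j, k] >= d2[i, j] for k in others):
                edges.append((i, j))
    return edges

pts = np.random.default_rng(0).random((20, 2))
print(len(gabriel_edges(pts)))
```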


On Error and Compression Rates for Prototype Rules

arXiv.org Artificial Intelligence

We study the close interplay between error and compression in the non-parametric multiclass classification setting in terms of prototype learning rules. We focus in particular on a recently proposed compression-based learning rule termed OptiNet (Kontorovich, Sabato, and Urner 2016; Kontorovich, Sabato, and Weiss 2017; Hanneke et al. 2021). Beyond its computational merits, this rule has recently been shown to be universally consistent in any metric instance space that admits a universally consistent rule--the first learning algorithm known to enjoy this property. However, its error and compression rates have been left open. Here we derive such rates in the case where instances reside in Euclidean space under commonly posed smoothness and tail conditions on the data distribution. We first show that OptiNet achieves non-trivial compression rates while enjoying near minimax-optimal error rates. We then proceed to study a novel general compression scheme for further compressing prototype rules that locally adapts to the noise level without sacrificing accuracy. Applying it to OptiNet, we show that under a geometric margin condition, a further gain in the compression rate is achieved. Experimental results comparing the performance of the various methods are presented.
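To make the idea of a prototype/compression rule concrete, here is Hart's classic condensed nearest neighbor heuristic, which greedily keeps only the prototypes needed for 1-NN to classify the whole training set. It is a generic illustration of prototype compression, not OptiNet itself.

```python
import numpy as np

def condense_prototypes(X, y):
    """Greedy condensed 1-NN: keep a subset of prototypes such that
    1-NN over the kept set still classifies every training point."""
    keep = [0]  # seed with the first point
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            # Find the nearest kept prototype to point i.
            dists = np.linalg.norm(X[keep] - X[i], axis=1)
            nearest = keep[int(np.argmin(dists))]
            if y[nearest] != y[i]:
                keep.append(i)   # misclassified: promote to prototype
                changed = True
    return np.array(keep)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
idx = condense_prototypes(X, y)
print(f"compressed {len(X)} points to {len(idx)} prototypes")
```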


An efficient hybrid classification approach for COVID-19 based on Harris Hawks Optimization and Salp Swarm Optimization

arXiv.org Artificial Intelligence

Feature selection can be defined as one of the pre-processing steps that decrease the dimensionality of a dataset by identifying the most significant attributes while also boosting the accuracy of classification. For solving feature selection problems, this study presents a hybrid binary version of the Harris Hawks Optimization algorithm (HHO) and Salp Swarm Optimization (SSA), termed HHOSSA, for COVID-19 classification. The proposed HHOSSA improves the basic HHO's performance by using the Salp algorithm's power to select the best fitness values. The HHOSSA was tested against two well-known optimization algorithms, the Whale Optimization Algorithm (WOA) and the Grey Wolf Optimizer (GWO), utilizing a total of 800 chest X-ray images. Four performance metrics (Accuracy, Recall, Precision, F1) were employed in the experiments, using three classifiers (Support Vector Machines (SVMs), k-Nearest Neighbor (KNN), and Extreme Gradient Boosting (XGBoost)). The proposed HHOSSA achieved 96% accuracy with the SVM classifier, and 98% accuracy with the other two classifiers, XGBoost and KNN.
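The wrapper-style objective that metaheuristic feature selectors of this kind typically optimize can be sketched briefly: a candidate binary mask over the features is scored by a classifier's cross-validated accuracy, traded off against the number of features kept. The snippet below is a generic illustration with a KNN classifier and placeholder data, not the HHOSSA update rules themselves.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99):
    """Score a binary feature mask: reward accuracy, penalize feature count.

    The metaheuristic (HHO, SSA, WOA, GWO, ...) proposes masks;
    this wrapper objective scores them.
    """
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(
        KNeighborsClassifier(n_neighbors=5), X[:, mask.astype(bool)], y, cv=5
    ).mean()
    # Trade off accuracy against the fraction of features kept.
    return alpha * acc + (1 - alpha) * (1 - mask.mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))
y = rng.integers(0, 2, size=120)
print(fitness(rng.integers(0, 2, size=30), X, y))
```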


Distributional regression and its evaluation with the CRPS: Bounds and convergence of the minimax risk

arXiv.org Machine Learning

The theoretical advances on the properties of scoring rules over the past decades have broadened the use of scoring rules in probabilistic forecasting. In meteorological forecasting, statistical postprocessing techniques are essential to improve the forecasts made by deterministic physical models. Numerous state-of-the-art statistical postprocessing techniques are based on distributional regression evaluated with the Continuous Ranked Probability Score (CRPS). However, theoretical properties of such evaluation with the CRPS have so far been considered only in the unconditional framework (i.e., without covariates) and for infinite sample sizes. We extend these results and study the rate of convergence in terms of CRPS of distributional regression methods. We find the optimal minimax rate of convergence for a given class of distributions and show that the k-nearest neighbor method and the kernel method reach this optimal minimax rate.
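For the empirical flavor of the quantities involved: the CRPS of an ensemble forecast has the kernel form E|X - y| - (1/2)E|X - X'|, and the k-nearest neighbor method predicts, at a query point, the empirical distribution of the k nearest training responses. The sketch below combines the two on synthetic data; all choices (k, the data-generating process) are illustrative, not the paper's setting.

```python
import numpy as np

def crps_ensemble(samples, obs):
    """Empirical CRPS of a predictive ensemble against an observation:
    E|X - y| - 0.5 * E|X - X'|, with X, X' drawn from the forecast."""
    samples = np.asarray(samples)
    term1 = np.abs(samples - obs).mean()
    term2 = np.abs(samples[:, None] - samples[None, :]).mean()
    return term1 - 0.5 * term2

# kNN-style distributional regression: the predictive distribution at a
# query is the empirical distribution of the k nearest training responses.
rng = np.random.default_rng(0)
x = rng.uniform(size=500)
y = np.sin(6 * x) + 0.3 * rng.normal(size=500)
x0, k = 0.5, 25
neighbors = np.argsort(np.abs(x - x0))[:k]
print(crps_ensemble(y[neighbors], np.sin(6 * x0)))
```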