Nearest Neighbor Methods


Certified data-driven physics-informed greedy auto-encoder simulator

arXiv.org Artificial Intelligence

A parametric adaptive greedy Latent Space Dynamics Identification (gLaSDI) framework is developed for accurate, efficient, and certified data-driven physics-informed greedy auto-encoder simulators of high-dimensional nonlinear dynamical systems. In the proposed framework, an auto-encoder and dynamics identification models are trained interactively to discover intrinsic and simple latent-space dynamics. To effectively explore the parameter space for optimal model performance, an adaptive greedy sampling algorithm integrated with a physics-informed error indicator is introduced to search for optimal training samples on the fly, outperforming the conventional predefined uniform sampling. Further, an efficient k-nearest neighbor convex interpolation scheme is employed to exploit local latent-space dynamics for improved predictability. Numerical results demonstrate that the proposed method achieves 121 to 2,658x speed-up with 1 to 5% relative errors for radial advection and 2D Burgers dynamical problems.
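
As a concrete illustration of the interpolation step, the sketch below shows one way a k-nearest-neighbor convex interpolation over the parameter space could look: blend the latent dynamics coefficients of the k closest training samples with normalized inverse-distance weights. This is a hedged reading of the abstract; the array names and the weighting scheme are our assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of k-NN convex interpolation over a parameter space.
# `train_params` and `latent_coeffs` are assumed names, not the paper's API.
import numpy as np

def knn_convex_interpolate(query, train_params, latent_coeffs, k=4, eps=1e-12):
    """query: (d,) new parameter point; train_params: (n, d) sampled points;
    latent_coeffs: (n, m) dynamics-model coefficients learned per sample."""
    dists = np.linalg.norm(train_params - query, axis=1)
    idx = np.argsort(dists)[:k]        # k nearest training samples
    w = 1.0 / (dists[idx] + eps)       # inverse-distance weights
    w /= w.sum()                       # normalize so the blend is convex
    return w @ latent_coeffs[idx]      # local model for the new parameter
```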


An Optimal k Nearest Neighbours Ensemble for Classification Based on Extended Neighbourhood Rule with Features subspace

arXiv.org Artificial Intelligence

To minimize the effect of outliers, kNN ensembles identify a set of the closest observations to a new sample point and estimate its unknown class by majority voting over the labels of the training instances in that neighbourhood. Ordinary kNN-based procedures determine the k closest training observations in a neighbourhood region (enclosed by a sphere) using a distance formula. This may fail when test points follow the pattern of nearest observations that lie along a path not contained in the sphere of nearest neighbours. Furthermore, these methods combine hundreds of base kNN learners, many of which might have high classification errors, resulting in poor ensembles. To overcome these problems, an optimal extended neighbourhood rule based ensemble is proposed in which the neighbours are determined in k steps: the first step takes the sample point nearest to the unseen observation; the second takes the data point closest to the previously selected one; and the process continues until the required k observations are obtained. Each base model in the ensemble is constructed on a bootstrap sample in conjunction with a random subset of features. After building a sufficiently large number of base models, the optimal models are selected based on their performance on out-of-bag (OOB) data.
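
The step-by-step neighbour selection is easy to state in code. The sketch below implements only the chained-neighbour rule described above, on a plain feature matrix; bootstrapping, feature subsetting, and OOB model selection are omitted.

```python
# Illustrative sketch of the extended neighbourhood rule: neighbours are
# chosen in k chained steps rather than inside a single sphere.
import numpy as np

def extended_neighbourhood(X_train, x_new, k):
    """Pick k neighbours: the first is closest to x_new; each subsequent
    one is the closest remaining point to the previously selected point."""
    remaining = list(range(len(X_train)))
    anchor, chain = x_new, []
    for _ in range(k):
        d = np.linalg.norm(X_train[remaining] - anchor, axis=1)
        j = remaining.pop(int(np.argmin(d)))   # nearest remaining point
        chain.append(j)
        anchor = X_train[j]                    # next step starts here
    return chain
```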


Look back, look around: a systematic analysis of effective predictors for new outlinks in focused Web crawling

arXiv.org Artificial Intelligence

Small and medium enterprises rely on detailed Web analytics to be informed about their market and competition. Focused crawlers meet this demand by crawling and indexing specific parts of the Web. Critically, a focused crawler must quickly find new pages that have not yet been indexed. Since a new page can be discovered only by following a new outlink, predicting new outlinks is very relevant in practice. In the literature, many feature designs have been proposed for predicting changes in the Web. In this work we provide a structured analysis of this problem, using new outlinks as our running prediction target. Specifically, we unify earlier feature designs in a taxonomic arrangement of features along two dimensions: static versus dynamic features, and features of a page versus features of the network around it. Within this taxonomy, complemented by our new (mainly, dynamic network) features, we identify the best predictors for new outlinks. Our main conclusion is that the most informative features are the recent history of new outlinks on a page itself and on its content-related pages. Hence, we propose a new 'look back, look around' (LBLA) model that uses only these features. With the obtained predictions, we design a number of scoring functions to guide a focused crawler to pages with the most new outlinks, and compare their performance. The LBLA approach proved extremely effective, outperforming other models, including those that use the most complete set of features. One of the learners we use is the recent NGBoost method, which assumes a Poisson distribution for the number of new outlinks on a page and learns its parameters. This connects two previously unrelated avenues in the literature: predictions based on features of a page, and those based on probabilistic modelling. All experiments were carried out on an original dataset, made available by a commercial focused crawler.
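
For the probabilistic-modelling half of that connection, a count regression with NGBoost under a Poisson assumption might look like the sketch below. It uses the open-source ngboost package; the feature matrix and outlink counts are synthetic placeholders, not the paper's dataset.

```python
# Hedged sketch: NGBoost with a Poisson distribution for count targets,
# as the abstract describes for new-outlink prediction. Data is synthetic.
import numpy as np
from ngboost import NGBRegressor
from ngboost.distns import Poisson

X = np.random.rand(500, 10)                # placeholder page/network features
y = np.random.poisson(lam=3, size=500)     # placeholder new-outlink counts

ngb = NGBRegressor(Dist=Poisson).fit(X, y) # learns a Poisson rate per page
dist = ngb.pred_dist(X[:5])                # full predictive distributions
```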


Application of Explainable Machine Learning in Detecting and Classifying Ransomware Families Based on API Call Analysis

arXiv.org Artificial Intelligence

Ransomware has emerged as one of the major global threats in recent years. The alarming rate of increase in ransomware attacks and new ransomware variants prompts researchers to constantly examine the distinguishing traits of ransomware and refine their detection strategies. An Application Programming Interface (API) is a way for one program to collaborate with another; API calls are the medium by which they communicate. Ransomware uses this strategy to interact with the OS and makes a significantly higher number of calls, in different sequences, to request actions. This research work utilizes the frequencies of different API calls to detect and classify ransomware families. First, a Web crawler is developed to automate collecting the Windows Portable Executable (PE) files of 15 different ransomware families. By extracting frequencies of 68 API calls, we develop our dataset in the first phase of a two-phase feature engineering process. After selecting the most significant features in the second phase, we deploy six supervised machine learning models: Naïve Bayes, Logistic Regression, Random Forest, Stochastic Gradient Descent, K-Nearest Neighbor, and Support Vector Machine. Then, the performances of all the classifiers are compared to select the best model. The results reveal that Logistic Regression can efficiently classify ransomware into the corresponding families, securing 99.15% overall accuracy. Finally, instead of relying on the 'black box' characteristic of the machine learning models, we present a post-hoc analysis of our best-performing model using 'SHapley Additive exPlanations' (SHAP) values to ascertain the transparency and trustworthiness of the model's predictions.
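
The overall pipeline shape (call-frequency features, a logistic regression classifier, SHAP for post-hoc explanation) can be sketched as below. The data here is synthetic and the explainer choice is our assumption; the actual study uses 68 API-call frequencies across 15 families.

```python
# Hedged sketch of the described pipeline on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
import shap

rng = np.random.default_rng(0)
X = rng.poisson(5, size=(300, 68)).astype(float)  # stand-in API-call counts
y = rng.integers(0, 15, size=300)                 # stand-in family labels

clf = LogisticRegression(max_iter=1000).fit(X, y)

explainer = shap.LinearExplainer(clf, X)          # SHAP for a linear model
shap_values = explainer.shap_values(X[:10])       # per-feature contributions
```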


CS-Shapley: Class-wise Shapley Values for Data Valuation in Classification

arXiv.org Artificial Intelligence

Data valuation, or the valuation of individual datum contributions, has seen growing interest in machine learning due to its demonstrable efficacy for tasks such as noisy label detection. In particular, due to the desirable axiomatic properties, several Shapley value approximation methods have been proposed. In these methods, the value function is typically defined as the predictive accuracy over the entire development set. However, this limits the ability to differentiate between training instances that are helpful or harmful to their own classes. Intuitively, instances that harm their own classes may be noisy or mislabeled and should receive a lower valuation than helpful instances. In this work, we propose CS-Shapley, a Shapley value with a new value function that discriminates between training instances' in-class and out-of-class contributions. Our theoretical analysis shows the proposed value function is (essentially) the unique function that satisfies two desirable properties for evaluating data values in classification. Further, our experiments on two benchmark evaluation tasks (data removal and noisy label detection) and four classifiers demonstrate the effectiveness of CS-Shapley over existing methods. Lastly, we evaluate the "transferability" of data values estimated from one classifier to others, and our results suggest Shapley-based data valuation is transferable for application across different models.
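
To make the mechanics concrete, the sketch below pairs a standard Monte Carlo (permutation) Shapley estimator with a deliberately simplified class-aware value function: accuracy on the development points of the training instance's own class. This placeholder conveys only the in-class idea; the paper's actual CS-Shapley value function also folds in the out-of-class contribution.

```python
# Permutation-based Shapley estimation with an illustrative class-aware
# value function. NOT the paper's exact CS-Shapley function.
import numpy as np
from sklearn.linear_model import LogisticRegression

def in_class_value(subset, X, y, X_dev, y_dev, target_class):
    """Placeholder value: accuracy on dev points of the target class."""
    mask = (y_dev == target_class)
    if len(subset) == 0 or len(set(y[subset])) < 2 or not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=200).fit(X[subset], y[subset])
    return float((clf.predict(X_dev[mask]) == y_dev[mask]).mean())

def shapley_value(i, X, y, X_dev, y_dev, n_perm=50, seed=0):
    rng, n, total = np.random.default_rng(seed), len(y), 0.0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        pos = int(np.where(perm == i)[0][0])
        before = perm[:pos]
        total += in_class_value(np.append(before, i), X, y, X_dev, y_dev, y[i]) \
               - in_class_value(before, X, y, X_dev, y_dev, y[i])
    return total / n_perm          # average marginal contribution of point i
```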


Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning

arXiv.org Artificial Intelligence

Exploration is critical for deep reinforcement learning in complex environments with high-dimensional observations and sparse rewards. To address this problem, recent approaches leverage intrinsic rewards to improve exploration, such as novelty-based exploration and prediction-based exploration. However, many intrinsic reward modules require sophisticated structures and representation learning, resulting in prohibitive computational complexity and unstable performance. In this paper, we propose Rewarding Episodic Visitation Discrepancy (REVD), a computation-efficient and quantified exploration method. More specifically, REVD provides intrinsic rewards by evaluating the Rényi-divergence-based visitation discrepancy between episodes. For efficient divergence estimation, a k-nearest neighbor estimator is utilized with a randomly-initialized state encoder. Finally, REVD is tested on Atari games and PyBullet Robotics Environments. Extensive experiments demonstrate that REVD significantly improves the sample efficiency of reinforcement learning algorithms and outperforms the benchmark methods.
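
A stripped-down version of the estimation machinery reads as follows: embed an episode's states with a fixed, randomly-initialized encoder, then reward states whose embeddings are far from their k nearest neighbours. This is a simplified particle-based proxy under our own assumptions (observation size, architecture), not the paper's exact Rényi-divergence estimator between episodes.

```python
# Simplified k-NN intrinsic-reward sketch with a fixed random encoder.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
for p in encoder.parameters():
    p.requires_grad_(False)                    # random encoder stays fixed

@torch.no_grad()
def intrinsic_rewards(states, k=5):
    """states: (T, 64) tensor of one episode's observations."""
    z = encoder(states)                        # (T, 32) random embeddings
    d = torch.cdist(z, z)                      # pairwise distances
    knn_d, _ = d.topk(k + 1, largest=False)    # includes self-distance 0
    return torch.log(1.0 + knn_d[:, 1:].mean(dim=1))  # larger when novel
```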


Hyperbolic Centroid Calculations for Text Classification

arXiv.org Artificial Intelligence

A new development in NLP is the construction of hyperbolic word embeddings. As opposed to their Euclidean counterparts, hyperbolic embeddings are represented not by vectors, but by points in hyperbolic space. This makes the most common basic scheme for constructing document representations, namely the averaging of word vectors, meaningless in the hyperbolic setting. We reinterpret the vector mean as the centroid of the points represented by the vectors, and investigate various hyperbolic centroid schemes and their effectiveness at text classification.
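
One widely used candidate for such a centroid is the Einstein midpoint, computed by mapping Poincaré-ball embeddings to the Klein model, taking a Lorentz-factor-weighted average, and mapping back. The conversion formulas below are standard; whether this is among the schemes the paper compares is our assumption.

```python
# Einstein midpoint of points in the Poincare ball, via the Klein model.
import numpy as np

def einstein_midpoint(P):
    """P: (n, d) array of points in the Poincare ball (norms < 1)."""
    norms2 = np.sum(P * P, axis=1, keepdims=True)
    K = 2.0 * P / (1.0 + norms2)                   # Poincare -> Klein
    gamma = 1.0 / np.sqrt(1.0 - np.sum(K * K, axis=1, keepdims=True))
    m = (gamma * K).sum(axis=0) / gamma.sum()      # weighted Klein average
    return m / (1.0 + np.sqrt(1.0 - m @ m))        # Klein -> Poincare
```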


Improving the Predictive Performances of $k$ Nearest Neighbors Learning by Efficient Variable Selection

arXiv.org Artificial Intelligence

Variable selection, also known as feature selection, plays a critical role in machine learning. It helps avoid the curse of dimensionality, improves prediction performance, shortens training time, and saves computational resources.
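
Although the abstract does not spell out the selection procedure, a common scheme it could resemble is greedy forward selection scored by cross-validated kNN accuracy, sketched below; the stopping rule and scoring are our assumptions.

```python
# Hedged sketch: greedy forward variable selection for a k-NN classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_select(X, y, max_vars=10, k=5):
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_vars:
        scores = {j: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                     X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:             # stop when no variable helps
            break
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected
```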


Non-Parametric Domain Adaptation for End-to-End Speech Translation

arXiv.org Artificial Intelligence

End-to-End Speech Translation (E2E-ST) has received increasing attention due to its potential for less error propagation, lower latency, and fewer parameters. However, the effectiveness of neural-based approaches to this task is severely limited by the available training corpus, especially for domain adaptation, where in-domain triplet training data is scarce or nonexistent. In this paper, we propose a novel non-parametric method that leverages a domain-specific text translation corpus to achieve domain adaptation for the E2E-ST system. To this end, we first incorporate an additional encoder into the pre-trained E2E-ST model to realize text translation modelling, and then unify the decoder's output representation for the text and speech translation tasks by reducing the corresponding representation mismatch in the available triplet training data. During domain adaptation, a k-nearest-neighbor (kNN) classifier is introduced to produce the final translation distribution using an external datastore built from the domain-specific text translation corpus, while the universal output representation is used to perform the similarity search. Experiments on the Europarl-ST benchmark demonstrate that, when only in-domain text translation data is involved, our proposed approach significantly improves the baseline by 12.82 BLEU on average across all translation directions, even outperforming the strong in-domain fine-tuning method.
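
The retrieval step is in the spirit of kNN-MT: look up neighbours of the decoder's output representation in a datastore of (representation, target-token) pairs, turn the distances into a retrieval distribution, and interpolate it with the model's softmax. The sketch below shows that interpolation; shapes, the temperature T, and the mixing weight lam are assumptions rather than the paper's settings.

```python
# Sketch of kNN-interpolated decoding over an external datastore.
import numpy as np

def knn_interpolated_probs(query, keys, values, model_probs,
                           k=8, T=10.0, lam=0.5):
    """query: (d,) output representation; keys: (N, d) datastore keys;
    values: (N,) target-token ids; model_probs: (V,) model softmax."""
    d = np.linalg.norm(keys - query, axis=1)
    idx = np.argsort(d)[:k]                    # nearest datastore entries
    w = np.exp(-d[idx] / T)
    w /= w.sum()                               # softmax over neg. distances
    p_knn = np.zeros_like(model_probs)
    np.add.at(p_knn, values[idx], w)           # aggregate weight per token
    return lam * p_knn + (1.0 - lam) * model_probs
```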


Applications of the K Nearest Neighbor Algorithm, Part 1 (Artificial Intelligence)

#artificialintelligence

Abstract: Minimal parameter setup is desirable in machine learning models, as it avoids time-consuming optimization processes. The k-Nearest Neighbors classifier is one of the most effective and straightforward models employed in numerous problems. Despite its well-known performance, it requires a value of k suited to the specific data distribution, demanding expensive computational effort to tune. This paper proposes a k-Nearest Neighbors classifier that bypasses the need to define the value of k: the model computes the k value adaptively, considering the data distribution of the training set.
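
One plausible reading of "computes the k value adaptively" is to derive k from the training data itself, for instance by maximizing leave-one-out accuracy, so the user never supplies it. The sketch below illustrates that reading only; the paper's adaptive rule may differ.

```python
# Illustrative only: pick k by leave-one-out accuracy on the training set.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def adaptive_k(X, y, k_max=25):
    best_k, best_acc = 1, -1.0
    for k in range(1, min(k_max, len(y) - 1) + 1):
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                              X, y, cv=LeaveOneOut()).mean()
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k
```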