Nearest Neighbor Methods
Nowcasting Madagascar's real GDP using machine learning algorithms
Ramaharo, Franck, Rasolofomanana, Gerzhino
We investigate the predictive power of different machine learning algorithms to nowcast Madagascar's gross domestic product (GDP). We trained popular regression models, including regularized linear regressions (Ridge, Lasso, Elastic-net), a dimensionality reduction model (principal component regression), the k-nearest neighbors algorithm (k-NN regression), support vector regression (linear SVR), and tree-based ensemble models (Random Forest and XGBoost regressions), on 10 Malagasy quarterly macroeconomic leading indicators over the period 2007Q1--2022Q4, and we used simple econometric models as a benchmark. We measured the nowcast accuracy of each model by calculating the root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Our findings reveal that the Ensemble Model, formed by aggregating the individual predictions, consistently outperforms traditional econometric models. We conclude that machine learning models can deliver more accurate and timely nowcasts of Malagasy economic performance and provide policymakers with additional guidance for data-driven decision making.
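To make the nowcasting pipeline concrete, here is a minimal sketch (not the authors' code) of training several scikit-learn regressors, averaging their predictions into an ensemble nowcast, and scoring it with RMSE, MAE, and MAPE; the indicator matrix and GDP series below are random placeholders.

```python
# Minimal sketch: average the nowcasts of several regressors and score the
# ensemble. X stands in for the 10 quarterly leading indicators and y for the
# real-GDP series; both are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_absolute_percentage_error)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))              # 2007Q1-2022Q4 -> 64 quarters
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=64)

X_train, X_test = X[:56], X[56:]           # hold out the last 8 quarters
y_train, y_test = y[:56], y[56:]

models = [Ridge(), Lasso(alpha=0.01), ElasticNet(alpha=0.01),
          RandomForestRegressor(random_state=0)]
preds = np.column_stack([m.fit(X_train, y_train).predict(X_test)
                         for m in models])
ensemble = preds.mean(axis=1)              # simple average of individual nowcasts

print("RMSE:", np.sqrt(mean_squared_error(y_test, ensemble)))
print("MAE: ", mean_absolute_error(y_test, ensemble))
print("MAPE:", mean_absolute_percentage_error(y_test, ensemble))
```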
Data Classification With Multiprocessing
Dixit, Anuja, Byreddy, Shreya, Song, Guanqun, Zhu, Ting
Classification is one of the most important tasks in Machine Learning (ML), and with recent advancements in artificial intelligence (AI) it is important to find efficient ways to implement it. Generally, the choice of classification algorithm depends on the data it is dealing with, and the accuracy of the algorithm depends on the hyperparameters it is tuned with. One way is to check the accuracy of an algorithm by executing it with different hyperparameters serially and then selecting the parameters that give the highest accuracy to predict the final output. This paper proposes another way, where the algorithm is trained in parallel with different hyperparameters to reduce the execution time. In the end, results from all the trained variations of the algorithm are ensembled to exploit the parallelism and improve the accuracy of prediction. Python multiprocessing is used to test this hypothesis with different classification algorithms such as K-Nearest Neighbors (KNN), Support Vector Machines (SVM), random forest, and decision tree, and we review the factors affecting parallelism. The ensembled output considers the predictions from all processes, and the final class is the one predicted by the maximum number of processes, which increases the reliability of predictions. We conclude that ensembling improves accuracy and multiprocessing reduces execution time for the selected algorithms.
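The parallel-training-plus-voting scheme can be illustrated with a short Python multiprocessing sketch; the hyperparameter grid and dataset below are illustrative stand-ins, not the paper's setup.

```python
# Sketch (assumptions, not the paper's code): train one KNN per hyperparameter
# setting in a separate process and majority-vote the predictions.
from multiprocessing import Pool
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def fit_predict(k):
    """Train KNN with a given k and return its test predictions."""
    return KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).predict(X_te)

if __name__ == "__main__":
    with Pool() as pool:                       # one process per hyperparameter
        all_preds = pool.map(fit_predict, [1, 3, 5, 7, 9])
    stacked = np.array(all_preds)              # shape: (n_settings, n_test)
    # Final class = the one predicted by the maximum number of processes.
    voted = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, stacked)
    print("ensemble accuracy:", (voted == y_te).mean())
```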
Sign Language Conversation Interpretation Using Wearable Sensors and Machine Learning
Kalandar, Basma, Dworakowski, Ziemowit
The number of people with some degree of hearing loss reached 1.57 billion in 2019. This large population often suffers on many personal and professional levels and needs to be fully included in the rest of society. This paper presents a proof of concept of an automatic sign language recognition system based on data obtained using a wearable device with 3 flex sensors. The system is designed to interpret a selected set of American Sign Language (ASL) dynamic words by collecting data in sequences of the performed signs and using machine learning methods. The built models achieved high performance: Random Forest with 99% accuracy, Support Vector Machine (SVM) with 99%, and two K-Nearest Neighbor (KNN) models with 98%. This indicates many possible paths toward the development of a full-scale system.
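A sketch of the classification stage under stated assumptions: each sign is taken to be a fixed-length sequence of readings from the 3 flex sensors, flattened into one feature vector; the synthetic data, window length, and vocabulary below are invented for illustration.

```python
# Illustrative only: random stand-ins for flex-sensor sequences and sign labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_signs, seq_len, n_sensors = 200, 50, 3
X = rng.normal(size=(n_signs, seq_len * n_sensors))  # flattened sensor sequences
y = rng.integers(0, 5, size=n_signs)                 # 5 hypothetical ASL words

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```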
Density Descent for Diversity Optimization
Lee, David H., Palaparthi, Anishalakshmi V., Fontaine, Matthew C., Tjanaka, Bryon, Nikolaidis, Stefanos
Diversity optimization seeks to discover a set of solutions that elicit diverse features. Prior work has proposed Novelty Search (NS), which, given a current set of solutions, seeks to expand the set by finding points in areas of low density in the feature space. However, to estimate density, NS relies on a heuristic that considers the k-nearest neighbors of the search point in the feature space, which yields a weaker stability guarantee. We propose Density Descent Search (DDS), an algorithm that explores the feature space via gradient descent on a continuous density estimate of the feature space, thereby providing a stronger stability guarantee. We experiment with DDS and two density estimation methods: kernel density estimation (KDE) and continuous normalizing flow (CNF). On several standard diversity optimization benchmarks, DDS outperforms NS, the recently proposed MAP-Annealing algorithm, and other state-of-the-art baselines. Additionally, we prove that DDS with KDE provides stronger stability guarantees than NS, making it more suitable for adaptive optimizers. Furthermore, we prove that NS is a special case of DDS that descends a KDE of the feature space.
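The core KDE-descent idea can be sketched as follows (a toy rendition, not the authors' implementation): compute a Gaussian kernel density estimate over the archived features and follow its negative gradient to move a search point toward low-density regions.

```python
# Rough sketch of KDE descent: the archive, bandwidth, and step size below are
# arbitrary choices for illustration.
import numpy as np

def kde_and_grad(x, archive, h=0.5):
    """Gaussian KDE value and its analytic gradient at feature point x."""
    n, d = archive.shape
    z = n * (2 * np.pi * h**2) ** (d / 2)          # normalization constant
    diffs = x - archive                            # (n, d) offsets to archive
    w = np.exp(-0.5 * np.sum(diffs**2, axis=1) / h**2)
    density = w.sum() / z
    grad = -(diffs * w[:, None]).sum(axis=0) / (z * h**2)
    return density, grad

archive = np.random.default_rng(0).normal(size=(100, 2))  # archived features
x = np.zeros(2)                                    # current search point
for _ in range(50):                                # plain gradient descent
    _, g = kde_and_grad(x, archive)
    x -= 0.1 * g                                   # step toward lower density
print("low-density feature point:", x)
```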
Towards Faster k-Nearest-Neighbor Machine Translation
Shi, Xiangyu, Liang, Yunlong, Xu, Jinan, Chen, Yufeng
Recent works have proven the effectiveness of k-nearest-neighbor machine translation (a.k.a. kNN-MT) approaches at producing remarkable improvements in cross-domain translation. However, these models suffer from heavy retrieval overhead on the entire datastore when decoding each token. We observe that during the decoding phase, about 67% to 84% of tokens are unvaried after searching the corpus datastore, which means most tokens cause futile retrievals and introduce unnecessary computational costs by initiating k-nearest-neighbor searches. We consider this phenomenon explainable in linguistic terms and propose a simple yet effective multi-layer perceptron (MLP) network to predict whether a token should be translated jointly by the neural machine translation model and the probabilities produced by the kNN search, or by the neural model alone. The results show that our method succeeds in reducing redundant retrieval operations and significantly reduces the overhead of kNN retrievals by up to 53%, at the expense of a slight decline in translation quality. Moreover, our method can work together with all existing kNN-MT systems.
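A hedged PyTorch sketch of the gating idea (the paper's actual MLP architecture, input features, and interpolation weight are not specified here and are assumptions): a small MLP inspects the decoder state and decides whether the costly datastore search is worth running.

```python
# Sketch only: RetrievalGate and the lambda-interpolation are stand-ins for the
# paper's components; knn_search is a hypothetical datastore-lookup callable.
import torch
import torch.nn as nn

class RetrievalGate(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim // 2),
                                 nn.ReLU(), nn.Linear(hidden_dim // 2, 1))

    def forward(self, h):
        return torch.sigmoid(self.mlp(h))          # P(retrieval is useful)

def next_token_probs(h, nmt_probs, knn_search, lam=0.3, gate=None, thr=0.5):
    """Interpolate NMT and kNN distributions only when the gate fires."""
    if gate is not None and gate(h).item() < thr:
        return nmt_probs                           # skip the costly kNN search
    knn_probs = knn_search(h)                      # retrieve from the datastore
    return (1 - lam) * nmt_probs + lam * knn_probs

# Toy usage with random stand-ins for the NMT model and the datastore.
gate = RetrievalGate(512)
h = torch.randn(512)                               # a decoder hidden state
nmt_probs = torch.softmax(torch.randn(32000), dim=0)
probs = next_token_probs(h, nmt_probs,
                         knn_search=lambda s: torch.full((32000,), 1 / 32000),
                         gate=gate)
```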
A Robust and Efficient Boundary Point Detection Method by Measuring Local Direction Dispersion
Peng, Dehua, Gui, Zhipeng, Wu, Huayi
Boundary points pose a significant challenge for machine learning tasks, including classification, clustering, and dimensionality reduction. Due to the similarity of features, boundary areas can result in mixed-up classes or clusters, leading to a crowding problem in dimensionality reduction. To address this challenge, numerous boundary point detection methods have been developed, but they are insufficient to accurately and efficiently identify boundary points in non-convex structures and high-dimensional manifolds. In this work, we propose a robust and efficient method for detecting boundary points using Local Direction Dispersion (LoDD). LoDD considers that internal points are surrounded by neighboring points in all directions, while the neighboring points of a boundary point tend to be distributed only in a certain directional range. LoDD adopts a density-independent K-Nearest Neighbors (KNN) method to determine neighboring points, and defines a statistic-based metric using the eigenvalues of the covariance matrix of the KNN coordinates to measure the centrality of a query point. We demonstrated the validity of LoDD on five synthetic datasets (2-D and 3-D) and ten real-world benchmarks, and tested its clustering performance by equipping it with two typical clustering methods, K-means and Ncut. Our results show that LoDD achieves promising and robust detection accuracy in a time-efficient manner.
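Since the paper's exact statistic is not reproduced in the abstract, the sketch below uses an illustrative eigenvalue-dispersion score as a stand-in: an interior point's KNN coordinates have balanced covariance eigenvalues, while a boundary point's neighbors pile up on one side, making the eigenvalues unbalanced.

```python
# Sketch of the LoDD intuition; the dispersion score is an assumed proxy,
# not the paper's metric.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def direction_dispersion(X, k=10):
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)                    # idx[:, 0] is the point itself
    scores = np.empty(len(X))
    for i, neigh in enumerate(idx[:, 1:]):
        centered = X[neigh] - X[i]                 # neighbors relative to query
        eig = np.linalg.eigvalsh(np.cov(centered.T))
        eig = eig / eig.sum()                      # normalize the spectrum
        scores[i] = eig.std()                      # high imbalance -> boundary-like
    return scores

X = np.random.default_rng(0).uniform(size=(500, 2))
scores = direction_dispersion(X)
boundary = X[scores > np.quantile(scores, 0.9)]    # flag most anisotropic points
```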
PEFA: Parameter-Free Adapters for Large-scale Embedding-based Retrieval Models
Chang, Wei-Cheng, Jiang, Jyun-Yu, Zhang, Jiong, Al-Darabsah, Mutasem, Teo, Choon Hui, Hsieh, Cho-Jui, Yu, Hsiang-Fu, Vishwanathan, S. V. N.
Embedding-based Retrieval Models (ERMs) have emerged as a promising framework for large-scale text retrieval problems thanks to powerful large language models. Nevertheless, fine-tuning ERMs to reach state-of-the-art results can be expensive due to the extreme scale of data as well as the complexity of multi-stage pipelines (e.g., pre-training, fine-tuning, distillation). In this work, we propose the PEFA framework, namely ParamEter-Free Adapters, for fast tuning of ERMs without any backward pass in the optimization. At the index-building stage, PEFA equips the ERM with a non-parametric k-nearest neighbor (kNN) component. At the inference stage, PEFA performs a convex combination of two scoring functions, one from the ERM and the other from the kNN. Based on the neighborhood definition, the PEFA framework induces two realizations, namely PEFA-XL (i.e., extra large), using double ANN indices, and PEFA-XS (i.e., extra small), using a single ANN index. Empirically, PEFA achieves significant improvements on two retrieval applications. For document retrieval, regarding the Recall@100 metric, PEFA improves pre-trained ERMs on Trivia-QA by an average of 13.2% and fine-tuned ERMs on NQ-320K by an average of 5.5%. For product search, PEFA improves the Recall@100 of the fine-tuned ERMs by an average of 5.3% and 14.5% for PEFA-XS and PEFA-XL, respectively. Our code is available at https://github.com/amzn/pecos/tree/mainline/examples/pefa-wsdm24.
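A minimal sketch of the inference-stage scoring, under stated assumptions (the real implementation lives in the linked pecos repository): blend the ERM's dot-product relevance with a non-parametric kNN vote over indexed training queries.

```python
# Sketch only: brute-force search stands in for the ANN index, and the vote
# scheme is an assumed realization of the kNN scoring function.
import numpy as np

def pefa_scores(q, doc_embs, train_q_embs, train_labels, k=5, alpha=0.5):
    """Convex combination of the ERM score and a kNN vote, per document."""
    erm = doc_embs @ q                                 # ERM similarity per document
    nn = np.argsort(np.linalg.norm(train_q_embs - q, axis=1))[:k]
    knn = np.zeros(len(doc_embs))
    np.add.at(knn, train_labels[nn], 1.0 / k)          # neighbors vote for their docs
    return alpha * erm + (1 - alpha) * knn

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 64))                     # document embeddings
train_q = rng.normal(size=(5000, 64))                  # indexed training queries
train_y = rng.integers(0, 1000, size=5000)             # their relevant doc ids
print(pefa_scores(rng.normal(size=64), docs, train_q, train_y).argmax())
```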
Information Modified K-Nearest Neighbor
Vahedifar, Mohammad Ali, Akhtarshenas, Azim, Sabbaghian, Mariam, Rafatpanah, Mohammad
In this research paper, we introduce a novel classification method aimed at improving the performance of the K-Nearest Neighbors (KNN) algorithm. Our approach leverages Mutual Information (MI) to enhance the significance of weights and draws inspiration from Shapley values, a concept originating from cooperative game theory, to refine value allocation. The fundamental concept underlying KNN is the classification of samples based on a majority vote of their k-nearest neighbors. While both the distances and labels of these neighbors are crucial, traditional KNN assigns equal weight to all samples and neglects the varying importance of each neighbor based on its distance and label. In the proposed method, known as Information-Modified KNN (IMKNN), we address this issue with a straightforward algorithm. To evaluate the effectiveness of our approach, we compare it with 7 contemporary variants of KNN, as well as the traditional KNN. Each of these variants exhibits its own advantages and limitations. We conduct experiments on 12 widely used datasets, assessing the methods' performance in terms of accuracy, precision, and recall. Our study demonstrates that IMKNN consistently outperforms the other methods across different datasets and criteria, highlighting its superior performance in various classification tasks. These findings underscore the potential of IMKNN as a valuable tool for enhancing the capabilities of the KNN algorithm in diverse applications.
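IMKNN's precise MI- and Shapley-based weighting is not given in the abstract, so the stand-in below only illustrates the general idea of unequal neighbor weights: features are rescaled by their mutual information with the label, and neighbors vote with inverse-distance weights instead of equally.

```python
# Loose stand-in, not the IMKNN algorithm: MI-weighted feature space plus
# distance-weighted KNN voting.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mi = mutual_info_classif(X_tr, y_tr, random_state=0)   # per-feature relevance
X_tr_w, X_te_w = X_tr * mi, X_te * mi                  # MI-weighted space

knn = KNeighborsClassifier(n_neighbors=7, weights="distance")
print("accuracy:", knn.fit(X_tr_w, y_tr).score(X_te_w, y_te))
```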
Classification of Spam URLs Using Machine Learning Approaches
Odeh, Omar Husni, Arram, Anas, Njoum, Murad
The Internet is used by billions of users every day because it offers fast and free communication tools and platforms. Nevertheless, with this significant increase in usage, huge amounts of spam are generated every second, which wastes internet resources and, more importantly, users' time. This study investigates the use of machine learning models to classify URLs as spam or non-spam. Since each sample consists of only a single raw feature (the URL string itself), we first extract lexical features from it, and then we compare the performance of several models, including k-nearest neighbors, bagging, random forest, logistic regression, and others. Experimental results demonstrate that bagging outperformed the other models and achieved the highest accuracy of 98.64%. In addition, bagging outperformed the current state-of-the-art approaches, which emphasizes its effectiveness in addressing spam-related challenges on the Internet. This suggests that bagging is a promising approach for URL spam classification.
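An illustrative sketch of the pipeline (the feature set and toy URLs below are made up, not the paper's): derive simple lexical features from the raw URL string, then fit a bagging ensemble as in the paper.

```python
# Toy example only: a handful of invented URLs and hand-crafted features.
import numpy as np
from sklearn.ensemble import BaggingClassifier

def url_features(url):
    """A few simple lexical features extracted from a single URL string."""
    return [len(url),
            url.count("."),
            url.count("-"),
            sum(c.isdigit() for c in url),
            int(url.startswith("https"))]

urls = ["https://example.com/news", "https://example.org/docs/index",
        "http://win-big-pr1zes.xyz/click.php", "http://cheap-p1lls.top/buy"]
labels = [0, 0, 1, 1]                              # 0 = non-spam, 1 = spam
X = np.array([url_features(u) for u in urls])

clf = BaggingClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict([url_features("http://free-m0ney.xyz/offer")]))
```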
Quantum Kernel t-Distributed Stochastic Neighbor Embedding
Kawase, Yoshiaki, Mitarai, Kosuke, Fujii, Keisuke
Data visualization is important for understanding characteristics of data that are difficult to see directly. It is used to visualize loss landscapes and optimization trajectories to analyze optimization performance. A popular optimization analysis is performed by visualizing the loss landscape around the reached local or global minimum using principal component analysis. However, this visualization depends on the variational parameters of a quantum circuit rather than on quantum states, which makes it difficult to understand the mechanism of the optimization process through the properties of quantum states. Here, we propose a quantum data visualization method using quantum kernels, which enables fast and highly accurate visualization of quantum states. In our numerical experiments, we visualize the hand-written digits dataset and apply the $k$-nearest neighbor algorithm to the low-dimensional data to quantitatively evaluate our proposed method against a classical kernel method. As a result, our proposed method achieves accuracy comparable to the state-of-the-art classical kernel method, meaning that the proposed visualization method based on quantum machine learning does not degrade the separability of the higher-dimensional input data. Furthermore, we visualize the optimization trajectories of finding the ground states of the transverse-field Ising model and successfully identify their characteristics. Since quantum states are higher-dimensional objects that can only be seen via observables, our visualization method, which inherits the similarity of quantum data, would be useful in understanding the behavior of quantum circuits and algorithms.
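A classical-analog sketch of the evaluation protocol (an RBF kernel stands in for the quantum kernel, which is an assumption): embed the digits with kernel-based t-SNE on precomputed kernel-induced distances, then score the 2-D map with a k-nearest-neighbor classifier to check separability.

```python
# Sketch of the evaluation loop; a quantum kernel would replace rbf_kernel.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
K = rbf_kernel(X)                                  # stand-in for a quantum kernel
D = np.sqrt(np.maximum(2 - 2 * K, 0))              # kernel-induced distances

emb = TSNE(metric="precomputed", init="random",
           random_state=0).fit_transform(D)        # 2-D visualization
acc = cross_val_score(KNeighborsClassifier(5), emb, y, cv=5).mean()
print("kNN accuracy on the 2-D embedding:", acc)
```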