q-means: A quantum algorithm for unsupervised machine learning
Quantum information is a promising new paradigm for fast computations that can provide substantial speedups for many algorithms we use today. Among them, quantum machine learning is one of the most exciting applications of quantum computers. In this paper, we introduce q-means, a new quantum algorithm for clustering. It is a quantum version of a robust k-means algorithm, with similar convergence and precision guarantees. We also design a method to pick the initial centroids equivalent to the classical k-means++ method. Our algorithm currently provides an exponential speedup in the number of points in the dataset, compared to the classical k-means algorithm. We also detail the running time of q-means when applied to well-clusterable datasets. We provide a detailed runtime analysis and numerical simulations for specific datasets. Along with the algorithm, the theorems and tools introduced in this paper can be reused for various applications in quantum machine learning.
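The classical counterpart that q-means quantizes can be sketched in a few lines. The following is a generic k-means with k-means++ seeding (the function names are ours for illustration, not from the paper), not the authors' quantum routine:

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: each new centroid is drawn with probability
    proportional to its squared distance from the nearest chosen one."""
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None] - np.array(centroids)[None]) ** 2).sum(-1), axis=1)
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = kmeans_pp_init(X, k, rng)
    for _ in range(iters):
        # assignment step: nearest centroid per point
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        # update step: each centroid becomes the mean of its assigned points
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return C, labels
```

The quantum algorithm replaces the distance estimation in the assignment step and the averaging in the update step with quantum subroutines; the overall alternating structure is the same.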
Reviews: q-means: A quantum algorithm for unsupervised machine learning
Response to rebuttal: I think the author responses were done well and have satisfactorily answered the questions I raised. The reason I am torn between a strong accept and an accept is that most of the techniques used in this paper have appeared before in various quantum algorithms and are well known in the quantum community. Having said that, I think putting together known techniques in a rigorous fashion, and practically implementing the algorithm on a quantum simulator, is interesting, especially for a problem that is practically important. This final aspect might be of interest to the classical ML community: seeing how quantum computing can provide polynomial speedups for relevant ML problems using a toolbox of interesting techniques.
The reviewers are clear that this paper makes important contributions and may help draw the attention of the ML community to advances in quantum computation (both theoretical and through simulations). Even though most of the quantum computing tools used by the authors are standard in the quantum literature, putting them together in a rigorous manner for an important ML problem is a valuable contribution. The authors should, however, take the reviewer comments regarding presentation and style very seriously and incorporate them, along with the explanations given in the author feedback, into the camera-ready version. Without that, there is a chance that the work will be incomprehensible to a significant chunk of the ML audience, and the main purpose of submitting such a paper to an ML venue would be defeated.
Unsupervised Machine Learning Hybrid Approach Integrating Linear Programming in Loss Function: A Robust Optimization Technique
Kiruluta, Andrew, Lemos, Andreas
Since its formal introduction by Dantzig in 1947, linear programming (LP) has been widely applied across various fields, including operations research, economics, and engineering, due to its ability to optimize objectives subject to linear constraints (Dantzig, 1951; Bazaraa et al., 2013). However, traditional LP approaches have certain limitations, particularly in dealing with non-linear, high-dimensional, and dynamic environments where relationships among variables are complex and non-linear (Bertsimas & Tsitsiklis, 1997). By contrast, machine learning (ML) methods, especially deep learning, have demonstrated remarkable success in modeling complex patterns and making predictions based on large datasets (LeCun et al., 2015; Goodfellow et al., 2016). Despite these strengths, ML models often lack the explicit interpretability and rigorous constraint satisfaction that LP offers (Rudin, 2019). This has motivated researchers to explore hybrid approaches that combine the strengths of LP and ML, aiming to develop models that are both interpretable and powerful in their predictive capabilities. This paper proposes a novel hybrid method that integrates LP within the loss function of an unsupervised machine learning model. By embedding LP constraints directly into the ML framework, this approach not only maintains the interpretability and constraint satisfaction of LP but also leverages the flexibility and learning capacity of ML. This integration is particularly beneficial in unsupervised or semi-supervised settings, where traditional LP methods may struggle to provide robust solutions due to the lack of labeled data (Amos & Kolter, 2017).
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Constraint-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.50)
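A minimal sketch of the core idea described above, under toy assumptions: the LP constraints enter an unsupervised loss as a differentiable hinge penalty, so constraint violations are penalized during gradient descent. The function, objective, and parameter names here are illustrative, not taken from the paper:

```python
import numpy as np

def constrained_mean(X, A, b, lam=10.0, lr=0.05, steps=500):
    """Toy instance of 'LP in the loss': learn a representative vector w
    for unlabeled data X by minimizing a reconstruction-style loss plus a
    hinge penalty lam * ||max(0, A @ w - b)||^2 on violated constraints."""
    w = X.mean(axis=0).copy()
    for _ in range(steps):
        grad_fit = 2.0 * (w - X.mean(axis=0))      # gradient of the fit term
        viol = np.maximum(A @ w - b, 0.0)          # active constraint violations
        grad_pen = 2.0 * lam * (A.T @ viol)        # gradient of the penalty term
        w -= lr * (grad_fit + grad_pen)
    return w
```

The penalty keeps the optimization differentiable end to end, at the cost of enforcing the constraints only approximately; a larger `lam` pushes the solution closer to the feasible region of `A @ w <= b`.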
Error Mitigation for TDoA UWB Indoor Localization using Unsupervised Machine Learning
Duong, Phuong Bich, Van Herbruggen, Ben, Broering, Arne, Shahid, Adnan, De Poorter, Eli
Indoor positioning systems based on Ultra-wideband (UWB) technology are gaining recognition for their ability to provide cm-level localization accuracy. However, these systems often encounter challenges caused by dense multi-path fading, leading to positioning errors. To address this issue, in this letter, we propose a novel methodology for unsupervised anchor node selection using deep embedded clustering (DEC). Our approach uses an Auto Encoder (AE) before clustering, thereby separating the UWB input signals into better-defined clusters of UWB features. We furthermore investigate how to rank these clusters based on their cluster quality, allowing us to remove untrustworthy signals. Experimental results show the efficiency of our proposed method, demonstrating a significant 23.1% reduction in mean absolute error (MAE) compared to the baseline without anchor exclusion. In the dense multi-path area in particular, our algorithm achieves even larger improvements, reducing the MAE by 26.6% and the 95th percentile error by 49.3%.
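A rough sketch of the embed-cluster-rank pipeline described above, with PCA standing in for the letter's Auto Encoder and mean silhouette standing in for its cluster-quality ranking (both substitutions, and all names, are ours):

```python
import numpy as np
from sklearn.decomposition import PCA       # stand-in for the paper's Auto Encoder
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

def select_trustworthy(features, n_clusters=3, keep=2, seed=0):
    """Embed raw signal features, cluster the embedding, rank clusters by
    quality, and keep only signals from the best-ranked clusters."""
    z = PCA(n_components=2, random_state=seed).fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(z)
    sil = silhouette_samples(z, labels)
    # rank clusters by mean silhouette, a simple proxy for cluster quality
    quality = {c: sil[labels == c].mean() for c in range(n_clusters)}
    best = sorted(quality, key=quality.get, reverse=True)[:keep]
    return np.isin(labels, best), labels
```

Signals whose mask entry is False would be excluded from the position estimate, mirroring the anchor-exclusion step in the letter.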
Identifying factors associated with fast visual field progression in patients with ocular hypertension based on unsupervised machine learning
Huang, Xiaoqin, Poursoroush, Asma, Sun, Jian, Boland, Michael V., Johnson, Chris, Yousefi, Siamak
Purpose: To identify ocular hypertension (OHT) subtypes with different trends of visual field (VF) progression based on unsupervised machine learning and to discover factors associated with fast VF progression. Participants: A total of 3133 eyes of 1568 ocular hypertension treatment study (OHTS) participants with at least five follow-up VF tests were included in the study. Methods: We used a latent class mixed model (LCMM) to identify OHT subtypes using standard automated perimetry (SAP) mean deviation (MD) trajectories. We characterized the subtypes based on demographic, clinical, ocular, and VF factors at baseline. We then identified factors driving fast VF progression using generalized estimating equations (GEE) and justified the findings qualitatively and quantitatively. Results: The LCMM discovered four clusters (subtypes) of eyes with different trajectories of MD worsening. The numbers of eyes in the clusters were 794 (25%), 1675 (54%), 531 (17%), and 133 (4%). We labelled the clusters Improvers, Stables, Slow progressors, and Fast progressors based on their mean rates of MD change, which were 0.08, -0.06, -0.21, and -0.45 dB/year, respectively. Eyes with fast VF progression had higher baseline age, intraocular pressure (IOP), pattern standard deviation (PSD), and refractive error (RE), but lower central corneal thickness (CCT). Fast progression was associated with calcium channel blockers, being male, heart disease history, diabetes history, African American race, stroke history, and migraine headaches.
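A much-simplified illustration of trajectory-based subtyping, replacing the study's LCMM with a per-eye linear fit followed by 1-D k-means on the fitted slopes (a stand-in for exposition, not the authors' method; all names are ours):

```python
import numpy as np

def slope_clusters(times, md_series, k=4):
    """Fit each eye's MD series with a straight line to get a progression
    rate (dB/year), then group eyes by 1-D k-means on those rates."""
    slopes = np.array([np.polyfit(t, y, 1)[0] for t, y in zip(times, md_series)])
    centers = np.quantile(slopes, np.linspace(0.0, 1.0, k))  # spread-out seeds
    for _ in range(100):
        labels = np.argmin(np.abs(slopes[:, None] - centers[None, :]), axis=1)
        centers = np.array([slopes[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return slopes, labels, centers
```

Unlike this sketch, an LCMM models the trajectories and the class memberships jointly, and can accommodate nonlinear trends and within-eye correlation; the sketch only conveys why distinct progression-rate subgroups fall out of an unsupervised fit.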
Unsupervised Machine Learning
Unsupervised machine learning is a type of machine learning where the model is trained on a dataset without any labeled output. The goal of unsupervised learning is to uncover hidden patterns or relationships in the data. Unsupervised learning is useful when labeled data is not available or when the goal is to discover new relationships in the data. However, it can be more challenging to evaluate the results of unsupervised learning compared to supervised learning, as there is no clear metric to assess the performance of the model. In conclusion, unsupervised learning is a powerful tool for understanding and extracting information from complex and unlabeled data.
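A minimal example of the point above, assuming scikit-learn is available: the model receives no labels, and, because there is no ground truth, quality is judged by an internal measure rather than accuracy:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Unlabeled data: two hidden groups the model must discover on its own.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(4.0, 0.3, (50, 2))])

# No labels are passed to fit_predict: the clustering is unsupervised.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Without ground truth there is no accuracy to report; internal measures
# such as the silhouette score (near 1 = compact, well-separated clusters)
# are used instead.
score = silhouette_score(X, labels)
```

The reliance on internal measures like silhouette or inertia is exactly the evaluation difficulty the paragraph above describes.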
Supervised vs Unsupervised Learning Explained - Seldon
Machine learning is already an important part of how modern organisations and services function. Whether in social media platforms, healthcare, or finance, machine learning models are deployed in a variety of settings. But the steps needed to train and deploy a model will differ depending on the task at hand and the data that's available. Supervised and unsupervised learning are two different approaches to training machine learning models. They differ in the way the models are trained and in the condition of the training data that's required.
Artificial intelligence, Machine learning and its application for SDGs - Sambodhi
Artificial intelligence (AI) can provide computers with the ability to learn and replicate human intelligence, with intentionality, intelligence, and adaptability as core qualities. Machine learning (ML) focuses on creating programs that make predictions to support decision-making and is widely used to optimize search engines and e-commerce sites. In AI, the goal is to create a computer system that mimics the human brain to solve complex problems. In ML, the objective is to create a computer system that can learn from data to discover trends and patterns and to predict an outcome. AI helps the computer or machine think, meaning it can make its own decisions without any human intervention. ML can be described as a subset of AI.
- Information Technology > Services > e-Commerce Services (0.57)
- Social Sector (0.38)