classical machine learning
GENO -- GENeric Optimization for Classical Machine Learning
Although optimization is the long-standing algorithmic backbone of machine learning, new models still require the time-consuming implementation of new solvers. As a result, there are thousands of implementations of optimization algorithms for machine learning problems. A natural question is whether it is always necessary to implement a new solver, or whether one algorithm is sufficient for most models. Common belief suggests that such a one-algorithm-fits-all approach cannot work, because a single algorithm cannot exploit model-specific structure; at the very least, a generic algorithm cannot be efficient and robust on a wide variety of problems.
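The one-algorithm-fits-all question can be made concrete with a minimal sketch: a single, model-agnostic gradient-descent loop that minimizes any objective exposing a gradient callback. This is an illustrative toy, not GENO's generated code, and all names are hypothetical.

```python
def generic_solve(grad, x0, lr=0.1, steps=200):
    """One generic solver: plain gradient descent.
    The only model-specific ingredient is the gradient callback."""
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# Model 1: minimize (x - 3)^2, gradient 2(x - 3); minimizer x* = 3
sol1 = generic_solve(lambda x: [2 * (x[0] - 3)], [0.0])

# Model 2: minimize x^2 + (x - 2)^2, gradient 4x - 4; minimizer x* = 1
sol2 = generic_solve(lambda x: [4 * x[0] - 4], [0.0])
```

The same loop solves both models; a framework like GENO additionally derives the gradient automatically from a symbolic model description and uses a far more robust solver than this toy loop.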
Quantum and Classical Machine Learning in Decentralized Finance: Comparative Evidence from Multi-Asset Backtesting of Automated Market Makers
Chen, Chi-Sheng, Tsai, Aidan Hung-Wen
This study presents a comprehensive empirical comparison between quantum machine learning (QML) and classical machine learning (CML) approaches in Automated Market Maker (AMM) and Decentralized Finance (DeFi) trading strategies, through extensive backtesting of 10 models across multiple cryptocurrency assets. Our analysis encompasses classical ML models (Random Forest, Gradient Boosting, Logistic Regression), pure quantum models (VQE Classifier, QNN, QSVM), hybrid quantum-classical models (QASA Hybrid, QASA Sequence, QuantumRWKV), and transformer models. The results show that hybrid quantum models achieve the highest average return (11.2% versus 9.8% for classical ML models), while classical ML models attain a slightly higher average Sharpe ratio (1.47 versus 1.42). The QASA Sequence hybrid model achieves the best individual result, with a 13.99% return and a Sharpe ratio of 1.76, demonstrating the potential of quantum-classical hybrid approaches in AMM and DeFi trading strategies.
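The headline metrics in this comparison, average return and Sharpe ratio, are computed from a backtest's per-period return series. A minimal sketch, using hypothetical returns rather than the paper's data:

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return divided by the standard
    deviation of returns (annualization omitted for brevity)."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical per-period strategy returns (illustrative only)
rets = [0.02, -0.01, 0.03, 0.01, 0.00, 0.02]
avg_return = statistics.mean(rets)
sharpe = sharpe_ratio(rets)
```

Comparing models on both numbers matters: as the study's figures show, the model family with the highest average return need not have the highest Sharpe ratio.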
Reviews: GENO -- GENeric Optimization for Classical Machine Learning
The paper presents a new software framework for automatic generation of efficient solvers for a variety of optimization problems. Reviewers uniformly liked the generic approach and the use of automatic differentiation on a symbolic level. Based on the consensus, the paper is accepted, and we hope the authors will implement the suggestions provided in the reviews.
Quantum Data Encoding: A Comparative Analysis of Classical-to-Quantum Mapping Techniques and Their Impact on Machine Learning Accuracy
This research explores the integration of quantum data embedding techniques into classical machine learning (ML) algorithms, aiming to assess the performance enhancements and computational implications across a spectrum of models. We explore various classical-to-quantum mapping methods for encoding classical data, ranging from basis encoding and angle encoding to amplitude encoding, and conduct an extensive empirical study encompassing popular ML algorithms, including Logistic Regression, K-Nearest Neighbors, Support Vector Machines, and ensemble methods such as Random Forest, LightGBM, AdaBoost, and CatBoost. Our findings reveal that quantum data embedding contributes to improved classification accuracy and F1 scores, particularly in models that inherently benefit from enhanced feature representation. We observed nuanced effects on running time, with low-complexity models exhibiting moderate increases and more computationally intensive models experiencing discernible changes. Notably, ensemble methods demonstrated a favorable balance between performance gains and computational overhead. This study underscores the potential of quantum data embedding in enhancing classical ML models and emphasizes the importance of weighing performance improvements against computational costs. Future research directions may involve refining quantum encoding processes to optimize computational efficiency and exploring scalability for real-world applications. Our work contributes to the growing body of knowledge at the intersection of quantum computing and classical machine learning, offering insights for researchers and practitioners seeking to harness the advantages of quantum-inspired techniques in practical scenarios.
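The encoding families compared here have simple classical preprocessing steps: angle encoding maps each feature to a rotation angle, while amplitude encoding L2-normalizes a feature vector so it can serve as a quantum amplitude vector. A minimal sketch of that classical step only, with hypothetical feature ranges (the resulting angles or amplitudes would then parameterize a quantum circuit):

```python
import math

def angle_encode(features, lo=0.0, hi=1.0):
    """Map each classical feature in [lo, hi] to a rotation angle in [0, pi]."""
    return [math.pi * (f - lo) / (hi - lo) for f in features]

def amplitude_encode(vec):
    """L2-normalize a classical vector so it forms valid quantum amplitudes."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

angles = angle_encode([0.0, 0.5, 1.0])   # -> [0, pi/2, pi]
amps = amplitude_encode([3.0, 4.0])      # -> [0.6, 0.8], unit L2 norm
```

Basis encoding, the third family mentioned, simply maps a classical bit string to the corresponding computational basis state and needs no arithmetic at all.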
MNISQ: A Large-Scale Quantum Circuit Dataset for Machine Learning on/for Quantum Computers in the NISQ era
Placidi, Leonardo, Hataya, Ryuichiro, Mori, Toshio, Aoyama, Koki, Morisaki, Hayata, Mitarai, Kosuke, Fujii, Keisuke
We introduce the first large-scale dataset, MNISQ, for both the quantum and the classical machine learning community in the Noisy Intermediate-Scale Quantum era. MNISQ consists of 4,950,000 data points organized in 9 subdatasets. Building our dataset from the quantum encoding of classical information (e.g., the MNIST dataset), we deliver it in a dual form: in quantum form, as circuits, and in classical form, as quantum circuit descriptions (in the quantum programming language QASM). In fact, machine learning research related to quantum computers also faces a dual challenge: enhancing machine learning by exploiting the power of quantum computers, while leveraging state-of-the-art classical machine learning methodologies to advance quantum computing itself. Therefore, we perform circuit classification on our dataset, tackling the task with both quantum and classical models. On the quantum side, we test our circuit dataset with Quantum Kernel methods and achieve excellent results, up to 97% accuracy. On the classical side, the underlying quantum-mechanical structure within the circuit data is non-trivial; nevertheless, we test our dataset on three classical models: the Structured State Space sequence model (S4), the Transformer, and the LSTM. In particular, the S4 model applied to the tokenized QASM sequences reaches an impressive 77% accuracy. These findings illustrate that quantum circuit-related datasets are likely to be quantum advantageous, but also that state-of-the-art machine learning methodologies can competently classify and recognize quantum circuits. We finally entrust the quantum and classical machine learning community with the fundamental challenge to build more quantum-classical datasets like ours and to build future benchmarks from our experiments. The dataset is accessible on GitHub, and its circuits are easily run in qulacs or qiskit.
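The classical branch of this benchmark treats circuits as text: a QASM program is tokenized and the token sequence is fed to a sequence model such as S4. A minimal sketch of such a tokenizer, using a toy OpenQASM 2.0 snippet rather than an actual MNISQ circuit (the regex and token set are illustrative assumptions, not the paper's exact pipeline):

```python
import re

# Toy OpenQASM 2.0 program (illustrative; not an MNISQ circuit)
qasm = """OPENQASM 2.0;
qreg q[2];
h q[0];
cx q[0],q[1];
"""

def tokenize_qasm(src):
    """Split QASM source into identifiers, numbers, and punctuation
    tokens, suitable as input for a sequence model."""
    return re.findall(r"[A-Za-z_]+|\d+(?:\.\d+)?|[\[\],;()]", src)

tokens = tokenize_qasm(qasm)   # ['OPENQASM', '2.0', ';', 'qreg', 'q', ...]
```

In practice each token would then be mapped to an integer vocabulary index before being passed to S4, a Transformer, or an LSTM.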
Representation Learning for Appliance Recognition: A Comparison to Classical Machine Learning
Kahl, Matthias, Jorde, Daniel, Jacobsen, Hans-Arno
Non-intrusive load monitoring (NILM) aims at retrieving energy consumption and appliance state information from aggregated consumption measurements, with the help of signal processing and machine learning algorithms. Representation learning with deep neural networks has been successfully applied in several related disciplines. Its main advantage lies in replacing an expert-driven, hand-crafted feature extraction with hierarchical learning from many representations of data in raw format. In this paper, we show how the NILM processing chain can be improved, reduced in complexity, and alternatively designed with recent deep learning algorithms. On the basis of an event-based appliance recognition approach, we evaluate seven different classification models: a classical machine learning approach based on hand-crafted feature extraction, three deep neural network architectures for automated feature extraction on raw waveform data, and three baseline approaches for raw data processing. We evaluate all approaches on two large-scale energy consumption datasets with more than 50,000 events of 44 appliances. We show that with deep learning we are able to reach and surpass the performance of the state-of-the-art classical machine learning approach for appliance recognition, with F-scores of 0.75 and 0.86 compared to 0.69 and 0.87 for the classical approach.
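The F-scores reported for appliance recognition are event-level classification scores. A minimal sketch of the metric itself, with hypothetical confusion counts for one appliance class:

```python
def f_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 75 correctly recognized events, 25 false
# alarms, 25 missed events -> precision = recall = F1 = 0.75
f1 = f_score(tp=75, fp=25, fn=25)
```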
Deep Learning or classical Machine Learning -- which one to use for your project?
During the last decade, Deep Learning has received a great deal of attention around the globe. "Deep Learning is a superpower. With it, you can make a computer see, synthesise novel art, translate languages, render a medical diagnosis, or build pieces of a car that can drive itself. If that isn't a superpower, I don't know what is." Thus, as with any superpower, one needs to select carefully when to use it.
A reusable benchmark of brain-age prediction from M/EEG resting-state signals
Population-level modeling can define quantitative measures of individual aging by applying machine learning to large volumes of brain images. These measures of brain age, obtained from the general population, helped characterize disease severity in neurological populations, improving estimates of diagnosis or prognosis. Magnetoencephalography (MEG) and Electroencephalography (EEG) have the potential to further generalize this approach towards prevention and public health by enabling assessments of brain health at large scales in socioeconomically diverse environments. However, more research is needed to define methods that can handle the complexity and diversity of M/EEG signals across diverse real-world contexts. To catalyse this effort, here we propose reusable benchmarks of competing machine learning approaches for brain age modeling.
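At its core, brain-age modeling regresses chronological age on features extracted from brain data; the model's prediction for a new subject is their estimated brain age. A minimal sketch with a single hypothetical scalar feature per subject and ordinary least squares (real M/EEG benchmarks use rich covariance-based features and regularized models, and both the feature and the data here are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: age ~ a * feature + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical per-subject scalar feature vs. chronological age
feature = [1.0, 2.0, 3.0, 4.0]
age = [20.0, 30.0, 40.0, 50.0]
a, b = fit_line(feature, age)
brain_age = a * 2.5 + b   # estimated age for a new subject with feature 2.5
```

The gap between such a prediction and a subject's true age (the "brain-age delta") is what downstream clinical analyses typically use.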