
Collaborating Authors

 Date, Prasanna


Adiabatic Quantum Support Vector Machines

arXiv.org Artificial Intelligence

Adiabatic quantum computers can solve difficult optimization problems (e.g., the quadratic unconstrained binary optimization problem), and they seem well suited to training machine learning models. In this paper, we describe an adiabatic quantum approach for training support vector machines. We show that the time complexity of our quantum approach is an order of magnitude better than that of the classical approach. Next, we compare the test accuracy of our quantum approach against a classical approach that uses the Scikit-learn library in Python across five benchmark datasets (Iris, Wisconsin Breast Cancer (WBC), Wine, Digits, and Lambeq). We show that our quantum approach obtains accuracies on par with the classical approach. Finally, we perform a scalability study in which we compute the total training times of the quantum and classical approaches with increasing numbers of features and data points in the training dataset. Our scalability results show that the quantum approach obtains a 3.5--4.5x speedup over the classical approach on datasets with large numbers (millions) of features.
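
As a concrete illustration of the general recipe (not the paper's exact formulation), the Python sketch below casts the linear-kernel SVM dual as a QUBO by binary-encoding each Lagrange multiplier, with an exhaustive bitstring search standing in for the annealer; the bit width, penalty weight, and toy data are illustrative assumptions.

    import itertools
    import numpy as np

    def svm_qubo(X, y, n_bits=2, penalty=10.0):
        # QUBO for the linear-kernel SVM dual, with each Lagrange multiplier
        # encoded as alpha_i = sum_k 2^k * b[i, k] over n_bits binary variables.
        # Energy = 0.5 * sum_ij alpha_i alpha_j y_i y_j <x_i, x_j>
        #          - sum_i alpha_i
        #          + penalty * (sum_i alpha_i y_i)^2   (soft equality constraint)
        N = len(y)
        G = (X @ X.T) * np.outer(y, y)            # y_i y_j <x_i, x_j>
        bit = lambda i, k: i * n_bits + k         # flat index of b[i, k]
        Q = np.zeros((N * n_bits, N * n_bits))
        for i, j in itertools.product(range(N), repeat=2):
            for k, l in itertools.product(range(n_bits), repeat=2):
                Q[bit(i, k), bit(j, l)] += 2 ** (k + l) * (
                    0.5 * G[i, j] + penalty * y[i] * y[j])
        for i in range(N):
            for k in range(n_bits):
                Q[bit(i, k), bit(i, k)] -= 2 ** k  # linear term: -sum_i alpha_i
        return Q

    def solve_exhaustively(Q):
        # Stand-in for the annealer: enumerate bitstrings (tiny problems only).
        m = Q.shape[0]
        return min((np.array(b) for b in itertools.product((0, 1), repeat=m)),
                   key=lambda b: b @ Q @ b)

    X = np.array([[0.5, 0.5], [-0.5, -0.5]])      # two separable points
    y = np.array([1.0, -1.0])
    b = solve_exhaustively(svm_qubo(X, y))
    alphas = b.reshape(2, 2) @ (2.0 ** np.arange(2))
    print("alphas:", alphas)                      # both points are support vectors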


On-Sensor Data Filtering using Neuromorphic Computing for High Energy Physics Experiments

arXiv.org Artificial Intelligence

This work investigates neuromorphic computing-based spiking neural network (SNN) models used to filter data from sensor electronics in high energy physics experiments conducted at the High Luminosity Large Hadron Collider. We present our approach for developing a compact neuromorphic model that filters the sensor data based on the particle's transverse momentum, with the goal of reducing the amount of data sent to the downstream electronics. The incoming charge waveforms are converted to streams of binary-valued events, which are then processed by the SNN. We present our insights on the various system design choices, from data encoding to optimal hyperparameters of the training algorithm, for an accurate and compact SNN optimized for hardware deployment. Our results show that an SNN trained with an evolutionary algorithm and an optimized set of hyperparameters obtains a signal efficiency of about 91% with nearly half as many parameters as a deep neural network.
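
For intuition, the sketch below shows one common waveform-to-spike scheme, delta modulation, in which an event fires whenever the signal has moved by a fixed threshold since the last event. It is a plausible stand-in for the converter described above, not the authors' exact encoding, and the threshold and toy pulse are assumptions.

    import numpy as np

    def delta_encode(waveform, threshold):
        # Emit a +1 (UP) event when the signal has risen by `threshold` since
        # the last event, and a -1 (DOWN) event when it has fallen by as much.
        events = np.zeros(len(waveform), dtype=np.int8)
        ref = waveform[0]
        for t, v in enumerate(waveform):
            if v - ref >= threshold:
                events[t], ref = 1, v
            elif ref - v >= threshold:
                events[t], ref = -1, v
        return events

    # A noisy charge pulse becomes a sparse stream of binary-valued events.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    pulse = np.exp(-((t - 0.3) / 0.05) ** 2) + 0.02 * rng.standard_normal(200)
    spikes = delta_encode(pulse, threshold=0.15)
    print(np.count_nonzero(spikes), "events from", len(pulse), "samples")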


A Novel Spatial-Temporal Variational Quantum Circuit to Enable Deep Learning on NISQ Devices

arXiv.org Artificial Intelligence

Quantum computing presents a promising approach for machine learning, with its capability for massively parallel computation in high-dimensional spaces through superposition and entanglement. Despite this potential, existing quantum learning algorithms, such as Variational Quantum Circuits (VQCs), struggle with more complex datasets, particularly those that are not linearly separable. Moreover, they face a deployability issue: the learned models suffer a drastic accuracy drop when deployed on actual quantum devices. To overcome these limitations, this paper proposes a novel spatial-temporal design, ST-VQC, to integrate non-linearity into quantum learning and improve the robustness of the learned model to noise. Specifically, ST-VQC extracts spatial features via a novel block-based encoding quantum sub-circuit, coupled with a layer-wise computation quantum sub-circuit that enables temporal deep learning. Additionally, a SWAP-free physical circuit design is devised to improve robustness. These designs introduce a number of hyperparameters; after a systematic analysis of the design space for each component, an automated optimization framework is proposed to generate the ST-VQC quantum circuit. The proposed ST-VQC has been evaluated on two IBM quantum processors, ibm_cairo with 27 qubits and ibmq_lima with 7 qubits, to assess its effectiveness. The results of the evaluation on a standard binary classification dataset show that ST-VQC achieves over 30% accuracy improvement compared with existing VQCs on actual quantum computers. Moreover, on a non-linear synthetic dataset, ST-VQC outperforms a linear classifier by 27.9%, while the linear classifier using classical computing outperforms the existing VQC by 15.58%.
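
The encode-then-compute pattern that ST-VQC builds on can be illustrated with a toy two-qubit statevector simulation: an encoding step loads features as rotation angles, and repeated parameterized layers perform the temporal computation. The gate set, layer structure, and readout below are illustrative guesses, not the paper's circuit.

    import numpy as np

    def ry(theta):
        # Single-qubit rotation about the Y axis.
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

    def vqc_state(features, layers):
        # Encoding block: load each feature as a rotation angle on its qubit.
        state = np.kron(ry(features[0]) @ [1, 0], ry(features[1]) @ [1, 0])
        # Computation blocks: parameterized rotations plus an entangling CNOT.
        for p0, p1 in layers:
            state = CNOT @ np.kron(ry(p0), ry(p1)) @ state
        return state

    def predict(features, layers):
        # Class score: expectation of Z on qubit 0 (basis order 00, 01, 10, 11).
        probs = np.abs(vqc_state(features, layers)) ** 2
        return probs[0] + probs[1] - probs[2] - probs[3]

    layers = [(0.3, -0.7), (1.1, 0.4)]            # two variational layers
    print(predict([0.5, 1.2], layers))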


Quantum Discriminator for Binary Classification

arXiv.org Machine Learning

Quantum computers operate in high-dimensional tensor product spaces and are known to outperform classical computers on many problems. They are poised to accelerate machine learning tasks in the future. In this work, we operate in the quantum machine learning (QML) regime, where a QML model is trained using a quantum-classical hybrid algorithm and inferencing is performed using a quantum algorithm. We leverage the traditional two-step machine learning workflow, where features are extracted from the data in the first step and a discriminator acting on the extracted features is used to classify the data in the second step. Assuming that binary features have been extracted from the data, we propose a quantum discriminator for binary classification. The quantum discriminator takes as input the binary features of a data point and a prediction qubit in the zero state, and outputs the correct class of the data point. The quantum discriminator is defined by a parameterized unitary matrix $U_\Theta$ containing $\mathcal{O}(N)$ parameters, where $N$ is the number of data points in the training dataset. Furthermore, we show that the quantum discriminator can be trained in $\mathcal{O}(N \log N)$ time using $\mathcal{O}(N \log N)$ classical bits and $\mathcal{O}(\log N)$ qubits, and that inferencing can be done in $\mathcal{O}(N)$ time using $\mathcal{O}(\log N)$ qubits. Finally, we use the quantum discriminator to classify the XOR problem on the IBM Q universal quantum computer with $100\%$ accuracy.
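
The lookup idea can be simulated classically: a permutation unitary that flips the prediction qubit exactly on the feature basis states labeled 1 behaves as described. The construction below (for the XOR truth table) is a hedged illustration only; the paper's parameterized $U_\Theta$ and its training procedure are richer.

    import numpy as np

    def discriminator_unitary(labels):
        # U acts on |features>|prediction>. For each feature bitstring f with
        # labels[f] == 1, U swaps |f>|0> and |f>|1> (a controlled flip of the
        # prediction qubit); on all other basis states it is the identity.
        U = np.eye(2 * len(labels))
        for f, label in enumerate(labels):
            if label == 1:
                U[[2 * f, 2 * f + 1]] = U[[2 * f + 1, 2 * f]]
        return U

    xor_labels = [0, 1, 1, 0]                     # labels of |00>, |01>, |10>, |11>
    U = discriminator_unitary(xor_labels)
    for f in range(4):
        state = np.zeros(8)
        state[2 * f] = 1.0                        # prepare |f>|0>
        prediction = int(np.argmax(U @ state) % 2)  # read the prediction qubit
        print("features =", format(f, "02b"), "-> class", prediction)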


Training Deep Neural Networks with Constrained Learning Parameters

arXiv.org Machine Learning

Today's deep learning models are primarily trained on CPUs and GPUs. Although these models tend to have low error, they consume high power and utilize large amounts of memory owing to double-precision floating-point learning parameters. Beyond Moore's law, a significant portion of deep learning tasks will run on edge computing systems, which will form an indispensable part of the entire computation fabric. Training deep learning models for such systems will therefore have to be tailored to generate models with the following desirable characteristics: low error, low memory footprint, and low power consumption. We believe that deep neural networks (DNNs) whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems, would be instrumental in building intelligent edge computing systems with these characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate gradient descent-based approach for training deep learning models with finite discrete learning parameters. Next, we elaborate on the theoretical underpinnings and evaluate the computational complexity of CoNNTrA. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet datasets and compare their performance to the same models trained using Backpropagation. We use the following performance metrics for the comparison: (i) training error; (ii) validation error; (iii) memory usage; and (iv) training time. Our results indicate that CoNNTrA models use 32x less memory and have errors on par with the Backpropagation models.
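
A minimal sketch of the underlying idea, coordinate descent over ternary weights, is shown below for a one-layer linear model. CoNNTrA's actual update rule, loss, and network architectures are more elaborate; the toy model and data here are assumptions for illustration.

    import numpy as np

    def loss(W, X, y):
        # Training loss of a linear model whose weights are ternary.
        return np.mean((X @ W - y) ** 2)

    def coordinate_descent_ternary(X, y, n_epochs=5, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.choice([-1.0, 0.0, 1.0], size=X.shape[1])
        for _ in range(n_epochs):
            for j in range(len(W)):               # one coordinate at a time
                candidates = []
                for v in (-1.0, 0.0, 1.0):
                    W[j] = v
                    candidates.append((loss(W, X, y), v))
                W[j] = min(candidates)[1]         # keep the best ternary value
        return W

    # Toy demo: the target y = x0 - x2 is exactly representable on the grid.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 4))
    y = X[:, 0] - X[:, 2]
    W = coordinate_descent_ternary(X, y)
    print("learned ternary weights:", W, " loss:", loss(W, X, y))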


Adiabatic Quantum Optimization Fails to Solve the Knapsack Problem

arXiv.org Artificial Intelligence

In this work, we attempt to solve the integer-weight knapsack problem using the D-Wave 2000Q adiabatic quantum computer. The knapsack problem is a well-known NP-complete problem in computer science, with applications in economics, business, and finance. We attempt to solve a number of small knapsack problems whose optimal solutions are known; in every problem instance, we find that adiabatic quantum optimization fails to produce solutions corresponding to the optimal filling of the knapsack. We compare results obtained on the quantum hardware to the classical simulated annealing algorithm and to two solvers employing a hybrid branch-and-bound algorithm. The simulated annealing algorithm also fails to produce the optimal filling of the knapsack, though the solutions obtained by simulated and quantum annealing are no more similar to each other than they are to the correct solution. We discuss potential causes for this observed failure of adiabatic quantum optimization.
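
For reference, a standard penalty-based knapsack QUBO (with binary slack variables turning the capacity inequality into an equality) looks like the sketch below. The exact encoding used in the paper may differ, and the penalty weight, toy instance, and brute-force solver are illustrative choices.

    import itertools
    import numpy as np

    def knapsack_qubo(values, weights, capacity, lam):
        # E(x, s) = -sum_i v_i x_i + lam * (sum_i w_i x_i + slack - capacity)^2,
        # where slack = sum_k 2^k s_k turns the capacity inequality into an
        # equality. Minimizing E recovers the optimal filling for large lam.
        n_slack = int(np.floor(np.log2(capacity))) + 1
        coef = np.concatenate([weights, 2.0 ** np.arange(n_slack)])
        Q = lam * np.outer(coef, coef)            # expanded quadratic penalty
        Q[np.diag_indices_from(Q)] -= 2.0 * lam * capacity * coef
        for i, v in enumerate(values):
            Q[i, i] -= v                          # reward for chosen items
        return Q                                  # (+ lam * capacity^2, a constant)

    def solve_exhaustively(Q):
        # Stand-in for the annealer: enumerate bitstrings (tiny problems only).
        m = Q.shape[0]
        return min((np.array(b) for b in itertools.product((0, 1), repeat=m)),
                   key=lambda b: b @ Q @ b)

    values = np.array([6.0, 10.0, 12.0])
    weights = np.array([1.0, 2.0, 3.0])
    x = solve_exhaustively(knapsack_qubo(values, weights, capacity=5, lam=50.0))
    print("items:", x[:3], "slack bits:", x[3:])  # optimal filling is [0 1 1]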


Adiabatic Quantum Linear Regression

arXiv.org Machine Learning

A major challenge in machine learning is the computational expense of training models. Model training can be viewed as a form of optimization used to fit a machine learning model to a set of data, which can take a significant amount of time on classical computers. Adiabatic quantum computers have been shown to excel at solving optimization problems, and therefore, we believe, present a promising alternative for improving machine learning training times. In this paper, we present an adiabatic quantum computing approach for training a linear regression model. To do so, we formulate the regression problem as a quadratic unconstrained binary optimization (QUBO) problem. We analyze our quantum approach theoretically, test it on the D-Wave 2000Q adiabatic quantum computer, and compare its performance to a classical approach that uses the Scikit-learn library in Python. Our analysis shows that the quantum approach attains up to a 2.8x speedup over the classical approach on larger datasets, and performs on par with the classical approach on the regression error metric.
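
The core trick, binary-encoding each real-valued weight with a fixed precision vector so that the squared error becomes quadratic in the bits, can be sketched as follows. The precision vector, toy data, and the exhaustive solver standing in for the D-Wave hardware are illustrative assumptions, not the paper's exact choices.

    import itertools
    import numpy as np

    P = np.array([-1.0, -0.5, 0.5, 1.0])          # illustrative precision vector

    def regression_qubo(X, y):
        # Bits b[j, k] encode weight w_j = sum_k P[k] * b[j, k], so that
        # ||X w - y||^2 becomes quadratic in the binary vector b.
        d, K = X.shape[1], len(P)
        E = np.kron(np.eye(d), P.reshape(1, K))   # maps bits b to weights w
        A = E.T @ X.T @ X @ E                     # quadratic term b^T A b
        lin = -2.0 * E.T @ X.T @ y                # linear term, on the diagonal
        return A + np.diag(lin), E

    def solve_exhaustively(Q):
        # Stand-in for the annealer: enumerate bitstrings (tiny problems only).
        m = Q.shape[0]
        return min((np.array(b) for b in itertools.product((0, 1), repeat=m)),
                   key=lambda b: b @ Q @ b)

    X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y = X @ np.array([1.5, -0.5])                 # true weights lie on the grid
    Q, E = regression_qubo(X, y)
    b = solve_exhaustively(Q)
    print("recovered weights:", E @ b)            # expect [ 1.5 -0.5]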


QUBO Formulations for Training Machine Learning Models

arXiv.org Machine Learning

Training machine learning models on classical computers is usually a time- and compute-intensive process. With Moore's law coming to an end and the ever-increasing demand for large-scale data analysis using machine learning, we must leverage non-conventional computing paradigms like quantum computing to train machine learning models efficiently. Adiabatic quantum computers like the D-Wave 2000Q can approximately solve NP-hard optimization problems, such as the quadratic unconstrained binary optimization (QUBO) problem, faster than classical computers. Since many machine learning problems are also NP-hard, we believe adiabatic quantum computers might be instrumental in training machine learning models efficiently in the post-Moore's-law era. To solve a problem on an adiabatic quantum computer, it must be formulated as a QUBO problem, which is a challenging task in itself. In this paper, we formulate the training problems of three machine learning models---linear regression, support vector machine (SVM) and equal-sized k-means clustering---as QUBO problems so that they can be trained on adiabatic quantum computers efficiently. We also analyze the time and space complexities of our formulations and compare them to state-of-the-art classical algorithms for training these models. We show that the time and space complexities of our formulations are better than (in the case of SVM and equal-sized k-means clustering) or equivalent to (in the case of linear regression) their classical counterparts.
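
As one example of the three formulations, the sketch below builds a QUBO for equal-sized 2-means clustering: assignment bits select a cluster per point, within-cluster distances form the objective, and penalty terms enforce one cluster per point and equal cluster sizes. The penalty weight and tiny brute-force demo are illustrative assumptions, not the paper's exact construction.

    import itertools
    import numpy as np

    def equal_kmeans_qubo(X, k=2, alpha=None):
        # Bit w[i, c] = 1 assigns point i to cluster c (flat index i * k + c).
        N = len(X)
        D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
        if alpha is None:
            alpha = 2.0 * D.max()                 # penalty weight (heuristic)
        idx = lambda i, c: i * k + c
        Q = np.zeros((N * k, N * k))
        for c in range(k):                        # objective: intra-cluster distances
            for i, j in itertools.combinations(range(N), 2):
                Q[idx(i, c), idx(j, c)] += D[i, j]
        for i in range(N):                        # penalty: one cluster per point
            for c in range(k):
                Q[idx(i, c), idx(i, c)] -= alpha
                for c2 in range(c + 1, k):
                    Q[idx(i, c), idx(i, c2)] += 2.0 * alpha
        for c in range(k):                        # penalty: cluster sizes equal N/k
            for i in range(N):
                Q[idx(i, c), idx(i, c)] += alpha * (1.0 - 2.0 * N / k)
                for j in range(i + 1, N):
                    Q[idx(i, c), idx(j, c)] += 2.0 * alpha
        return Q

    def solve_exhaustively(Q):
        # Stand-in for the annealer: enumerate bitstrings (tiny problems only).
        m = Q.shape[0]
        return min((np.array(b) for b in itertools.product((0, 1), repeat=m)),
                   key=lambda b: b @ Q @ b)

    X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0], [5.0, 1.0]])
    w = solve_exhaustively(equal_kmeans_qubo(X)).reshape(4, 2)
    print("cluster of each point:", w.argmax(axis=1))  # expect [0 0 1 1] or [1 1 0 0]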