
Collaborating Authors: Daskin, Ammar


A Simple Quantum Blockmodeling with Qubits and Permutations

arXiv.org Artificial Intelligence

Blockmodeling of a problem represented by an $N\times N$ adjacency matrix can be found by swapping rows and columns of the matrix (i.e., multiplying the matrix from the left and right by a permutation matrix). Although classical matrix permutations can be done efficiently by swapping pointers to the permuted rows (or columns), changing the row-column order moves the matrix elements, and element location determines group membership in matrix-based blockmodeling. Therefore, a brute-force initial estimate of the fitness value of a candidate solution, which involves counting element memberships, may require going through the sums of all the rows (or columns). Permutations can likewise be implemented efficiently on quantum computers, e.g., as a NOT gate on a qubit. In this paper, using permutation matrices and qubit measurements, we show how to solve blockmodeling on quantum computers. In the model, the measurement outcomes of a small group of qubits are mapped to the fitness value. If the number of qubits in the considered group is much less than $n=\log(N)$, the fitness value can be found or updated via state tomography in $O(poly(\log N))$ time. Therefore, when the number of iterations is less than $\log(N)$ and the considered qubit group is small, we show that it may be possible to reach the solution very efficiently.
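
As a rough classical illustration of the permutation-based search (not the paper's quantum circuit), the numpy sketch below permutes an adjacency matrix and scores a candidate two-block partition by the weight falling outside the diagonal blocks; the block split and the scoring function are illustrative assumptions, not the paper's fitness definition.

    import numpy as np

    def block_fitness(A, perm, k):
        """Permute rows and columns of A by `perm`, then score a two-block
        partition (first k indices vs. the rest) by the weight that falls
        in the off-diagonal blocks; lower is better."""
        P = A[np.ix_(perm, perm)]                # same permutation on rows and columns
        return P[:k, k:].sum() + P[k:, :k].sum()

    rng = np.random.default_rng(0)
    N = 8
    A = rng.integers(0, 2, size=(N, N))
    A = (A + A.T) // 2                           # symmetric adjacency matrix

    # brute-force search over random permutations
    best = min(block_fitness(A, rng.permutation(N), N // 2) for _ in range(100))
    print("best off-block weight found:", best)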


A unifying primary framework for quantum graph neural networks from quantum graph states

arXiv.org Artificial Intelligence

Graph states represent mathematical graphs as quantum states on quantum computers. They can be formulated through stabilizer codes or directly through quantum gates and quantum states. In this paper, we show that a quantum graph neural network model can be understood and realized based on graph states. We show that graph states can be used either as parameterized quantum circuits that represent neural networks or as an underlying structure for constructing graph neural networks on quantum computers.
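
As a minimal sketch of the underlying object, the code below builds a graph state with the standard construction (a |+> state on every qubit, then a controlled-Z per edge) by direct statevector simulation in numpy; the three-qubit path graph at the end is just an example.

    import numpy as np

    def graph_state(n, edges):
        """Return |G> = prod_{(a,b) in edges} CZ_{a,b} |+>^n as a statevector."""
        psi = np.full(2**n, 2**(-n / 2))         # |+>^n: uniform amplitudes
        for a, b in edges:
            for idx in range(2**n):
                if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
                    psi[idx] *= -1               # CZ flips the phase of |..1..1..>
        return psi

    # path graph 0-1-2
    print(np.round(graph_state(3, [(0, 1), (1, 2)]), 3))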


Federated learning with distributed fixed design quantum chips and quantum channels

arXiv.org Artificial Intelligence

Privacy in classical federated learning can be breached by using local gradient results together with engineered queries to the clients. Quantum communication channels, however, are considered more secure because a measurement on the channel causes a loss of information that the sender can detect. Therefore, a quantum version of federated learning can provide more privacy. Additionally, sending an $N$-dimensional data vector through a quantum channel requires only $\log N$ entangled qubits, which can potentially provide exponential efficiency if the data vector is used directly as a quantum state. In this paper, we propose a quantum federated learning model in which fixed-design quantum chips are operated based on quantum states sent by a centralized server. From the incoming superposition states, the clients compute their local gradients and send them as quantum states to the server, where they are aggregated to update the parameters. Since the server sends the operator as a quantum state rather than the model parameters, the clients are not required to share the model, which allows the creation of asynchronous learning models. In addition, because the model is fed into the client-side chips directly as a quantum state, no measurement of the incoming state is needed to extract the model parameters before computing gradients. This can provide efficiency over models in which the parameter vector is sent via classical or quantum channels and local gradients are computed from the measured parameter values.
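
The following toy simulation sketches only the encode-and-aggregate step, assuming gradients are amplitude-encoded (an $N$-dimensional vector carried by $\log N$ qubits, as in the abstract); the two client gradients and the simple averaging rule are illustrative assumptions, not the proposed protocol.

    import numpy as np

    def amplitude_encode(v):
        """Encode an N-dim vector as the amplitudes of a log2(N)-qubit state."""
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    # each client computes a local gradient and sends it as a quantum state
    local_grads = [np.array([0.2, -0.1, 0.4, 0.3]),
                   np.array([0.1,  0.0, 0.5, 0.2])]
    states = [amplitude_encode(g) for g in local_grads]

    # the server aggregates the received states into one update direction
    update = amplitude_encode(np.mean(states, axis=0))
    print(update)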


Dimension reduction and redundancy removal through successive Schmidt decompositions

arXiv.org Artificial Intelligence

Quantum computers are believed to be able to process huge data sizes, as seen in machine learning applications. In these applications the data is generally classical, so processing it on a quantum computer requires efficient methods for mapping classical data onto quantum states in a concise manner. On the other hand, to verify the results of quantum computers and to study quantum algorithms, we need to be able to approximate quantum operations, within some error, in forms that are easier to simulate on classical computers. Motivated by these needs, in this paper we study the approximation of matrices and vectors by tensor products obtained through successive Schmidt decompositions. We show that data with uniform, Poisson, exponential, or similar distributions can be approximated by only a few terms, which can be easily mapped onto quantum circuits. The examples include random data with different distributions and the Gram matrices of the iris flower, handwritten digits, 20newsgroup, and labeled faces in the wild datasets. Similarly, some quantum operations, such as the quantum Fourier transform and variational quantum circuits with small depth, may also be approximated by a few terms that are easier to simulate on classical computers. Furthermore, we show how the method can be used to simplify quantum Hamiltonians: in particular, we apply it to randomly generated transverse field Ising model Hamiltonians. The reduced Hamiltonians can be mapped into quantum circuits easily and can therefore be simulated more efficiently.
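
Since one level of the (successively applied) Schmidt decomposition of a state is just an SVD of its bipartite reshaping, a minimal numpy sketch is possible; the cut position, the number of kept terms, and the exponential test vector are illustrative assumptions.

    import numpy as np

    def schmidt_approx(v, a, keep=2):
        """Approximate a length-2^n vector by `keep` Schmidt terms across
        the cut (first a qubits | remaining n-a qubits)."""
        n = int(np.log2(v.size))
        M = v.reshape(2**a, 2**(n - a))          # bipartite reshaping of the state
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        approx = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in range(keep))
        return approx.reshape(v.size)

    rng = np.random.default_rng(1)
    v = rng.exponential(size=16)                 # exponential-like data, as in the abstract
    v /= np.linalg.norm(v)
    w = schmidt_approx(v, a=2, keep=2)
    print("fidelity:", abs(v @ w) / np.linalg.norm(w))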


On the explainability of quantum neural networks based on variational quantum circuits

arXiv.org Artificial Intelligence

Ridge functions are used to describe and study the lower bound of the approximation achievable by neural networks, which can be written as linear combinations of activation functions. If the activation functions are themselves ridge functions, these networks are called explainable neural networks. In this paper, we first show that quantum neural networks based on variational quantum circuits can be written as a linear combination of ridge functions. Consequently, the interpretability and explainability of such quantum neural networks can be directly considered and studied as an approximation by a linear combination of ridge functions.
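
A minimal numerical sketch of the target form, f(x) = sum_i c_i sigma(w_i . x), i.e., a linear combination of ridge functions; the cosine activation is chosen here only because it mirrors the Pauli-rotation structure of variational circuits, and all weights are illustrative.

    import numpy as np

    def ridge_combination(x, weights, coeffs, sigma=np.cos):
        """Evaluate f(x) = sum_i c_i * sigma(w_i . x), a linear
        combination of ridge functions."""
        return sum(c * sigma(w @ x) for c, w in zip(coeffs, weights))

    x = np.array([0.3, 0.7])
    weights = [np.array([1.0, 0.5]), np.array([-0.2, 0.9])]
    coeffs = [0.6, 0.4]
    print(ridge_combination(x, weights, coeffs))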