Scientists use reinforcement learning to train quantum algorithm

#artificialintelligence

Recent advancements in quantum computing have driven the scientific community's quest to solve a certain class of complex problems for which quantum computers would be better suited than traditional supercomputers. To improve the efficiency with which quantum computers can solve these problems, scientists are investigating the use of artificial intelligence approaches. In a new study, scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory have developed a new algorithm based on reinforcement learning to find the optimal parameters for the Quantum Approximate Optimization Algorithm (QAOA), which allows a quantum computer to solve certain combinatorial problems such as those that arise in materials design, chemistry and wireless communications. "Combinatorial optimization problems are those for which the solution space gets exponentially larger as you expand the number of decision variables," said Argonne computer scientist Prasanna Balaprakash. "In one traditional example, you can find the shortest route for a salesman who needs to visit a few cities once by enumerating all possible routes, but given a couple thousand cities, the number of possible routes far exceeds the number of stars in the universe; even the fastest supercomputers cannot find the shortest route in a reasonable time."
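
To make the combinatorial explosion concrete, here is a minimal Python sketch (an illustration added here, not code from the Argonne work) that brute-forces the shortest traveling-salesman tour for a handful of randomly placed cities and then counts how many tours roughly two thousand cities would require.

```python
# Illustrative only: brute-force TSP is feasible for a handful of cities,
# but the number of possible tours grows factorially with the city count.
from itertools import permutations
from math import dist, factorial
import random

random.seed(0)

def shortest_tour(cities):
    """Enumerate every round trip that visits each city exactly once."""
    start, *rest = cities
    best_route, best_length = None, float("inf")
    for order in permutations(rest):
        route = (start, *order, start)
        length = sum(dist(a, b) for a, b in zip(route, route[1:]))
        if length < best_length:
            best_route, best_length = route, length
    return best_route, best_length

# Feasible for 8 cities: only 7! = 5,040 candidate tours to check.
cities = [(random.random(), random.random()) for _ in range(8)]
_, best_length = shortest_tour(cities)
print(f"8 cities: {factorial(7):,} tours checked, best length {best_length:.3f}")

# Hopeless for ~2,000 cities: 1999! candidate tours, a number with thousands
# of digits (far more than the stars in the observable universe).
print(f"2000 cities: a tour count with {len(str(factorial(1999)))} digits")
```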


Learning to Optimize Variational Quantum Circuits to Solve Combinatorial Problems

arXiv.org Machine Learning

Quantum computing is a computational paradigm with the potential to outperform classical methods for a variety of problems. Proposed recently, the Quantum Approximate Optimization Algorithm (QAOA) is considered one of the leading candidates for demonstrating quantum advantage in the near term. QAOA is a variational hybrid quantum-classical algorithm for approximately solving combinatorial optimization problems. The quality of the solution obtained by QAOA for a given problem instance depends on the performance of the classical optimizer used to optimize the variational parameters. In this paper, we formulate the problem of finding optimal QAOA parameters as a learning task in which the knowledge gained from solving training instances can be leveraged to find high-quality solutions for unseen test instances. To this end, we develop two machine-learning-based approaches. Our first approach adopts a reinforcement learning (RL) framework to learn a policy network to optimize QAOA circuits. Our second approach adopts a kernel density estimation (KDE) technique to learn a generative model of optimal QAOA parameters. In both approaches, the training procedure is performed on small-sized problem instances that can be simulated on a classical computer; yet the learned RL policy and the generative model can be used to efficiently solve larger problems. Extensive simulations using the IBM Qiskit Aer quantum circuit simulator demonstrate that our proposed RL- and KDE-based approaches reduce the optimality gap by factors of up to 30.15 when compared with other commonly used off-the-shelf optimizers.
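
To illustrate the KDE idea in this abstract, here is a hedged Python sketch: fit a kernel density estimate over variational parameters that worked well on small training instances, then sample from it to warm-start optimization on a new instance. The `qaoa_energy` function and the synthetic training parameters below are placeholders for illustration, not the paper's implementation.

```python
# Hedged sketch of the KDE approach: learn a generative model over QAOA
# parameters (gamma, beta) found to be optimal on small training instances,
# then sample it for starting points on unseen instances.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def qaoa_energy(params):
    """Synthetic stand-in for the expectation value a quantum simulator
    would return for a depth-1 QAOA circuit with parameters (gamma, beta)."""
    gamma, beta = params
    return np.cos(gamma) * np.sin(2 * beta) + 0.1 * (gamma**2 + beta**2)

# Pretend these came from fully optimizing many small, simulable instances.
training_params = rng.normal(loc=[1.2, 0.35], scale=0.15, size=(50, 2))

# Fit the generative model: a Gaussian KDE over the variational parameters.
kde = gaussian_kde(training_params.T)

# For a new instance: sample candidate starting points from the KDE and
# keep the best result after a cheap local refinement of each candidate.
candidates = kde.resample(10).T
best = min(
    (minimize(qaoa_energy, x0, method="COBYLA") for x0 in candidates),
    key=lambda res: res.fun,
)
print("warm-started energy:", best.fun, "parameters:", best.x)
```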


Can Artificial Intelligence Solve Traffic Issues?

#artificialintelligence

As part of transportation authorities' efforts to address traffic congestion, researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory, in collaboration with Lawrence Berkeley National Laboratory (LBNL), have developed a new artificial intelligence model to help alleviate congestion on city streets. The collected data was then used to train a model to forecast traffic, congestion spots and the average speed of cars on the routes. The new model can look at the past hour of traffic and then predict the next hour with high accuracy, within milliseconds. "The AI and supercomputing capabilities that have been used in this work allow us to tackle really large problems. The scale of this project is large, and this amount of data requires an equally large computing resource to tackle it," said Prasanna Balaprakash, a computer scientist at Argonne National Laboratory.
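
As a generic illustration of the "past hour in, next hour out" setup described above, traffic forecasting can be framed as supervised regression over lagged speed observations. The sketch below assumes synthetic 5-minute sensor readings and a plain least-squares model; it is not the Argonne/LBNL architecture.

```python
# Generic sketch of "look at the past hour, predict the next hour":
# frame traffic forecasting as regression over lagged speed readings.
# The synthetic data and the least-squares model are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic speed readings (mph) for one road segment, every 5 minutes,
# with a daily cycle (288 readings per day) plus noise.
t = np.arange(2000)
speed = 55 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)

PAST, FUTURE = 12, 12  # 12 x 5 min = 1 hour in, 1 hour out

# Build (past-hour, next-hour) training pairs with a sliding window.
X = np.stack([speed[i:i + PAST] for i in range(len(speed) - PAST - FUTURE)])
Y = np.stack([speed[i + PAST:i + PAST + FUTURE]
              for i in range(len(speed) - PAST - FUTURE)])

# Fit one linear map from the past hour to the next hour (least squares).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Forecast the next hour from the most recent hour of observations.
last_hour = speed[-PAST:]
forecast = last_hour @ W
print("next-hour forecast (mph):", np.round(forecast, 1))
```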


Reinforcement-Learning-Based Variational Quantum Circuits Optimization for Combinatorial Problems

arXiv.org Machine Learning

Quantum computing exploits basic quantum phenomena such as state superposition and entanglement to perform computations. The Quantum Approximate Optimization Algorithm (QAOA) is arguably one of the leading quantum algorithms that can outperform classical state-of-the-art methods in the near term. QAOA is a hybrid quantum-classical algorithm that combines a parameterized quantum state evolution with a classical optimization routine to approximately solve combinatorial problems. The quality of the solution obtained by QAOA within a fixed budget of calls to the quantum computer depends on the performance of the classical optimization routine used to optimize the variational parameters. In this work, we propose an approach based on reinforcement learning (RL) to train a policy network that can be used to quickly find high-quality variational parameters for unseen combinatorial problem instances. The RL agent is trained on small problem instances which can be simulated on a classical computer, yet the learned RL policy is generalizable and can be used to efficiently solve larger instances. Extensive simulations using the IBM Qiskit Aer quantum circuit simulator demonstrate that our trained RL policy can reduce the optimality gap by a factor of up to 8.61 compared with other off-the-shelf optimizers tested.
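
A bare-bones version of the RL framing, added here purely as an illustration, might look like the following: a Gaussian policy proposes increments to the variational parameters and is updated with REINFORCE using the resulting energy improvement as the reward. The `qaoa_energy` stand-in, the state-independent policy, and the training loop are assumptions for the sketch, not the authors' policy network or training setup.

```python
# Illustrative REINFORCE loop, not the paper's method: a Gaussian policy
# proposes increments to the QAOA parameters (gamma, beta); the reward is
# how much the proposed increment lowers the measured energy.
import numpy as np

rng = np.random.default_rng(7)

def qaoa_energy(params):
    """Synthetic stand-in for the QAOA expectation value measured on a
    quantum circuit simulator."""
    gamma, beta = params
    return np.cos(gamma) * np.sin(2 * beta) + 0.05 * (gamma**2 + beta**2)

# Policy: a state-independent Gaussian over parameter increments with a
# learnable mean (theta) and fixed exploration noise; a drastic
# simplification of a learned policy network, used only for illustration.
theta = np.zeros(2)
sigma = 0.1
lr = 0.01

params = rng.uniform(0.0, np.pi, size=2)  # initial (gamma, beta)

for step in range(300):
    # Sample an action (a parameter increment) from the policy.
    action = theta + sigma * rng.standard_normal(2)

    # Reward: decrease in energy obtained by applying the increment.
    reward = qaoa_energy(params) - qaoa_energy(params + action)

    # REINFORCE update: for a Gaussian policy, grad_theta log pi(a) is
    # (a - theta) / sigma**2, scaled here by the reward.
    theta += lr * reward * (action - theta) / sigma**2

    # Move to the new parameters and continue.
    params = params + action

print("final parameters:", params, "energy:", qaoa_energy(params))
```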


Researchers at Argonne are developing the deep learning framework MaLTESE (Machine Learning Tool for Engine Simulations and Experiments) to meet ever-increasing demands to deliver better engine performance, fuel economy and reduced emissions.

#artificialintelligence

Utilizing ALCF supercomputing resources, Argonne researchers are developing the deep learning framework MaLTESE (Machine Learning Tool for Engine Simulations and Experiments) with autonomous -- or self-driving -- and cloud-connected vehicles in mind. Automotive manufacturers face an ever-increasing demand to deliver better engine performance, fuel economy and reduced emissions, and this work could help meet that demand. Researchers used nearly the full capacity of the ALCF's Theta system to simulate a typical 25-minute drive cycle of 250,000 vehicles.