Recent advances in quantum computing have drawn considerable attention to building realistic applications for quantum computers. However, designing a suitable quantum circuit architecture requires expert knowledge. For example, it is non-trivial to design a quantum gate sequence for generating a particular quantum state with as few gates as possible. We propose a quantum architecture search framework powered by deep reinforcement learning (DRL) to address this challenge. In the proposed framework, the DRL agent can access only the Pauli-$X$, $Y$, $Z$ expectation values and a predefined set of quantum operations for learning the target quantum state, and is optimized with the advantage actor-critic (A2C) and proximal policy optimization (PPO) algorithms. We demonstrate the successful generation of quantum gate sequences for multi-qubit GHZ states without encoding any knowledge of quantum physics in the agent. The design of the framework is general and can be employed with other DRL architectures or optimization methods to study gate synthesis and compilation for many quantum states.
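A minimal sketch of the kind of environment this abstract describes, assuming a two-qubit target (the Bell/GHZ state): actions append gates from a predefined set, the observation is the vector of per-qubit Pauli expectation values, and the reward is fidelity with the target state. All names and the gate set are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Standard single-qubit gates and the CNOT (control = qubit 0).
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

class GateSearchEnv:
    """Toy 2-qubit environment: each action appends a gate, the
    observation is the per-qubit Pauli X/Y/Z expectation values,
    and the reward is fidelity with (|00> + |11>)/sqrt(2)."""
    ACTIONS = [np.kron(H, I2), np.kron(I2, H),
               np.kron(X, I2), np.kron(I2, X), CNOT]
    TARGET = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    def reset(self):
        self.psi = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
        return self.observe()

    def step(self, action):
        self.psi = self.ACTIONS[action] @ self.psi
        fidelity = abs(np.vdot(self.TARGET, self.psi)) ** 2
        return self.observe(), fidelity, fidelity > 0.99

    def observe(self):
        # <psi| P |psi> for each Pauli on each qubit (6 numbers).
        obs = []
        for P in (X, Y, Z):
            for op in (np.kron(P, I2), np.kron(I2, P)):
                obs.append(np.real(np.vdot(self.psi, op @ self.psi)))
        return np.array(obs)

env = GateSearchEnv()
env.reset()
_, f, done = env.step(0)   # H on qubit 0
_, f, done = env.step(4)   # CNOT 0 -> 1
print(round(f, 3), done)   # 1.0 True
```

An A2C or PPO agent would consume `observe()` as its state and learn a policy over the action indices; the two-step episode above is the optimal sequence it should discover.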
Bharti, Kishor, Cervera-Lierta, Alba, Kyaw, Thi Ha, Haug, Tobias, Alperin-Lea, Sumner, Anand, Abhinav, Degroote, Matthias, Heimonen, Hermanni, Kottmann, Jakob S., Menke, Tim, Mok, Wai-Keong, Sim, Sukin, Kwek, Leong-Chuan, Aspuru-Guzik, Alán
A universal fault-tolerant quantum computer that can efficiently solve problems such as integer factorization and unstructured database search requires millions of qubits with low error rates and long coherence times. While the experimental advancement towards realizing such devices will potentially take decades of research, noisy intermediate-scale quantum (NISQ) computers already exist. These computers are composed of hundreds of noisy qubits, i.e. qubits that are not error-corrected and therefore perform imperfect operations within a limited coherence time. In the search for quantum advantage with these devices, algorithms have been proposed for applications in various disciplines spanning physics, machine learning, quantum chemistry and combinatorial optimization. The goal of such algorithms is to leverage the limited available resources to perform classically challenging tasks. In this review, we provide a thorough summary of NISQ computational paradigms and algorithms. We discuss the key structure of these algorithms, their limitations and their advantages. We additionally provide a comprehensive overview of various benchmarking and software tools useful for programming and testing NISQ devices.
A story-writing prompt might ask: "What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
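The technique above, priming the completion by writing the opening of the target output yourself, can be shown as plain string construction. This is an illustrative sketch only; no real API call is made, and the wording of the primed continuation is an assumption.

```python
# Hypothetical illustration of "priming" a completion prompt: append
# the first words of the desired output so the model cannot pivot
# into another mode of completion (story, Q&A, etc.).
passage = "Quantum computers exploit superposition and entanglement."

bare_prompt = (
    "My second grader asked me what this passage means:\n"
    f'"""{passage}"""\n'
)

# Constrain further by starting the answer ourselves:
primed_prompt = bare_prompt + (
    'I rephrased it for him, in plain language a second grader '
    'can understand:\n"""'
)

print(primed_prompt)
```

The model now has little choice but to continue with a plain-language paraphrase, since the prompt already begins the answer in the expected format.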
Cambridge University researchers have developed a "no-touch touchscreen" that uses artificial intelligence to predict a user's intention before their hand reaches the display. The screen was originally designed for use in cars, but the engineers who built it claim it could also have widespread applications during a pandemic. The "predictive touch" technology can be retrofitted to existing displays and could be used to prevent the spread of pathogens on touchscreens at supermarket check-outs, ATMs and ticket terminals at railway stations. Studies have shown that coronavirus can remain on plastic and glass for anywhere between two hours and a week, meaning touchscreens in public places need to be constantly disinfected to prevent transmission. "Touchscreens and other interactive displays are something most people use multiple times per day, but they can be difficult to use while in motion, whether that's driving a car or changing the music on your phone while you're running," said Simon Godsill from the university's department of engineering.
However, a new study has described how quantum teleportation, now firmly established as science fact, could actually be employed as another, and perhaps unlikely, form of entertainment: live music. Dr Alexis Kirke, Senior Research Fellow in the Interdisciplinary Centre for Computer Music Research at the University of Plymouth (UK), has for the first time shown that a human musician can communicate directly with a quantum computer via teleportation. The result is a high-tech jamming session, through which a blend of live human and computer-generated sounds come together to create a unique performance piece. Speaking about the study, published in the current issue of the Journal of New Music Research, Dr Kirke said: "The world is racing to build the first practical and powerful quantum computers, and whoever succeeds first will have a scientific and military advantage because of the extreme computing power of these machines. This research shows for the first time that this much-vaunted advantage can also be helpful in the world of making and performing music. No other work has shown this previously in the arts, and it demonstrates that quantum power is something everyone can appreciate and enjoy."
Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves is essential if we wish to improve it. Given that people today live with more data about their lives than at any other time in history, the challenge rests in interweaving this data with the growing body of knowledge to compute and model the health state of an individual continually. This dissertation presents an approach to build a personal model and dynamically estimate the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from molecular to organism), 3. the functional utilities that arise from biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense and heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches to a total health continuum paradigm. Precision in predicting health requires understanding the state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state towards a desired one. This work concludes by presenting this framework of combining the health state and personal graph model to perpetually plan and assist us in living life towards our goals.
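The four abstraction elements and the continuously updated graph can be sketched as a small data structure. The layer names, node names, and the incremental update rule below are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch of a four-layer personal graph whose edge weights
# are nudged toward new evidence as sensor data is digested.
from collections import defaultdict

# The four abstraction elements described in the text.
LAYERS = ("event", "biology", "function", "lifestyle")

class PersonalGraph:
    def __init__(self):
        self.nodes = {}                    # node name -> layer
        self.edges = defaultdict(float)    # (src, dst) -> weight

    def add_node(self, name, layer):
        assert layer in LAYERS, f"unknown layer: {layer}"
        self.nodes[name] = layer

    def observe(self, src, dst, evidence, lr=0.5):
        """Move an edge weight a fraction `lr` toward new evidence,
        so the graph updates continually as data arrives."""
        w = self.edges[(src, dst)]
        self.edges[(src, dst)] = w + lr * (evidence - w)

g = PersonalGraph()
g.add_node("morning_run", "event")
g.add_node("heart_rate", "biology")
g.observe("morning_run", "heart_rate", evidence=1.0)
g.observe("morning_run", "heart_rate", evidence=1.0)
print(round(g.edges[("morning_run", "heart_rate")], 2))  # 0.75
```

In a real system the update rule would be replaced by learned graph-network blocks, but the skeleton is the same: typed nodes per layer, weighted edges across layers, and incremental updates from streaming data.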
MLPerf, an emerging machine learning benchmark suite, strives to cover a broad range of machine-learning applications. We present a study of its characteristics and of how the MLPerf benchmarks differ from previous deep learning benchmarks such as DAWNBench and DeepBench. We find that application benchmarks such as MLPerf (although rich in kernels) exhibit different features compared to kernel benchmarks such as DeepBench. The MLPerf benchmark suite contains a diverse set of models, which allows unveiling various bottlenecks in the system. Based on our findings, a dedicated low-latency interconnect between GPUs in multi-GPU systems is required for optimal distributed deep learning training. We also observe variation in scaling efficiency across the MLPerf models. The variation exhibited by the different models highlights the importance of smart scheduling strategies for multi-GPU training. Another observation is that CPU utilization increases with the number of GPUs used for training. Corroborating prior work, we also observe and quantify the improvements possible through compiler optimizations, mixed-precision training and the use of Tensor Cores.
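The scaling efficiency whose variation across models is noted above is the ratio of actual speedup to ideal linear speedup. A quick sketch with made-up throughput numbers (the figures below are for illustration only, not MLPerf results):

```python
# Scaling efficiency = (actual speedup) / (ideal linear speedup).
def scaling_efficiency(throughput, n_gpus, base_throughput):
    """throughput: samples/sec at n_gpus; base_throughput: at 1 GPU."""
    return (throughput / base_throughput) / n_gpus

# Hypothetical samples/sec for one model at 1, 4 and 8 GPUs.
runs = {1: 400.0, 4: 1440.0, 8: 2560.0}
for n, t in runs.items():
    print(n, round(scaling_efficiency(t, n, runs[1]), 2))
# 1 -> 1.0, 4 -> 0.9, 8 -> 0.8
```

An efficiency of 1.0 would mean perfectly linear scaling; the drop at higher GPU counts is the communication overhead that a fast inter-GPU interconnect mitigates.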
Data scientists at 20th Century Fox and Google Cloud have developed machine-learning software that can analyze movie trailers and predict how likely people are to see those movies in theaters. A recent preprint research paper breaks down how the program, named Merlin, can now recognize objects and patterns in a trailer to understand movie scenes. Merlin can scan trailers and spot objects like "man with beard," "gun," "car," and decide whether the movie is an action flick or a crime drama based on the context in which those objects appear. "A trailer with a long close-up shot of a character is more likely for a drama movie," the study's authors write, "whereas a trailer with quick but frequent shots is more likely for an action movie." Merlin can use its knowledge of common tropes in trailers to understand how sequences of actions in trailers play into our expectations for genre films.
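The reasoning the article attributes to Merlin, combining detected object labels with shot statistics, can be caricatured in a few lines. The cue lists and thresholds below are invented for illustration; Merlin itself is a learned model over trailer video, not a rule-based lookup.

```python
# Toy genre guesser: count genre-indicative object labels and fold in
# shot length (long close-ups suggest drama; quick cuts suggest action).
ACTION_CUES = {"gun", "car", "explosion"}
DRAMA_CUES = {"close-up", "man with beard", "conversation"}

def guess_genre(detected_objects, avg_shot_seconds):
    action = len(ACTION_CUES & set(detected_objects))
    drama = len(DRAMA_CUES & set(detected_objects))
    if avg_shot_seconds > 4:
        drama += 1     # long shots lean drama
    else:
        action += 1    # quick, frequent shots lean action
    return "action" if action > drama else "drama"

print(guess_genre({"gun", "car", "man with beard"}, avg_shot_seconds=2.0))
# action
```

A learned system replaces the hand-written cue sets with features extracted by a video model, but the decision it makes is of this shape: objects in context plus editing rhythm, mapped to a genre.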
Later on, it was found that computerized numerical optimization failed to find solutions for the tough problems associated with quantum computing tasks, whereas the human players were successful at them. "The big surprise we had was that some of the players actually had solutions that were of higher quality and of shorter duration than any computer algorithms could find," Jacob Sherson said. "One of the most distinctly human abilities is our ability to forget and to filter out information, and that's very important here because we have a problem that's just so complicated you will never be finished if you attack it systematically." The game, Quantum Moves, was specifically constructed to turn quantum physics optimization problems into a game. Though this is not the first time a gamification process has been utilized to turn complex science into a simpler interactive activity, Quantum Moves takes the process in a different direction.