
Uncovering Instabilities in Variational-Quantum Deep Q-Networks

arXiv.org Artificial Intelligence

Deep Reinforcement Learning (RL) has considerably advanced over the past decade. At the same time, state-of-the-art RL algorithms require a large computational budget in terms of training time to converge. Recent work has started to approach this problem through the lens of quantum computing, which promises theoretical speed-ups for several traditionally hard tasks. In this work, we examine a class of hybrid quantum-classical RL algorithms that we collectively refer to as variational quantum deep Q-networks (VQ-DQN). We show that VQ-DQN approaches are subject to instabilities that cause the learned policy to diverge, study the extent to which this afflicts the reproducibility of established results based on classical simulation, and perform systematic experiments to identify potential explanations for the observed instabilities. Additionally, and in contrast to most existing work on quantum reinforcement learning, we execute RL algorithms on an actual quantum processing unit (an IBM Quantum Device) and investigate differences in behaviour between simulated and physical quantum systems that suffer from implementation deficiencies. Our experiments show that, contrary to claims in the literature, it cannot be conclusively decided whether known quantum approaches, even if simulated without physical imperfections, provide an advantage over classical approaches. Finally, we provide a robust, universal and well-tested implementation of VQ-DQN as a reproducible testbed for future experiments.
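As a rough illustration of the VQ-DQN family described above, the following is a minimal sketch of a variational quantum circuit used as a Q-function approximator. PennyLane, the four-qubit angle encoding, the two-layer ansatz and the two-action readout are assumptions chosen for illustration, not the architecture used in the paper.

```python
# Minimal sketch (not the paper's implementation): a variational quantum
# circuit acting as a Q-function approximator in the VQ-DQN style.
import numpy as np
import pennylane as qml

n_qubits, n_layers, n_actions = 4, 2, 2   # illustrative sizes (e.g. CartPole)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_circuit(state, weights):
    # Angle-encode the classical state into single-qubit rotations.
    for i in range(n_qubits):
        qml.RX(state[i], wires=i)
    # Variational layers: trainable rotations plus an entangling CNOT ring.
    for layer in range(n_layers):
        for i in range(n_qubits):
            qml.RY(weights[layer, i], wires=i)
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])
    # Expectation values of Pauli-Z observables serve as (unscaled) Q-values.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_actions)]

weights = np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits))
print(q_circuit(np.array([0.1, -0.2, 0.05, 0.3]), weights))
```

In a full VQ-DQN training loop these circuit outputs would feed the usual DQN machinery: an experience replay buffer, a target network, and classical gradient-based optimisation of the rotation angles.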


Introduction to AI and IoT Application Development with Raspberry Pi and TensorFlow

#artificialintelligence

In August 2018, the Google Brain team released TensorFlow 1.10 with official support for the Raspberry Pi (Raspbian). Try your hand at deep learning and IoT on the Raspberry Pi!
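To give a concrete flavour of TensorFlow on a Raspberry Pi, here is a minimal tf.keras sketch (tf.keras ships with TensorFlow 1.10 and later); the sensor-style data and the tiny network are invented for illustration and are not from the article.

```python
# Minimal sketch: a tiny Keras model of the sort that can run on a Raspberry Pi.
import numpy as np
import tensorflow as tf

# Hypothetical sensor data: 100 samples, 3 features (e.g. temperature,
# humidity, light level) and a binary label.
x = np.random.rand(100, 3).astype("float32")
y = (x.sum(axis=1) > 1.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=16, verbose=0)

print(model.predict(x[:3]))   # predicted probabilities for the first samples
```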


Quantum Machine Learning - An Intuitive Introduction

#artificialintelligence

In the last couple of years, researchers have investigated whether quantum computing can help improve classical machine learning algorithms. This course will enable you to gain insight into the realm of quantum computing. Students will learn and develop expertise in quantum algorithms, gates, and the implementation of these in code. Undergraduate students in particular will find it valuable for realizing their final-year projects and reports. Furthermore, this course is an introduction to the fundamental concepts of quantum circuits and algorithms.
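As a taste of the kind of circuit such a course starts with, here is a minimal Bell-state example; Qiskit is assumed purely for illustration, since the course does not name its toolkit.

```python
# A small illustrative quantum circuit: preparing a Bell state with a
# Hadamard gate and a CNOT gate (Qiskit assumed for this sketch).
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])  # measure both qubits into classical bits

print(qc.draw())            # text diagram of the circuit
```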


Quantum Architecture Search via Continual Reinforcement Learning

arXiv.org Artificial Intelligence

Quantum computing has promised significant improvement in solving difficult computational tasks over classical computers. Designing quantum circuits for practical use, however, is not a trivial objective and requires expert-level knowledge. To aid this endeavor, this paper proposes a machine learning-based method to construct quantum circuit architectures. Previous works have demonstrated that classical deep reinforcement learning (DRL) algorithms can successfully construct quantum circuit architectures without encoded physics knowledge. However, these DRL-based works are not generalizable to settings with changing device noise, thus requiring considerable amounts of training resources to keep the RL models up-to-date. With this in mind, we incorporated continual learning to enhance the performance of our algorithm. In this paper, we present the Probabilistic Policy Reuse with deep Q-learning (PPR-DQL) framework to tackle this circuit design challenge. By conducting numerical simulations over various noise patterns, we demonstrate that the RL agent with PPR was able to find the quantum gate sequence to generate the two-qubit Bell state faster than the agent that was trained from scratch. The proposed framework is general and can be applied to other quantum gate synthesis or control problems -- including the automatic calibration of quantum devices.
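To make the policy-reuse idea concrete, here is a minimal sketch of probabilistic policy reuse inside an action-selection routine; the function names, the reuse probability psi and the epsilon-greedy fallback are illustrative assumptions, not the PPR-DQL implementation.

```python
# Minimal sketch (not the paper's code) of probabilistic policy reuse:
# with probability psi the agent follows a previously learned policy,
# otherwise it acts epsilon-greedily with the current Q-function.
import random

def select_action(state, current_q, past_policies, psi=0.5, epsilon=0.1, n_actions=4):
    """Mix reuse of old policies with exploration of the current one."""
    if past_policies and random.random() < psi:
        # Reuse a policy learned under an earlier device-noise pattern.
        reused = random.choice(past_policies)
        return reused(state)
    # Otherwise act with the current Q-function, epsilon-greedily.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    q_values = current_q(state)          # list of Q-values, one per gate/action
    return max(range(n_actions), key=lambda a: q_values[a])
```

In practice psi would typically be decayed over training so the agent relies less on old policies as the new Q-network improves.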


A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays

arXiv.org Artificial Intelligence

In the last few years, research and development on Deep Learning models and techniques for ultra-low-power devices (in a word, TinyML) has mainly focused on a train-then-deploy assumption, with static models that cannot be adapted to newly collected data without cloud-based data collection and fine-tuning. Latent Replay-based Continual Learning (CL) techniques [1] enable online, serverless adaptation in principle, but so far they have still been too computation- and memory-hungry for ultra-low-power TinyML devices, which are typically based on microcontrollers. In this work, we introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power (PULP) processor. We rethink the baseline Latent Replay CL algorithm, leveraging quantization of the frozen stage of the model and of the Latent Replays (LRs) to reduce their memory cost with minimal impact on accuracy. In particular, 8-bit compression of the LR memory proves to be almost lossless (-0.26% with 3000 LRs) compared to the full-precision baseline implementation, but requires 4x less memory, while 7-bit can also be used with an additional minimal accuracy degradation (up to 5%). We also introduce optimized primitives for forward and backward propagation on the PULP processor. Our results show that by combining these techniques, continual learning can be achieved in practice using less than 64 MB of memory, an amount compatible with embedding in TinyML devices. On an advanced 22nm prototype of our platform, called VEGA, the proposed solution performs on average 65x faster than a low-power STM32 L4 microcontroller and is 37x more energy efficient, enough for a lifetime of 535 h when learning a new mini-batch of data once every minute.
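The core memory trick, storing latent replay activations at 8-bit precision and dequantizing them when they are replayed, can be sketched as follows; this simple affine scale/zero-point scheme and the tensor sizes are illustrative assumptions, not the paper's exact quantizer.

```python
# Illustrative 8-bit compression of a latent replay buffer (not the paper's
# quantizer): store int8 values plus (scale, zero_point), dequantize on replay.
import numpy as np

def quantize_int8(latent):
    """Map float32 activations to int8 plus (scale, zero_point) metadata."""
    lo, hi = latent.min(), latent.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((latent - lo) / scale) - 128       # values fall in [-128, 127]
    return q.astype(np.int8), scale, lo

def dequantize_int8(q, scale, zero_point):
    """Recover approximate float32 activations for replay."""
    return (q.astype(np.float32) + 128) * scale + zero_point

latent = np.random.randn(3000, 160).astype(np.float32)   # e.g. 3000 latent replays
q, s, z = quantize_int8(latent)
print(q.nbytes / latent.nbytes)                           # -> 0.25, i.e. 4x less memory
```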


Learn all about Arduino, Raspberry Pi, and more with this online course bundle

Mashable

TL;DR: The 2021 Raspberry Pi and Arduino Bootcamp Bundle is on sale for £14.38 as of June 29, saving you 97% on list price. Even if you have no experience, the five-course Raspberry Pi and Arduino bootcamp will help you get started learning about programming and robotics. It's designed for complete beginners and walks you through Robot Operating System (ROS) basics first and foremost so that you can create powerful and scalable robot applications. Then you can apply those skills in the Raspberry Pi For Beginners and Arduino for Beginners courses. Each course is hands-on and takes you step by step through the basics of your first projects.


Master Raspberry Pi with this set of online classes

Mashable

TL;DR: The Raspberry Pi Mastery Bundle is on sale for £24.75 as of Jan. 29, saving you 96% on list price. Raspberry Pi is a tiny, inexpensive, single-board computer that you can use to make cool Internet of Things (IoT) projects. It's super bare bones (it doesn't even come with a keyboard, mouse, or case) but it's wildly popular due to its low cost, ease of use, and versatility. If you're itching to join the Raspberry party and start making your own crazy IoT contraptions, the Raspberry Pi Mastery Bundle is a great place to start. These eight courses will teach you how to build a variety of interesting IoT projects, and you'll learn a thing or two about coding in the process.


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
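The advice above (constrain GPT-3 by writing the first words of the desired output yourself) can be illustrated with a small prompt-construction snippet; the exact wording and the placeholder passage below are illustrative, not an excerpt from the article.

```python
# Illustrative prompt construction: prime the completion by ending the prompt
# with the first characters of the target output (here, an opening quote).
passage = "..."  # the passage to be summarized (placeholder)

prompt = (
    f'My second grader asked me what this passage means:\n"""\n{passage}\n"""\n'
    'I rephrased it for him, in plain language a second grader can understand:\n'
    '"'  # the opening quote nudges the model straight into the rephrasing
)
print(prompt)
```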


Boosting on the shoulders of giants in quantum device calibration

arXiv.org Machine Learning

Traditional machine learning applications, such as optical character recognition, arose from the inability to explicitly program a computer to perform a routine task. In this context, learning algorithms usually derive a model exclusively from the evidence present in a massive dataset. In some scientific disciplines, however, obtaining an abundance of data is an impractical luxury; instead, there is an explicit model of the domain based upon previous scientific discoveries. Here we introduce a new approach to machine learning that is able to leverage prior scientific discoveries in order to improve generalizability over a scientific model. We show its efficacy in predicting the entire energy spectrum of a Hamiltonian on a superconducting quantum device, a key task in present quantum computer calibration. Our accuracy surpasses the current state-of-the-art by over 20%. Our approach thus demonstrates how artificial intelligence can be further enhanced by "standing on the shoulders of giants."
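The general pattern of standing on the shoulders of an explicit scientific model can be sketched as boosting a learned correction on top of the model's predictions; the toy physics model, the synthetic data and the scikit-learn regressor below are illustrative assumptions, not the paper's actual algorithm or calibration task.

```python
# Sketch of the concept: fit a boosted regressor to the residuals between
# measurements and a prior scientific model, then predict with model + correction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def physics_model(x):
    """Stand-in for an explicit domain model (e.g. an idealized Hamiltonian)."""
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y_measured = physics_model(X) + 0.3 * np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

# Learn only the part the scientific model does not explain.
residual_model = GradientBoostingRegressor(n_estimators=100, max_depth=2)
residual_model.fit(X, y_measured - physics_model(X))

def predict(x):
    return physics_model(x) + residual_model.predict(x)

print(np.mean((predict(X) - y_measured) ** 2))   # combined model's training MSE
```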


Skilling for the future that has already arrived - Microsoft News Center Canada

#artificialintelligence

There's no denying the growing skills gap that currently looms over our workforce. The good news is that awareness is increasing. Business leaders and institutions recognize the fundamental need to invest in skills training programs for their people to stay competitive in today's digital economy. Unfortunately, while the skills gap challenge is well established, few are taking action, and the solutions are not moving quickly enough. In 2020, we can expect 200,000 tech jobs to go unfilled in Canada, according to the ICTC.