
Quantum Reinforcement Learning: the Maze problem

Nicola Dalla Pozza,1,2,3 Lorenzo Buffoni,2,3 Stefano Martina,2,3 and Filippo Caruso2,3,4

1 Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy
2 Department of Physics and Astronomy, University of Florence, via Sansone 1, I-50019 Sesto Fiorentino, Italy
3 LENS - European Laboratory for Non-Linear Spectroscopy, via Carrara 1, I-50019 Sesto Fiorentino, Italy
4 QSTAR and CNR-INO, I-50019 Sesto Fiorentino, Italy

Quantum Machine Learning (QML) is a young but rapidly growing field where quantum information meets machine learning. Here we introduce a new QML model generalizing the classical concept of Reinforcement Learning to the quantum domain, i.e. Quantum Reinforcement Learning (QRL). In particular, we apply this idea to the maze problem, where an agent has to learn the optimal set of actions in order to escape from a maze with the highest success probability. To perform the strategy optimization, we consider a hybrid protocol where QRL is combined with classical deep neural networks. We find that the agent learns the optimal strategy in both the classical and quantum regimes, and we also investigate its behaviour in a noisy environment. It turns out that the quantum speedup robustly allows the agent to exploit useful actions even at very short time scales, with key roles played by the quantum coherence and the external noise. This new framework has great potential to be applied to different tasks (e.g. achieving high transmission/processing rates and quantum error correction) in the new-generation Noisy Intermediate-Scale Quantum (NISQ) devices, whose topology engineering is starting to become a new and crucial control knob for practical applications in real-world problems.

This work is dedicated to the memory of Peter Wittek.

I. INTRODUCTION

The broad field of machine learning [1-3] aims to develop computer algorithms that improve automatically through experience, with many cross-disciplinary applications from domotic systems to autonomous cars, from face/voice recognition to medical diagnostics. Self-driving systems, for instance, can learn from data so as to identify distinctive patterns and consequently make decisions, with minimal human intervention. The three main paradigms of machine learning are: supervised learning, unsupervised learning and reinforcement learning (RL). The goal of a supervised learning algorithm is to use an output-labeled dataset {x_i, y_i}_{i=1}^N to produce a model that, given a new input vector x, can predict its correct label y. Unsupervised learning, instead, uses an unlabeled dataset {x_i}_{i=1}^N and aims to extract some useful properties (patterns) from the single datapoints or from the overall data distribution of the dataset (e.g. clustering). In reinforcement learning [4], the learning process relies on the interaction between an agent and an environment and defines how the agent performs its actions based on past experiences (episodes). In this process one of the main problems is how to resolve the tradeoff between exploration of new actions and exploitation of learned experience. RL has been applied in many successful tasks, e.g. outperforming humans on Atari games [5] and Go [6], and recently it is becoming popular in the contexts of autonomous driving [7] and neuroscience [8].
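For concreteness, the supervised and unsupervised tasks sketched above can be phrased as optimization problems. The following display is an editorial illustration in standard textbook notation (a generic loss function and the k-means clustering objective), not an equation taken from the paper:

\[
\hat{\theta} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \ell\big(f_{\theta}(x_i),\, y_i\big),
\qquad
\min_{\{C_k\},\{\mu_k\}} \sum_{k} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2,
\]

where f_θ is the trained predictive model and the clusters C_k with centroids μ_k partition the unlabeled dataset.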
In recent years lots of efforts have been directed towards developing new algorithms combining machine learning and quantum information tools, i.e. in a new research field known as quantum machine learning (QML) [9-13], mostly in the supervised [14-17] and unsupervised domain [18-20], both to gain an advantage over classical machine learning algorithms and to control quantum systems more effectively. Some preliminary results on QRL have been reported in Refs. [21, 22] and, more recently, for closed (i.e. following unitary evolution) quantum systems in Ref. [23], where the authors have shown quadratic improvements in learning efficiency by means of a Grover-type search in the space of the rewarding actions. During the preparation of this manuscript, Ref. [24] has shown how to get quantum speed-ups in reinforcement learning agents where, however, the quantumness is only in the type of information that is transferred between the agent and the environment. However, the setting of an agent acting on an environment has a natural analogue in the framework of open quantum systems [25, 26], where one can embed the entire RL framework into the quantum domain, and this has not been investigated in the literature yet. Moreover, one of the authors of this manuscript, inspired by recent observations in biological energy transport phenomena [27], has shown in Ref. [28] that one can obtain a very remarkable improvement in finding the solution of a problem, given in terms of the exit of a complex maze, by playing with quantum effects and noise. This improvement was about five orders of magnitude with respect to the purely classical and purely quantum regimes for large maze topologies. In the same work, these results were also experimentally tested by means of an integrated waveguide array probed by coherent light.

Motivated by these previous works, here we define the building blocks of RL in the quantum domain, but in the framework of open (i.e. noisy) quantum systems, where coherent and noise effects can strongly cooperate to achieve a given task. Then we apply it to solve the quantum maze problem that, being a very complicated one, can represent a crucial step towards other applications in very different problem-solving contexts.

II. REINFORCEMENT LEARNING

In RL the system consists of an agent that operates in an environment and gets information about it, with the ability to perform some actions in order to gain some advantage in the form of a reward. More formally, RL problems are defined by a 5-tuple (S, A, P_·(·,·), R_·(·,·), γ), where S is a finite set of states of the agent, A is a finite set of actions (alternatively, A_s is the finite set of actions available from the state s), P_a(s, s') = Pr(s_{t+1} = s' | s_t = s, a_t = a) is the probability that action a in state s at time t will lead to the state s' at time t+1, R_a(s, s') is the immediate reward (or expected immediate reward) received after transitioning from state s to state s' due to action a, and γ ∈ [0, 1] is the discount factor balancing the relative importance of present and future rewards.
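The discounted return that γ weighs, together with the Bellman (expectation) equation through which the policy is iteratively reinforced below, take the standard textbook form. The display is an editorial addition written in the notation of the 5-tuple above, not an equation reproduced from the paper:

\[
G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1},
\qquad
V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} P_a(s, s') \Big[ R_a(s, s') + \gamma\, V^{\pi}(s') \Big],
\]

where V^π(s) is the expected return obtained by starting from state s and following the policy π; an optimal policy maximizes V^π(s) for every state.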
In this setting one can introduce different types of problems, based on the information one has at one's disposal. In multi-armed bandit models, the agent has to maximize the cumulative reward obtained by a sequence of independent actions, each of which gives a stochastic immediate reward. In this case, the state of the system describes the uncertainty of the expected immediate reward for each action. In contextual multi-armed bandits, the agent faces the same set of actions but in multiple scenarios, such that the most profitable action is scenario-dependent. In a Markov Decision Process (MDP) the agent has information on the state, and the actions have an effect on the state itself. Finally, in partially observable MDPs the state s is partially observable or unknown.

The goal of the agent is to learn a policy (π), that is, a rule according to which an action is selected. In its most general formulation, the choice of the action at time t can depend on the whole history of agent-environment interactions up to t, and is defined as a random variable over the set of available actions if such choice is stochastic. A policy is called Markovian if the distribution depends only on the state at time t, with π_t(a|s) denoting the probability of choosing the action a from state s, and if a policy does not change over time it is referred to as stationary [29]. The agent improves by trying out actions and iteratively reinforcing its policy through the Bellman equation, given the reward obtained after each action. A pictorial view of this iterative process can be found in Fig. 1.

FIG. 1. Deep Reinforcement Learning Scheme. A Deep Neural Network learns the policy π that the agent uses to perform an action on the environment. A reward and the information about the new state of the system are given back to the agent that improves and learns its policy accordingly.

III. QUANTUM MAZE

Here we transfer the RL concepts into the quantum domain, where both the environment and the reward process follow the laws of quantum mechanics and are affected by both coherent and incoherent mechanisms. We consider, for simplicity, a quantum walker described by a qubit that is transmitted over a quantum network representing the RL environment. The RL state is the quantum state over the network, represented by the so-called density operator ρ. The RL actions are variations of the environment, e.g. of its network topology, that will affect the system state through noisy quantum dynamics. The reward process is obtained from the evolution of the quantum network and hence associated with a probability function to be maximized. Following the results in Ref. [28], and just to test this framework on a specific model, we consider a perfect maze, i.e. a maze where there is a single path connecting the entrance with the exit port. The network dynamics is described in terms of a stochas-
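To make the ingredients just listed concrete (the RL state as a density operator ρ of a walker on the maze graph, the RL actions as changes of the network topology, and the reward as a probability read out from the noisy evolution), a minimal sketch in Python with QuTiP follows. The toy 4-site graph, the particular mix of coherent and incoherent hopping, the link-toggling action and all parameter values are illustrative assumptions, not necessarily the model adopted in the paper.

import numpy as np
import qutip as qt

N = 4                                   # number of maze sites; entrance = 0, exit = N - 1
adj = np.zeros((N, N))                  # adjacency matrix of a simple path 0-1-2-3
for i in range(N - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

def exit_probability(adjacency, p=0.3, t_final=5.0):
    """Evolve the walker's density operator under a mix of coherent hopping
    (Hamiltonian given by the adjacency matrix) and incoherent hopping along
    the edges, and return the population at the exit site, used as the reward."""
    H = qt.Qobj(adjacency)
    # one Lindblad operator |j><i| per directed edge, with rate p (assumption)
    c_ops = [np.sqrt(p) * qt.basis(N, j) * qt.basis(N, i).dag()
             for i in range(N) for j in range(N) if adjacency[i, j] != 0]
    rho0 = qt.ket2dm(qt.basis(N, 0))    # RL state: walker starts at the entrance
    result = qt.mesolve((1.0 - p) * H, rho0, [0.0, t_final], c_ops=c_ops)
    rho_t = result.states[-1]
    return float(np.real(rho_t.full()[N - 1, N - 1]))

def toggle_link(adjacency, link):
    """RL action: add or remove one link of the maze (a change of topology)."""
    i, j = link
    new_adj = adjacency.copy()
    new_adj[i, j] = new_adj[j, i] = 1.0 - new_adj[i, j]
    return new_adj

# one agent-environment step: act on the topology, then read the reward
reward_before = exit_probability(adj)
adj_after = toggle_link(adj, (0, 2))    # e.g. open a shortcut between sites 0 and 2
reward_after = exit_probability(adj_after)
print(reward_before, reward_after)

A fuller treatment would typically also attach an irreversible sink to the exit port and use the escape probability accumulated there over time as the reward, rather than the instantaneous exit population read out here.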