The Holy Grail of Quantum Artificial Intelligence: Major Challenges in Accelerating the Machine Learning Pipeline

Thomas Gabor¹, Leo Sünkel¹, Fabian Ritz¹, Thomy Phan¹, Lenz Belzner², Christoph Roch¹, Sebastian Feld¹, Claudia Linnhoff-Popien¹
[email protected]
¹LMU Munich ²MaibornWolff

arXiv:2004.14035v1 [quant-ph] 29 Apr 2020

ABSTRACT
We discuss the synergetic connection between quantum computing and artificial intelligence. After surveying current approaches to quantum artificial intelligence and relating them to a formal model for machine learning processes, we deduce four major challenges for the future of quantum artificial intelligence: (i) Replace iterative training with faster quantum algorithms, (ii) distill the experience of larger amounts of data into the training process, (iii) allow quantum and classical components to be easily combined and exchanged, and (iv) build tools to thoroughly analyze whether observed benefits really stem from quantum properties of the algorithm.

CCS CONCEPTS
• Software and its engineering → Software creation and management; • Computing methodologies → Artificial intelligence; Machine learning; • Hardware → Quantum computation.

KEYWORDS
quantum computing, artificial intelligence, software engineering

ACM Reference Format:
Thomas Gabor, Leo Sünkel, Fabian Ritz, Thomy Phan, Lenz Belzner, Christoph Roch, Sebastian Feld, and Claudia Linnhoff-Popien. 2020. The Holy Grail of Quantum Artificial Intelligence: Major Challenges in Accelerating the Machine Learning Pipeline. In IEEE/ACM 42nd International Conference on Software Engineering Workshops (ICSEW'20), May 23–29, 2020, Seoul, Republic of Korea. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3387940.3391469

ICSEW'20, May 23–29, 2020, Seoul, Republic of Korea
© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-7963-2/20/05...$15.00
https://doi.org/10.1145/3387940.3391469

1 MOTIVATION
Two frontiers of research in computer science meet in the field of quantum artificial intelligence (QAI). As both artificial intelligence (AI) and quantum computing (QC) are very active fields with an overwhelming speed of new developments just within the last year [12, 50], there exist vast possibilities for interactions. However, we argue that there are grand challenges which can guide us towards a fruitful convergence of these fields.

Figure 1: The ensemble development life cycle (image taken from [26]) provides a framework for the integration of design-time and run-time evolution of software systems.

From here on, we take a software engineer's perspective and first analyze how traditional software engineering techniques are coping with the dynamic world of AI algorithms (Section 2). We proceed by pointing out which big challenges AI is going to have to face and what that might mean for the field (Section 3). Then we provide a quick overview of what QAI is already doing (Section 4) and deduce the challenges revealing what QAI might mean for the general development of AI (Section 5). We close with a very brief outlook (Section 6).

2 SOFTWARE ENGINEERING FOR ARTIFICIAL INTELLIGENCE
Artificial intelligence on its own has been difficult to grasp for the art of software engineering, perhaps because traditional software engineering focuses on preserving initial consistency (i.e., making sure the produced artifacts adhere to prior specifications) [21] while methods of artificial intelligence usually start from highly chaotic initial configurations [7] and only gradually introduce rules and structure. On the path towards applying the principles of rigorous engineering to more complex, adaptive, and inherently self-governed systems, various directions of research have been proposed and tried (see [14, 22, 40, 51] and many others). As an example for these approaches, consider Figure 1: Classical methods of software engineering are kept as a feedback loop, usually driven forward by human developers, while new ways to evolve the system at run-time are added as another feedback loop, usually driven by self-adaptation and learning [26].

However, there is a wide variety of algorithms allowing for self-adaptation and learning, ranging from simple statistical methods like SVMs or clustering to deep neural networks, and the exact way to integrate these algorithms is also subject to a lot of variation. In [22] we introduced the machine learning pipeline as a process model for many different machine learning methods, i.e., it is a

This is a preprint of a paper accepted at the 1st International Workshop on Quantum Software Engineering (Q-SE) at ICSE 2020 and soon to be published in the corresponding proceedings.

model of temporal dependences between the creation of fundamental artifacts common to most machine learning models.

Figure 2: The machine learning pipeline (image taken from [22]).

Figure 2 shows a diagram of the different tasks that are relevant to software engineering. We aim to adopt various familiar development phases from classical software engineering (blue boxes at the top level). For software engineering, there is a distinct shift between writing the software for a variety of concrete domains and specializing (a single branch of) the software to a concrete environment (blue boxes at the bottom level). The main engineering tasks are shown in white boxes. A source of great difficulty for engineering lies in the inherent stochasticity of the behavior generated by most machine learning algorithms, requiring different methods to ensure quality of service (QoS) in the main training feedback loop ("select model/policy", "train", "assess QoS") and actual monitoring during operations. The break from most classical software engineering approaches happens here insofar as we explicitly want to achieve "softer" behavior guidelines on the algorithms because we want to employ them in domains where we cannot possibly formulate enough "hard" rules. And as we still want specific behavior, the softening does not occur as random noise but is often very specific as well. Of course, this also often makes machine learning methods susceptible to systematic failures. For instance: a car driving software failing at random on one in every 100 turns is rather easy to care for by adding redundancy and a voting system, e.g., allowing us to achieve arbitrarily low overall error rates as long as we can add enough redundancy. A car driving software that operates well but systematically breaks down every time it comes across a football on the street is harder to handle because we need very specific tests to even detect the error and then see every redundant system fail at the same time.

This inherent stochasticity in machine learning algorithms, of course, is not quite unlike the inherent stochasticity in quantum computing, a connection already discussed (and elaborated on) by Wolfram [55]. For us, this may suggest that we can use similar methods to integrate (especially highly error-prone, early) quantum algorithms into classical software as we use to integrate the highly stochastic processes called machine learning algorithms.

3 THE ROLE OF COMPUTE AND THE CONSEQUENCES
Aside from similar external properties like stochasticity, quantum algorithms and artificial intelligence may indeed form an even stronger connection. The emerging field of quantum artificial intelligence (QAI) uses quantum algorithms or quantum-inspired algorithms to solve computation tasks related to artificial intelligence.¹ This combination may be highly synergetic for two main reasons:

¹Note that the combination also works the other way around, using AI methods to better approximate quantum computations (for instance in [20, 44]). This, however, is beyond the scope of this paper.

• All machine learning methods need some randomness to work, often putting serious effort into generating the necessary entropy. Beyond that, they also often show high tolerance for noise during their evolution. This makes them inherently suitable for early applications using only NISQ hardware.
• Progress in artificial intelligence is becoming more and more demanding in computational resources. This trend is outgrowing the continued increase in available computing power by a large margin.

The first reason basically falls in line with our point on stochasticity made earlier. While high noise levels (as they are present in NISQ machines) are unwanted for many algorithms, AI algorithms especially may actually benefit from (some levels of) noise. Of course, current noise levels over a long series of computations are way too high to even allow for meaningful results, but requirements for QAI algorithms might be met earlier than for (for example) Grover's search on similarly large input spaces.

The second reason may be a bit more elusive; of course, more computational power is always better. However, pushing the borders of AI has been especially hungry for resources. Amodei and Hernandez [10] used the chart shown in Figure 3 to demonstrate that just in recent years, the computation power used for AI breakthroughs had a doubling time of 3.5 months and has thus been dramatically outgrowing Moore's law (18-month doubling time).
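To put the two doubling times side by side, a quick back-of-the-envelope computation (our own arithmetic; the six-year window is an assumption chosen purely for illustration) shows how quickly the gap grows:

```python
# Compare the growth implied by a 3.5-month doubling time (the AI compute
# trend reported by Amodei and Hernandez) with Moore's law (18 months).

def growth_factor(years, doubling_time_months):
    """Multiplicative growth over `years` given a doubling time in months."""
    return 2 ** (years * 12 / doubling_time_months)

ai_growth = growth_factor(6, 3.5)    # six years of the observed AI trend
moore_growth = growth_factor(6, 18)  # the same period under Moore's law

print(f"AI compute trend over 6 years: x{ai_growth:,.0f}")
print(f"Moore's law over 6 years:      x{moore_growth:.0f}")
print(f"Remaining gap:                 x{ai_growth / moore_growth:,.0f}")
```

Over an assumed six-year window, Moore's law yields a factor of 16, while the observed AI trend yields a factor above one million; transistor scaling alone cannot close a gap of that size.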

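The noise tolerance attributed to machine learning above can be made tangible with a deliberately simple sketch (our own toy example, not a simulation of NISQ hardware): gradient descent on a one-dimensional quadratic still lands near the optimum even when every gradient read-out is perturbed.

```python
import random

# Toy illustration: minimize f(x) = (x - 3)^2 by gradient descent while
# injecting Gaussian noise into every gradient, mimicking a noisy read-out.

def train(noise_level, steps=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    x = 10.0
    for _ in range(steps):
        grad = 2 * (x - 3)                 # exact gradient of (x - 3)^2
        grad += rng.gauss(0, noise_level)  # noisy-hardware-like perturbation
        x -= lr * grad
    return x

print(train(noise_level=0.0))  # essentially 3.0
print(train(noise_level=1.0))  # still close to 3.0 despite heavy noise
```

The iterative, averaging nature of training damps the injected noise; this is the kind of tolerance that could make ML workloads a fit for early, noisy quantum devices.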
Figure 3: Computation power used for recent breakthroughs in AI over time shows a fast exponential growth (image taken from [10]).

For the future of AI, this directly leads to four possible consequences (or any combination thereof):
(1) Progress in AI research slows down.
(2) AI research becomes exponentially more expensive.
(3) New AI algorithms using fewer resources are developed.
(4) New sources of computation power are discovered.

While Consequence 1 is not entirely unlikely, from a scientific point of view it is probably the better option not to strive in that direction. Similarly, the extent to which exponentially more money for AI research (i.e., Consequence 2) can compensate for the lack of computation power per chip is quite limited, as we are facing an exponential demand to be satisfied.

Consequence 3 definitely should be sought, however. Making AI algorithms more resource-efficient is imperative for many practical applications and is a rather lively topic of research [16, 38]. Most interestingly, this puts AI algorithms, again, in a position similar to that of quantum algorithms nowadays, where working around limited hardware (albeit at an entirely different scale) is one of the key skills in bringing software to practice. However, leveraging raw computational power, i.e., using the method compute, will probably always be a large part of using AI methods and possibly should be, as Sutton recently argued in an influential blog post [49].

Lastly, Consequence 4 suggests that new hardware might mitigate the increasing need for computational power. For some time now, we have seen this idea being implemented by using graphics cards and even more specialized hardware like neuromorphic chips to run neural networks for AI. And while substantial benefits can be achieved, none of these hardware platforms can provide an exponential speedup that can sustainably satisfy the exponential hunger of AI; but quantum computing might [12].

4 OVERVIEW OF QUANTUM-ASSISTED ARTIFICIAL INTELLIGENCE
In order to assess the current possibilities of QAI, we performed a survey of available QAI algorithms in the scientific literature, out of which we present a selection of key algorithms. For an overview, please refer to Table 1. We roughly identified four interesting areas of application for QAI algorithms:

(1) Mathematical operations. These algorithms provide faster solutions to computationally hard problems like computing eigenvalues (variational quantum eigensolver [43]) or solving linear equations (HHL [24]). While these tasks are not traditionally part of machine learning, they form the basis for many models and operations used in machine learning. Any practical acceleration on that front might thus have a huge impact on QAI.

(2) Traditional machine learning. These approaches are based on methods from more traditional branches of machine learning. They train models like clusters or support vector machines (SVMs) to gain or extrapolate information from given data sets. As these models were already developed (and run) a few decades ago, they are usually not as computationally expensive as other approaches to AI and might thus be fit to test QAI on limited machines.

(3) Optimization. Algorithms of this group are given a specific optimization problem (usually formulated as a goal or fitness function over a specific input space) and aim to return the globally optimal point in the input space. Purely quantum methods differ from classical stochastic optimization in that they are usually guaranteed to find the global optimum under ideal conditions. In real-world implementations, they, too, yield stochastic results. The quantum approximate optimization algorithm (QAOA) [19] is able to optimize on gate model hardware. Quadratic unconstrained binary optimization (QUBO) is equivalent to Ising spin glasses, and both are the canonical optimization problems for commercially available quantum annealers. These are a specific hardware platform designed to perform quantum annealing [27], a stochastic optimization algorithm based on adiabatic quantum computing (the exact ideal-condition algorithm) [36].

(4) Neural machine learning. These algorithms are based on more modern concepts of machine learning, which usually use neural networks as models; that means their models are quite intransparent (i.e., hard to verify) and vastly overparametrized. Some approaches opt for Boltzmann machines (BMs) [9, 53] as a model representation (since especially restricted BMs are easier to train). Others incorporate quantum computers in just select parts of larger neural-network-based architectures like autoencoders [29, 47], generative adversarial networks (GANs) [17, 33, 46, 56], or reinforcement learning (RL) agents [18, 39]. Here, Quantum RL [18] is especially interesting since it largely differs from classical RL and substitutes various artifacts and operations with their quantum analogues.

In Table 1 we also annotated all described algorithms with the QC platform they run on as well as direct references to implementations where available. We attempted to estimate when some algorithms will be ready for practical use based on their fitness to run on NISQ devices. However, not for all algorithms was a valid estimate possible yet. If predictions are right, NISQ-ready algorithms will probably be the first ones to make an impact in real-world QAI. Lastly, we matched all QAI algorithms to the machine learning pipeline (cf. Figure 2) and denoted the tasks of the machine learning pipeline that are actually executed using quantum computing. This usually means that all other tasks are still performed on classical hardware.
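To make the optimization category concrete, consider a minimal QUBO instance (a toy example of our own, not taken from the surveyed papers): max-cut on a triangle graph, brute-forced classically here where an annealer would sample low-energy states.

```python
import itertools

# Max-cut on a triangle graph written as a QUBO: minimize
# sum_{i<=j} Q[i,j] * x_i * x_j over binary vectors x. QUBO is the problem
# class quantum annealers accept natively; this tiny instance is small
# enough to enumerate exhaustively.

# Standard max-cut encoding: Q[i,i] = -degree(i), Q[i,j] = +2 per edge.
Q = {
    (0, 0): -2, (1, 1): -2, (2, 2): -2,
    (0, 1): 2, (0, 2): 2, (1, 2): 2,
}

def energy(x):
    """QUBO objective; lower energy corresponds to a larger cut."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(itertools.product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # a 2-vs-1 partition cutting 2 of the 3 edges
```

The same `Q` dictionary is, in essence, what gets shipped to annealing hardware; only the minimization routine changes, which is what makes the problem class attractive as a hardware/software seam.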
Even in this small sample set of algorithms, we can notice that quantum algorithms can be used in various places throughout the machine learning pipeline. Naturally, they are focused on computationally expensive tasks. These can mainly be found when modeling the domain, which often means modeling complex probability distributions and sampling from them, and when performing the training, which usually means executing rather lengthy update operations on matrices or similar data structures. However, we can also observe that QAI algorithms naturally are hybrid approaches: a lot of classical steps within the machine learning pipeline are still necessary to produce the results.

Figure 4: The performance of modern deep learning methods compared to more traditional machine learning depending on the amount of available training data (image taken from [8]).
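One recurring hybrid pattern, the quantum device acting as a sampler for a complex distribution that the classical pipeline then consumes, can be sketched with a small classical simulation (our own illustration; in a real hybrid setup, `sample` would be replaced by a hardware call):

```python
import math
import random

# A 2-qubit state simulated classically: the Bell-like state
# (|00> + |11>)/sqrt(2) over the basis states 00, 01, 10, 11.
amplitudes = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
probs = [abs(a) ** 2 for a in amplitudes]

def sample(rng):
    """Measure: draw a basis state with probability |amplitude|^2."""
    return rng.choices(range(4), weights=probs)[0]

rng = random.Random(42)
counts = [0, 0, 0, 0]
for _ in range(10_000):
    counts[sample(rng)] += 1
print(counts)  # mass only on states 0 (|00>) and 3 (|11>)
```

Everything around the sampler (the loop, the bookkeeping, any model update consuming the samples) stays classical, which is exactly the hybrid shape observed above.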

5 CHALLENGES OF QUANTUM-ASSISTED ARTIFICIAL INTELLIGENCE
Having analyzed the needs of AI and the current state of QAI, we can use this background knowledge to derive major challenges for future developments in QAI. Note that unlike other work [35, 42] that formulates challenges in quantum artificial intelligence, we focus less on quantum-technical challenges and more on the changes to development methods that need to be achieved.

Challenge 1 (The Feedback Loop). Replace the feedback loop around training (consisting of the tasks "Select Model/Policy", "Train", and "Assess QoS") entirely with a quantum algorithm.

When performing machine learning, a lot of time is usually spent in training, which usually means fine-tuning a set of parameters in small gradual steps over many iterations. These iterations are often necessary as they incorporate slightly different (sets of) data points into the final model. Here, quantum approaches might not treat training iterations as a sequence of steps but instead perform all training iterations in superposition, thus taking a huge shortcut in training a machine learning model. However, none of the surveyed approaches managed to replace such large parts of the machine learning pipeline with quantum approaches, perhaps because real(istic) quantum machines only provide relatively short coherence times. Quantum RL [18] probably comes closest by performing both the action execution and the resulting update in a single run on the quantum machine, but the algorithm still requires many iterations of training overall. If possible at all, stepping away from iterative training might be the single biggest performance increase quantum computing could offer for AI. Thus, we might refer to the Feedback Loop Challenge as the "Holy Grail of Quantum AI".

Nonetheless, other challenges persist and might be detrimental to achieving this highest of goals. Considering the multitude of QAI algorithms focusing on the domain model, we see that quantum-based representations can be used as models for physical domains (where they are a natural fit), complex stochastic domains (where they can approximate complex probability distributions more cheaply and more precisely), and small domains in general (where quantum-based or quantum-assisted modeling of the domain might yield some benefits further along the pipeline).

That the extremely limited memory capacities of current quantum computers are one of the main bottlenecks for practical applications is well known within the quantum computing community. However, for QAI algorithms, especially the more modern ones, this problem is aggravated, as modern AI algorithms really shine only when processing very large amounts of data [8]. Figure 4 shows a simple sketch of that behavior. Effectively, the need to process relatively large amounts of training data might even, in the long run, prevent us from cutting out the iterative training loop.

Challenge 2 (The Training Data). Provide means to process (the essence of) large amounts of data on quantum computers.

Note that for QAI, we might take a workaround here: using the right hybrid approaches, we might be able to construct classical pre-/postprocessing steps so that we can still process large amounts of data without processing all of them on the quantum machine. Early approaches like Quantum-enhanced RL [39] have improved classical training by doing a preselection of training samples (using a quantum algorithm). Similar approaches could work to reduce the necessary training data for quantum training steps as well.

From these considerations we can already see that the combination and hybridization of various algorithms and techniques might be key to further developing QAI. However, combinations always include additional free parameters: Which algorithms do we use? How and when do they interact? Which domains is a specific combination good for? Furthermore, we not only need to combine different techniques, but these techniques often stem from different fields of science and engineering. That means that even for a relatively standard QAI algorithm, we might require expert knowledge about quantum computing and the platforms it is run on, about AI and classical optimization, and about the domain at hand in order to make the right calls.

Challenge 3 (The Interfaces). Provide standardized interfaces that allow for dynamic combination of QAI components and (by extension) for experts of different fields to collaborate on QAI algorithms.

Standardization is a goal that is often called for throughout various disciplines of science and engineering. However, QAI brings together two largely separate fields, which in their own right develop rapidly and have produced little standardization.
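To make Challenge 3 concrete, the following sketch shows the kind of seam such interfaces would provide (a hypothetical API of our own, not an existing standard): pipeline code is written against a minimal protocol so that classical and quantum backends remain exchangeable.

```python
from typing import Protocol, Sequence

class Trainer(Protocol):
    """Minimal contract for the 'Train' task of the pipeline."""
    def train(self, data: Sequence[float]) -> float:
        """Fit a model parameter from data and return it."""

class ClassicalTrainer:
    def train(self, data: Sequence[float]) -> float:
        return sum(data) / len(data)  # trivial classical baseline

class AnnealerTrainer:
    def train(self, data: Sequence[float]) -> float:
        # Here a QUBO would be built and shipped to annealing hardware;
        # the surrounding pipeline code would not change at all.
        raise NotImplementedError("requires quantum hardware access")

def pipeline(trainer: Trainer, data: Sequence[float]) -> float:
    return trainer.train(data)  # backend-agnostic call site

print(pipeline(ClassicalTrainer(), [1.0, 2.0, 3.0]))
```

With such a seam, a domain expert can write `pipeline` without quantum knowledge, while a QC expert supplies `AnnealerTrainer` independently; swapping backends is a one-argument change, which is also what makes fair classical-vs-quantum comparisons practical.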

Algorithm/Task | QC platform | Impl. available | NISQ | Quantum tasks in ML pipeline
Variational quantum eigensolver [43] | Gate model | PennyLane [41] | Yes | Data/Domain, Use Policy
HHL [24] | Gate model | Qiskit [1] | Unlikely [45] | Data/Domain, Train
Clustering [6] | Gate model | - | No? | Data/Domain, Use Policy
Clustering [32] | Gate model | - | Yes? | Data/Domain, Use Policy
Quantum nearest-neighbor [52] | Gate model | - | - | Data/Domain, Use Policy
Recommendation system [28] | Gate model | - | Unlikely [45] | Data/Domain, Use Policy
SVM [25] | Gate model | Qiskit [5] | Yes | Data/Domain, Use Policy
SVM [54] | Quantum annealing | - | - | Data/Domain, Use Policy
QAOA [19] | Gate model | PennyLane [2] | Yes | Train
QUBO / Ising spin glasses [23, 34] | Quantum annealing | D-WAVE [37] | Yes | Train
Quantum-assisted EA [30] | Quantum annealing | - | - | Train
Quantum BM [53] | Gate model | - | Yes | Train
Quantum BM [9] | Quantum annealing | - | - | Train
Autoencoder [47] | Gate model | [48] | Yes | Train
Autoencoder [29] | Quantum annealing | - | - | Train
Quantum GAN [17, 33] | Gate model | PennyLane [3] | Yes | Data/Domain
Quantum GAN [46] | Gate model | - | Yes | Data/Domain
Quantum GAN [56] | Gate model | Qiskit [4] | Yes | Data/Domain
Quantum-enhanced RL [39] | Quantum annealing | - | - | Train
Quantum RL [18] | Gate model | - | - | Train, Use Policy

Table 1: Selection of QAI algorithms.

It thus becomes imperative to organize the interfaces between AI and QC not through fixed technological standards but based on the involved experts of different expertise [31]. An important part of this challenge is to allow standard software engineering to catch up with recent developments: especially smaller groups will not be able to afford dedicated experts in QC, much less QAI. Instead, software developers should be able to use QAI as seamlessly as they are able to use parallel computing in the cloud now, benefiting from advantages without the need to dive into the technical specifics.

For QC, this challenge requires a degree of technical maturity that is as of yet not reached by most practical frameworks, even though recent developments definitely aim towards making QC technology more accessible. As a lot of effort is put into QC by vendors wanting to sell their applications, the independent development of open standards is required to prevent vendor lock-in and enable QAI applications that span different QC platforms.

Challenge 4 (The Real Reason). Keep track of the source of observed improvements.

Even classical machine learning models can often be treated as nothing more than a black box; even though they are deterministic and mathematically well understood, they just encode a behavior or connections between input and output that are too complex to trace without extreme computational effort. This is why in recent years, AI researchers have shown increased interest in methods of testing and verifying the performance of AI [11, 13, 15].

For QAI, this black-box property may be enforced by nature: we physically cannot introspect the probability distribution of states of a quantum machine while it is computing. That is all the more reason why we need quantum-appropriate testing and verification. In this light, it is rather curious that we found no QAI algorithms that specifically tackle the last few tasks of the machine learning pipeline, especially "Monitor QoS", which should be of utmost importance to practical applications.

Challenge 4, however, focuses on the reason why we need especially thorough testing in QAI: we need to constantly justify using a quantum machine. QAI will only be a success if the quantum part of the algorithm is the part that brings about the advantage over comparable methods. However, especially in the field of AI it is easy to construct a superior AI model by accident: a few lucky random numbers in the stochastic training process might result in a better performing AI. Or any part of a QAI algorithm (made up of various classical parts as well) might just match the current (state of the) domain in the right way.

The more complex QAI algorithms become, the harder it might be to find a fair comparison in the purely classical world. Still, we need to provide researchers and developers in the field of QAI with the right tools to easily trace the significance and the reason of perceived advantages in comparison to other algorithms. If QC is eventually going to benefit AI, we need to be able to know exactly when and for what reason.

6 OUTLOOK
In this paper we took a long tour from the challenges AI already poses to software engineering to the even more peculiar challenges that QAI poses to software engineering. Still, we argued that QC may greatly help in alleviating the problems that the development of increasingly better AI is going to face in the upcoming years. On the flip side, AI methods with inherent robustness to noise might be an ideal testbed for early NISQ applications.

We defined four major challenges that stand without any claim to completeness. On the contrary, we expect every researcher in the field to be able to add quite a few more. However, we feel that the analysis of the projected future developments in AI and the current state of the art in QAI allowed us to deduce some of the most ambitious goals to tackle.

We hope that discussing these highly ambitious challenges benefits the development of the young field of QAI and are confident that future research will (purposefully or inadvertently) make progress with respect to these challenges.

ACKNOWLEDGMENTS
This work was supported by the Federal Ministry of Economic Affairs and Energy, Germany, as part of the PlanQK project developing a platform and ecosystem for quantum-assisted artificial intelligence (see planqk.de).

REFERENCES
[1] [n.d.]. HHL implementation. https://github.com/Qiskit/qiskit-iqx-tutorials/blob/master/qiskit/advanced/aqua/linear_systems_of_equations.ipynb.
[2] [n.d.]. QAOA implementation. https://pennylane.ai/qml/app/tutorial_qaoa_maxcut.html.
[3] [n.d.]. QGAN PennyLane implementation. https://pennylane.ai/qml/app/tutorial_QGAN.html.
[4] [n.d.]. QGAN Qiskit implementation. https://github.com/Qiskit/qiskit-iqx-tutorials/blob/master/qiskit/advanced/aqua/artificial_intelligence/qgans_for_loading_random_distributions.ipynb.
[5] [n.d.]. SVM implementation. https://github.com/Qiskit/qiskit-iqx-tutorials/blob/master/qiskit/advanced/aqua/artificial_intelligence/qsvm_classification.ipynb.
[6] Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. 2007. Quantum clustering algorithms. In Proceedings of the 24th International Conference on Machine Learning.
[7] DJ Albers, JC Sprott, and WD Dechert. 1996. Dynamical behavior of artificial neural networks with random weights. Intelligent Engineering Systems Through Artificial Neural Networks 6 (1996), 17–22.
[8] Md Zahangir Alom, Tarek M Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Mahmudul Hasan, Brian C Van Essen, Abdul AS Awwal, and Vijayan K Asari. 2019. A state-of-the-art survey on deep learning theory and architectures. Electronics 8, 3 (2019), 292.
[9] Mohammad H Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. 2018. Quantum Boltzmann machine. arXiv:1601.02036 (2018).
[10] Dario Amodei and Danny Hernandez. 2018. AI and Compute. https://openai.com/blog/ai-and-compute/.
[11] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. Concrete problems in AI safety. arXiv:1606.06565 (2016).
[12] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando GSL Brandao, David A Buell, et al. 2019. Quantum supremacy using a programmable superconducting processor. Nature 574, 7779 (2019), 505–510.
[13] Lenz Belzner, Michael Till Beck, Thomas Gabor, Harald Roelle, and Horst Sauer. 2016. Software engineering for distributed autonomous real-time systems. In Proceedings of the 2nd International Workshop on Software Engineering for Smart Cyber-Physical Systems. ACM, 54–57.
[14] Tomas Bures, Danny Weyns, Bradley Schmer, Eduardo Tovar, Eric Boden, Thomas Gabor, Ilias Gerostathopoulos, Pragya Gupta, Eunsuk Kang, Alessia Knauss, et al. 2017. Software Engineering for Smart Cyber-Physical Systems: Challenges and Promising Solutions. ACM SIGSOFT Software Engineering Notes 42, 2 (2017), 19–24.
[15] Radu Calinescu, Carlo Ghezzi, Marta Kwiatkowska, and Raffaela Mirandola. 2012. Self-adaptive software needs quantitative verification at runtime. Commun. ACM 55, 9 (2012), 69–77.
[16] Giuseppe Cuccu, Julian Togelius, and Philippe Cudré-Mauroux. 2019. Playing Atari with six neurons. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 998–1006.
[17] Pierre-Luc Dallaire-Demers and Nathan Killoran. 2018. Quantum generative adversarial networks. arXiv:1804.08641v2 (2018).
[18] Daoyi Dong, Chunlin Chen, Hanxiong Li, and Tzyh-Jong Tarn. 2008. Quantum reinforcement learning. arXiv:0810.3828 (2008).
[19] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. 2014. A quantum approximate optimization algorithm. arXiv:1411.4028 (2014).
[20] Thomas Fösel, Petru Tighineanu, Talitha Weiss, and Florian Marquardt. 2018. Reinforcement learning with neural networks for quantum feedback. Physical Review X 8, 3 (2018), 031084.
[21] Thomas Gabor, Marie Kiermeier, Andreas Sedlmeier, Bernhard Kempter, Cornel Klein, Horst Sauer, Reiner Schmid, and Jan Wieghardt. 2018. Adapting quality assurance to adaptive systems: the scenario coevolution paradigm. In International Symposium on Leveraging Applications of Formal Methods. Springer, 137–154.
[22] Thomas Gabor, Andreas Sedlmeier, Thomy Phan, Fabian Ritz, Marie Kiermeier, Lenz Belzner, Bernhard Kempter, Cornel Klein, Horst Sauer, Reiner Schmid, Jan Wieghardt, Marc Zeller, and Claudia Linnhoff-Popien. 2020. The Scenario Co-Evolution Paradigm: Adaptive Quality Assurance for Adaptive Systems. International Journal on Software Tools and Technology Transfer (2020).
[23] Fred Glover, Gary Kochenberger, and Yu Du. 2018. A tutorial on formulating and using QUBO models. arXiv:1811.11538 (2018).
[24] Aram W Harrow, Avinatan Hassidim, and Seth Lloyd. 2009. Quantum algorithm for linear systems of equations. arXiv:0811.3171v3 (2009).
[25] Vojtěch Havlicek, Antonio D Córcoles, Kristan Temme, Aram W Harrow, Abhinav Kandala, Jerry M Chow, and Jay M Gambetta. 2018. Supervised learning with quantum-enhanced feature spaces. arXiv:1804.11326v2 (2018).
[26] Matthias Hölzl, Nora Koch, Mariachiara Puviani, Martin Wirsing, and Franco Zambonelli. 2015. The ensemble development life cycle and best practices for collective autonomic systems. In Software Engineering for Collective Autonomic Systems.
[28] Iordanis Kerenidis and Anupam Prakash. 2016. Quantum recommendation systems. arXiv:1603.08675v3 (2016).
[29] Amir Khoshaman, Walter Vinci, Brandon Denis, Evgeny Andriyash, Hossein Sadeghi, and Mohammad H Amin. 2018. Quantum variational autoencoder. arXiv:1802.05779v2 (2018).
[30] James King, Masoud Mohseni, William Bernoudy, Alexandre Fréchette, Hossein Sadeghi, Sergei V Isakov, Hartmut Neven, and Mohammad H Amin. 2019. Quantum-Assisted Genetic Algorithm. arXiv:1907.00707 (2019).
[31] Frank Leymann, Johanna Barzen, and Michael Falkenthal. 2019. Towards a Platform for Sharing Quantum Software. Proceedings of the 13th Advanced Summer School on Service-Oriented Computing (2019), 70–74.
[32] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. 2013. Quantum algorithms for supervised and unsupervised machine learning. arXiv:1307.0411 (2013).
[33] Seth Lloyd and Christian Weedbrook. 2018. Quantum generative adversarial learning. arXiv:1804.09139 (2018).
[34] Andrew Lucas. 2014. Ising formulations of many NP problems. Frontiers in Physics 2 (2014), 5.
[35] A Manju and Madhav J Nigam. 2014. Applications of quantum inspired computational intelligence: a survey. Artificial Intelligence Review 42, 1 (2014), 79–156.
[36] Catherine C McGeoch. 2014. Adiabatic quantum computation and quantum annealing: Theory and practice. Synthesis Lectures on Quantum Computing 5, 2 (2014), 1–93.
[37] Catherine C McGeoch and Cong Wang. 2013. Experimental evaluation of an adiabiatic quantum system for combinatorial optimization. In Proceedings of the ACM International Conference on Computing Frontiers. 1–11.
[38] Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. 2018. Data-efficient hierarchical reinforcement learning. In Advances in Neural Information Processing Systems. 3303–3313.
[39] Florian Neukart, David Von Dollen, Christian Seidel, and Gabriele Compostella. 2018. Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces. arXiv:1708.09354v3 (2018).
[40] Oscar Nierstrasz, Marcus Denker, Tudor Gîrba, Adrian Lienhard, and David Röthlisberger. 2008. Change-enabled software systems. In Software-Intensive Systems and New Computing Paradigms. Springer, 64–79.
[41] PennyLane. [n.d.]. VQE implementation. https://pennylane.ai/qml/demos/tutorial_vqe.html.
[42] Alejandro Perdomo-Ortiz, Marcello Benedetti, John Realpe-Gómez, and Rupak Biswas. 2018. Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers. Quantum Science and Technology 3, 3 (2018), 030502.
[43] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O'Brien. 2014. A variational eigenvalue solver on a photonic quantum processor. Nature Communications 5 (2014).
[44] Riccardo Porotti, Dario Tamascelli, Marcello Restelli, and Enrico Prati. 2019. Coherent transport of quantum states by deep reinforcement learning. Communications Physics 2, 1 (2019), 1–9.
[45] John Preskill. 2018. Quantum Computing in the NISQ era and beyond. arXiv:1801.00862v3 (2018).
[46] Jonathan Romero and Alán Aspuru-Guzik. 2019. Variational quantum generators: Generative adversarial quantum machine learning for continuous distributions. arXiv:1901.00848 (2019).
[47] Jonathan Romero, Jonathan P Olson, and Alán Aspuru-Guzik. 2017. Quantum autoencoders for efficient compression of quantum data. arXiv:1612.02806v2 (2017).
[48] Sukin Sim, Evan Anderson, Eric Brown, and Jonathan Romero. [n.d.]. Autoencoder implementation. https://github.com/hsim13372/QCompress-1.
[49] Richard Sutton. 2019. The Bitter Lesson. http://www.incompleteideas.net/IncIdeas/BitterLesson.html.
[50] The AlphaStar team. 2019. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii.
[51] Danny Weyns. 2017. Software engineering of self-adaptive systems: an organised tour and future challenges. (2017).
[52] Nathan Wiebe, Ashish Kapoor, and Krysta Svore. 2014. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. arXiv:1401.2142v2 (2014).
[53] Nathan Wiebe and Leonard Wossnig. 2019. Generative training of quantum Boltzmann machines with hidden units. arXiv:1905.09902 (2019).
[54] D. Willsch, M. Willsch, H. De Raedt, and K. Michielsen. 2019. Support vector machines on the D-Wave quantum annealer. arXiv:1906.06283 (2019).
[55] Stephen Wolfram. 2018. Buzzword Convergence: Making Sense of Quantum Neural Blockchain AI. https://writings.stephenwolfram.com/2018/04/buzzword-convergence-making-sense-of-quantum-neural-blockchain-ai/.
[56] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. 2019. Quantum generative adversarial networks for learning and loading random distributions. arXiv:1904.00043 (2019).
Springer, 325–354. erative adversarial networks for learning and loading random distributions. [27] Tadashi Kadowaki and Hidetoshi Nishimori. 1998. Quantum annealing in the arXiv:1904.00043 (2019). transverse Ising model. Physical Review E 58, 5 (1998), 5355. All URLs have been accessed on March 1, 2020.