An Exploration of Quantum Theory and its Applications

Khaled C. Allen
Department of Mathematics

Defense Date: October 29, 2020

Undergraduate Honors Thesis University of Colorado Boulder

Thesis Advisor: Dr. Divya Vernerey, Department of Mathematics

Defense Committee:
Dr. Divya Vernerey, Department of Mathematics
Dr. Nathaniel Thiem, Department of Mathematics
Dr. Graeme Baird Smith, Department of Physics

1 Abstract

Quantum computation has provided algorithms that outperform classical algorithms in a variety of fields, and recently, game theory has been added to that list [7]. It has been shown that a quantum strategy dominates classical strategies in examples such as a simple coin-flip game [7] and the prisoner's dilemma [5]. Eisert, Wilkens, and Lewenstein have provided a model to quantize general 2-player games [5]. Dahl and Landsburg explore the application of quantum randomization devices to player strategies in classical two-person games [4]. Many problems in quantum computer science, such as Grover's search and Simon's problem, can be modeled as games [7], and the quantum algorithms that optimally solve them can be seen as player moves in a game state. Additionally, error correction [7] and cryptographic problems lend themselves well to modeling as games [10]. In many of these cases, modeling the problem as a game enables actors to overcome inefficiencies caused by lack of communication, as in the prisoner's dilemma [5] and in a simple cooperative game proposed for the GHZ Paradox [6]. Since solutions to problems can be translated into games, this raises the question of whether modeling more problems as games can lead to the construction of algorithms to solve them. Additionally, the potential for quantum systems to provide new means to coordinate action merits further exploration. This thesis is both an introduction to, and a review of some of the existing literature on, quantum game theory. As such, we provide additional details of the computations done in the papers under review in hopes of elucidating the overall process of constructing and analyzing quantum games. We also develop and present a new game that extends the game of [7] and applies analytic techniques from [5]. A Q# programming language implementation of this new game is also provided.

Contents

1 Abstract
2 Background
  2.1 Overview
3 General Theory of Quantum Games
  3.1 Formal Description of a Quantum Game
    3.1.1 Quantum Moves
    3.1.2 Payout Functions
  3.2 Game Theory
    3.2.1 Nash Equilibria
    3.2.2 Pure and Mixed Strategies
4 Examples of Quantum Games
  4.1 PQ Penny Flip
    4.1.1 Setup of the Quantum Coin Flip Game
    4.1.2 The Classical Deterministic Strategy
    4.1.3 The Classical Probabilistic Strategy
    4.1.4 Quantum Strategy, Picard's Classical Moves
    4.1.5 Optional: Calculating Payout
    4.1.6 Quantum Strategy, Q's Quantum Moves
    4.1.7 Quantum Strategy Analysis
  4.2 Prisoner's Dilemma
    4.2.1 The Quantum Prisoner's Dilemma
    4.2.2 Available Moves
    4.2.3 Measurements
    4.2.4 Taking a Turn
    4.2.5 Calculating Payout
    4.2.6 The General Case
    4.2.7 Calculating the Trace
    4.2.8 General Payout
    4.2.9 A New Nash Equilibrium
  4.3 Zero-Sum Simultaneous PQ Penny Flip
    4.3.1 Classical Strategic Space
    4.3.2 Restricted Quantum Strategic Space
    4.3.3 The Full Bloch Sphere Strategic Space
    4.3.4 Simultaneous PQ Penny Flip with PD Starting State and Move Set
5 A Quantum Game Theoretic Approach to a Communication Problem
  5.1 Classical Strategies Fail
  5.2 Quantum Strategy
6 Conclusion and Extensions
7 Acknowledgements
A Fundamentals of Quantum Computing
  A.1 The Bloch Sphere
  A.2 Dirac Notation and Vector Notation
  A.3 The Math of Measurement
  A.4 Operations on Qubits
  A.5 Tensor Products and Registers
B Q# Implementation of Simultaneous PQ Penny Flip
C Mathematica Notebooks for Calculations and Payout Graphs
D Bibliography

2 Background

Since its formalization by von Neumann and Morgenstern [8], game theory has proven a valuable tool in modeling phenomena in economics, computer science, physics, biology, and other fields [7, 10]. As these fields have progressed, they have had to contend with the quantum nature of phenomena on small scales. Thus, game theory stands to be generalized to deal with the nature of information exchange and feedback on quantum scales [7]. Quantum computation has provided algorithms that outperform classical algorithms in a variety of fields, and recently, game theory has been added to that list [7]. For example, it has been shown that a quantum strategy dominates classical strategies in a simple coin-flip game [7] and the prisoner's dilemma [5]. Eisert, Wilkens, and Lewenstein have provided a model to quantize general 2-player games [5]. Dahl and Landsburg have explored the application of quantum randomization devices to player strategies in classical two-person games [4]. Many problems in quantum computer science can be modeled as games, such as Grover's search and Simon's problem [7], and the quantum algorithms that optimally solve them can be seen as player moves in a game state. Additionally, error correction [7] and cryptographic and communication problems (the GHZ1 puzzle at the end of this paper being an example of the latter) lend themselves well to modeling as games [10, 12]. Since quantum algorithm solutions to problems can be translated to games, this raises the question of whether modeling problems as games can lead to the construction of an algorithm to solve them.

2.1 Overview

This thesis is primarily concerned with three goals:

1. compiling and demonstrating a general theory for the formulation and analysis of quantum games,

2. examining several illustrative examples of quantum game theory, and

3. developing a quantum game in order to analyze it using established methods and model it using a quantum computing language, Q#.

Additionally, we briefly explore how modeling a communication problem as a quantum game provides a fruitful method of solving it. The general theory for quantum games comes from various sources in the literature, particularly [5, 7, 2]. Each has provided some exploration of fundamental quantum game theoretic principles, and we construct a cohesive, universal theory that applies to all the games under analysis. Then, we explore specific quantum games: the PQ Penny Flip developed by [7]; the Quantum Prisoner's Dilemma developed by [5]; and an extended version of the PQ Penny Flip we have modified to explore implications for a simple zero-sum quantum game. In each case, we provide details of the calculations that have been left out of the published works to elucidate the underlying process of calculating game theoretic equilibria, as well as to facilitate

1Named after Daniel Greenberger, Michael Horne, and Anton Zeilinger, who discovered the related paradox in the late 1980s [6].

modeling games with Q#. We also present a Q# implementation of the Simultaneous PQ Penny Flip that can be run and played on a classical computer. The remainder of the paper examines how situations that are not generally thought of as games can be modeled as such, specifically looking at the GHZ Paradox coordination challenge from the perspective of quantum game theory. The goal is to determine if viewing these problems through the lens of a game may have suggested the approaches used to solve them, or suggests alternate approaches. The appendices contain background information on the quantum computing theory necessary to understand the topics presented. Additionally, the code for the Q# implementation of the Simultaneous PQ Penny Flip and the Mathematica code used for generating the graphs in the paper are provided in the appendices.

3 General Theory of Quantum Games

A classical game consists of a set of players, a set of actions each player may take, and a preference relation that ranks the game’s outcomes for each player.

Definition 3.1. A game consists of a set N of players, a set A_i of actions each player i ∈ N may take, and a preference relation, ≥, for each player on the set A = ×_{i∈N} A_i [9].

The preference relation can be thought of as a payoff function, since it allows player i to rank the various outcomes of the game. Eisert provides a straightforward definition of a quantum game: "Any quantum system which can be manipulated by two parties or more and where the utility of the moves can be reasonably quantified, may be conceived of as a quantum game" [5]. In conceptual terms, a quantum game is a quantum system (see Appendix A) that the players can manipulate, and which has certain payouts to each player associated with particular end states. Each player applies their move to the quantum system, in the form of a unitary matrix, and attempts to ensure that the system ends in the state that provides the highest payout to them, constrained by game theoretic concepts such as equilibria and optima.

3.1 Formal Description of a Quantum Game

Extending the definition of a quantum game given by [5] from 2 to n players:

Definition 3.2. A game Γ of n players is specified by its components

Γ = (H, ρ, S_1, ..., S_n, P_1, ..., P_n),    (1)

where

• H is the Hilbert space of the n-qubit quantum system used for computation,

• ρ is the n-qubit starting state with ρ ∈ S(H), where S(H) is the space of all possible states of the quantum system (consisting of unit vectors in H),

• S_1, S_2, ..., S_n are sets of permissible quantum moves, trace-preserving maps (see Section 3.1.1), for each player, and

• P_1, P_2, ..., P_n are utility functionals, with each

P_i : S_1 × ··· × S_n → R
(s_1, ..., s_n) ↦ P_i(s_1, ..., s_n),

that compute the payout at the end of the game for each player i.

Note that this definition does not explicitly specify a set of players. However, the set of players is implied in the n-qubit size of the starting state ρ. Player i applies their strategy, s_i, selected from the permissible quantum strategies in S_i. Each strategy s_i has an associated unitary U_i, which acts by conjugation on the i-th qubit of the starting state. Thus, to compute the end state after all players apply their strategies to ρ, we conjugate the starting state by the tensor product of the associated unitaries of all players' strategies (in order), so that the final state σ at the end of the game is given by

σ = (s_1 ⊗ ··· ⊗ s_n)(ρ) = (U_1 ⊗ ··· ⊗ U_n) ρ (U_1 ⊗ ··· ⊗ U_n)†.    (2)

See Appendix A for details on the mechanics of tensor products, conjugation, and the conjugate transpose (†) as used here.
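To make (2) concrete, the conjugation can be carried out numerically. The following Python sketch is our own illustration (the thesis's implementation language is Q#); the starting state |00⟩⟨00| and the Hadamard/identity moves are hypothetical choices for the demo:

```python
import math

def mat_mul(A, B):
    """Matrix product of two lists-of-lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Tensor (Kronecker) product A (x) B."""
    n, m = len(B), len(B[0])
    return [[A[i // n][j // m] * B[i % n][j % m]
             for j in range(len(A[0]) * m)] for i in range(len(A) * n)]

def dagger(A):
    """Conjugate transpose."""
    return [[complex(A[j][i]).conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

# Hypothetical 2-player setup: rho = |00><00|, U1 = Hadamard, U2 = identity.
rho = [[1 if i == j == 0 else 0 for j in range(4)] for i in range(4)]
h = 1 / math.sqrt(2)
U1 = [[h, h], [h, -h]]
U2 = [[1, 0], [0, 1]]

U = kron(U1, U2)
sigma = mat_mul(mat_mul(U, rho), dagger(U))   # eq. (2): sigma = U rho U†
# The diagonal of sigma gives the outcome probabilities: [0.5, 0, 0.5, 0].
```

The diagonal of the resulting σ is exactly the probability of each measured outcome, which is how the payout functions of Section 3.1.2 consume it.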

3.1.1 Quantum Moves

We start by defining the basic terms that characterize quantum moves: trace-preserving and unitary.

Definition 3.3. A trace-preserving map in the context of a quantum game is a complex-valued matrix A such that, for all complex-valued matrices B,

tr[AB] = tr[B] = tr[BA],

where the trace of X, tr[X], is the sum of the elements on the main diagonal of X.

Definition 3.4. A complex-valued matrix A is unitary if AA† = I = A†A, where A† is the conjugate transpose of A.

The quantum move set for all players is then selected from the set of trace-preserving maps. For example, [5] use the two-parameter unitary

U(θ, φ) = [ e^{iφ} cos(θ/2)      sin(θ/2)        ]
          [ −sin(θ/2)            e^{−iφ} cos(θ/2) ]    (3)

for the prisoner's dilemma, while the Simultaneous PQ Penny Flip we develop in this thesis uses

U(θ, φ) = [ e^{−iφ/2} cos(θ/2)   −e^{iφ/2} sin(θ/2) ]
          [ e^{−iφ/2} sin(θ/2)    e^{iφ/2} cos(θ/2) ]    (4)

with θ ∈ [0, π], φ ∈ [0, π/2] for both. Note that (4) corresponds to a composition of rotations of a single qubit's state vector, first about the z-axis and then about the y-axis, in a two-dimensional complex vector space (see Appendix A.1 for more details on this model). This move set was chosen specifically to be consistent with how general operations are implemented in Q#.

3.1.2 Payout Functions

With this move set, we can consider the payout function. Using the same convention as [5], let

π_{s_1,...,s_n} = |ψ_{s_1,...,s_n}⟩⟨ψ_{s_1,...,s_n}|    (5)

be the state of the game corresponding to strategies s_1, ..., s_n having been played (see Appendix Section A.3 for details on the |x⟩⟨x| notation). These π are the meaningful results of the game. (So, for example, we might designate π_H and π_T to indicate a quantum coin being in the heads or tails state respectively, but not an in-between state, though a player could certainly play an in-between state. However, if we wanted to consider an in-between state meaningful, we could as well.) Then

tr[π_{s_1,...,s_n} σ]    (6)

is the result of measuring the end state σ with regard to the π_{s_1,...,s_n} basis (see Appendix A for details on this method of measurement of a quantum system). In effect, what we get from (6) is the probability of measuring σ in state π_{s_1,...,s_n}, keeping in mind that the players do not need to play exactly the set of strategies associated with π_{s_1,...,s_n} for the state to be measured in π_{s_1,...,s_n}.²

Then in the payout function of each player A, each π_{s_1,...,s_n} has an associated weight coefficient, A_{s_1,...,s_n}. For example, the value to Player A of a particular selection of strategies (s_1, ..., s_n) by each player is

A_{s_1,...,s_n} tr[π_{s_1,...,s_n} σ] = A_{s_1,...,s_n} tr[|ψ_{s_1,...,s_n}⟩⟨ψ_{s_1,...,s_n}| (U_1 ⊗ ··· ⊗ U_n) ρ (U_1 ⊗ ··· ⊗ U_n)†].    (7)

Generalizing [5]'s two-player payout function, we can state the definition of a payout function.

Definition 3.5. The payout function for Player X is the sum, over all possible combinations of played strategies, of the result of measuring the end state, times the corresponding coefficients which indicate how much the resulting outcome is worth to Player X, given by

P_X = Σ_{i_1} Σ_{i_2} ··· Σ_{i_n} X_{s_{i_1},...,s_{i_n}} tr[π_{s_{i_1},...,s_{i_n}} σ]
    = Σ_{i_1,...,i_n} X_{s_{i_1},...,s_{i_n}} tr[π_{s_{i_1},...,s_{i_n}} σ].    (8)

As it will be useful in the coming analysis, in a two-player game with two possible moves 1 and 2 for each player, the payout for Player X is given by

P_X = X_{11} tr[π_{11} σ] + X_{12} tr[π_{12} σ] + X_{21} tr[π_{21} σ] + X_{22} tr[π_{22} σ].    (9)
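Because each π_{jk} in (9) is a projector onto a pure outcome state, tr[π_{jk} σ] = ⟨ψ_{jk}|σ|ψ_{jk}⟩, so the payout is just a weighted sum of expectation values. A minimal Python sketch of the two-player form (our own illustration; the outcome states and weights below are hypothetical):

```python
def expect(psi, sigma):
    """tr[|psi><psi| sigma] = <psi|sigma|psi> for a pure outcome state psi."""
    v = [sum(sigma[i][j] * psi[j] for j in range(len(psi))) for i in range(len(psi))]
    return sum(psi[i].conjugate() * v[i] for i in range(len(psi))).real

def payout(weights, outcome_states, sigma):
    """Eq. (9): P_X = sum over jk of X_jk * tr[pi_jk sigma]."""
    return sum(w * expect(psi, sigma) for w, psi in zip(weights, outcome_states))

# Demo: computational-basis outcome states and hypothetical weights X_11..X_22.
basis = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
sigma = [[0] * 4 for _ in range(4)]
sigma[2][2] = 1            # end state |10><10|
weights = [3, 0, 5, 1]     # hypothetical payoff coefficients
print(payout(weights, basis, sigma))  # -> 5
```

Only the outcome actually occupied by σ contributes, so the payout picks out the corresponding weight.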

3.2 Game Theory

Here, we present two important concepts of game theory used throughout the text.

2 In a quantum system, measuring the state affects the state, so if we measure it in state π_{s_1,...,s_n}, it will then be found to be in state π_{s_1,...,s_n} in all subsequent measurements. This is quite different from classical computer science, where we can inspect a bit or bit register as often as we like without altering its state. While this may seem like a handicap, "it's not a bug. It's a feature," and we will use this property in Section 5 as part of the strategy to solve the puzzle. The philosophical implications of this behavior of quantum systems are interesting, but not relevant to this paper.

3.2.1 Nash Equilibria

We are primarily concerned with finding equilibrium states in the games under analysis. In a classical game, a Nash equilibrium is a set of actions, one for each player, from which no player can improve their payout by unilaterally changing their strategy.

Definition 3.6. Let G = (N, A_i, ≥) be a classical game. A Nash equilibrium is a set of actions a_i* for every player i such that (a_i*, a_j*) ≥ (a_i, a_j*) for all j ≠ i [9].

That is, given that every other player j plays their best strategy a_j*, player i's strategy a_i* is preferable to any other strategy player i may play, and this is true for all players. The word action in the definition is used to indicate a strategy that consists of a single, specific action, as opposed to a mix of actions based on a probability distribution. We use strategy throughout the text to indicate both strategies that consist of a single action and strategies that involve a probabilistic mix of multiple actions. This definition translates easily to the quantum case.

Definition 3.7. Let Γ = (H, ρ, S_1, ..., S_n, P_1, ..., P_n) be a quantum game. A quantum Nash equilibrium is a set of strategies s_i* for every player i such that (s_i*, s_j*) ≥ (s_i, s_j*) for all j ≠ i.

In general, a game may have more than one Nash equilibrium, and the game can be expected to fall into one of them: players (who are assumed to be perfectly rational) will play the strategies corresponding to a Nash equilibrium, which may not be optimal strategies. For example, in the prisoner's dilemma, we can see the payoff function as a table (Table 1).

                Player 2
                C      D
  Player 1  C  3,3    0,5
            D  5,0    1,1

Table 1: The payout table for the classical Prisoner's Dilemma, reproduced from [5]. Player 1's moves are along the left side and Player 2's moves are along the top.

Player 1’s best action is D, since by playing D, their lowest payout is 1 and their highest is 5 which beats the alternative of a lowest of 0 and a highest of 3. The same is true for Player 2. Thus, (D,D) ≥ (C,D) for Player 1 and (D,D) ≥ (D,C) for Player 2, so (D,D) is the Nash equilibrium. Note the preference relation of the game leads to the fact that, in a competitive game, each player will choose a strategy to maximize their minimum payout. If there are two players, and the loss of Player A is the gain of Player B, we see this as

max min(PA) = min max(PB). (10) SA SA SB SB This was proven early on for classical games by [8] and for quantum games by [2].
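The pure-equilibrium reasoning above can be checked mechanically. This short brute-force search over the action profiles of Table 1 is our own illustration, not part of the thesis:

```python
from itertools import product

# (row, column): (Player 1 payout, Player 2 payout), from Table 1.
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def pure_nash(payoffs, actions=("C", "D")):
    """Return all pure action profiles where neither player gains by deviating."""
    equilibria = []
    for a1, a2 in product(actions, actions):
        best1 = all(payoffs[(a1, a2)][0] >= payoffs[(b, a2)][0] for b in actions)
        best2 = all(payoffs[(a1, a2)][1] >= payoffs[(a1, b)][1] for b in actions)
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

print(pure_nash(payoffs))  # -> [('D', 'D')]
```

The search confirms that (D, D) is the unique pure Nash equilibrium, even though (C, C) would pay both players more.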

3.2.2 Pure and Mixed Strategies

In a classical game, the set of probability distributions over Ai is called the set of mixed strategies, and in contrast, a member of Ai is called a pure strategy [9]. Conceptually, a

mixed strategy corresponds to assigning to each possible action a probability of play (in Rock-Paper-Scissors, for example, assuming your opponent is perfectly random, your best strategy is a mixed strategy of playing rock one third of the time, paper one third of the time, and scissors one third of the time). A pure strategy is a single specific action. Note that [9] defines the mixed extension of a game, which has as its set of actions the mixed strategies created by taking the probability distributions over A_i. Strictly speaking, a mixed strategy cannot be realized in a single play of a game, since for any given move a player has to pick a single action; a mixed strategy relies on the player's ability to play actions over many rounds. Every finite strategic game has a mixed strategy Nash equilibrium [9], and [5] shows that for every quantum game, there exist mixed quantum strategy Nash equilibria.

4 Examples of Quantum Games

This section presents three quantum games: a very simple penny-flipping game by [7] in which only one of the players can make a quantum move; a quantum version of the Prisoner’s Dilemma by [5] in which the use of a quantum move set enables the players to overcome the dilemma; and a new two-person zero-sum coin flip game we developed in the process of writing this thesis. The first two games are presented to introduce the concepts of quantum game theory in a concrete way, as well as to familiarize the reader with the concepts and methods that will be used to construct and analyze the third game.

4.1 PQ Penny Flip

The PQ Penny Flip is a very simple game developed by [7] to elucidate some of the most basic features of a quantum game. It is a good example because it is simple and only introduces quantum strategies for one of the players, while still illustrating the interesting results that come from playing a game in a quantum space.

4.1.1 Setup of the Quantum Coin Flip Game

The starship Enterprise is facing some imminent and apparently inescapable calamity when Q arrives and offers to help, but only if Captain Picard can beat him in a simple coin flip game. The rules are as follows:

• A quantum coin is placed heads up in a quantum box.

• They take turns flipping or not flipping the coin, without revealing the coin or the move:
  – Q chooses whether to flip the coin or not.
  – Picard chooses whether to flip the coin or not.
  – Q again chooses whether to flip the coin or not.

• If the coin comes up heads, Q wins. Otherwise, Picard wins.

4.1.2 The Classical Deterministic Strategy

As [7] points out, from a classical perspective, Picard has a 50/50 chance of winning. This can be seen in Table 2, which shows Picard's payoff for every possible combination of moves. Picard's moves are notated on the left side, and each of Q's two-move combinations is shown along the top, where N means "no flip" and F means "flip". So if Q chooses to flip, Picard chooses no flip, and Q chooses flip, we reference the cell at the intersection of row N and column FF, which has payout −1, since the coin ends heads face up and so Picard loses.

             Q
          NN   NF   FN   FF
  P   N   −1    1    1   −1
      F    1   −1   −1    1

Table 2: Payout table for the PQ Penny Flip, reproduced from [7]. Picard's moves are on the left side, and Q's are along the top.

Thus there is no pure deterministic Nash equilibrium: no matter what strategy Q chooses, Picard may improve his payoff by changing his strategy while Q does not, and vice versa.
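The same brute-force equilibrium check used for classical games confirms this (our own sketch; since the game is zero-sum, Q's payout is the negative of Picard's):

```python
from itertools import product

# Picard's payout from Table 2; Q's payout is the negative (zero-sum).
table = {("N", "NN"): -1, ("N", "NF"): 1, ("N", "FN"): 1, ("N", "FF"): -1,
         ("F", "NN"):  1, ("F", "NF"): -1, ("F", "FN"): -1, ("F", "FF"): 1}

def pure_nash(table):
    """Pure profiles where neither Picard nor Q can improve by deviating alone."""
    equilibria = []
    for p, q in product(("N", "F"), ("NN", "NF", "FN", "FF")):
        picard_best = all(table[(p, q)] >= table[(b, q)] for b in ("N", "F"))
        q_best = all(-table[(p, q)] >= -table[(p, b)]
                     for b in ("NN", "NF", "FN", "FF"))
        if picard_best and q_best:
            equilibria.append((p, q))
    return equilibria

print(pure_nash(table))  # -> []
```

The empty result shows that in every cell of Table 2, the losing player could have won by unilaterally switching, so no pure profile is stable.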

4.1.3 The Classical Probabilistic Strategy

Picard's best classical outcome is obtained by using a mixed strategy: flip the coin with probability p = 1/2. Q's is likewise to choose one of his four options with p = 1/4 [7]. The expected payoff is then 0 for both players, and no change in strategy will improve the payoff for either player if the other does nothing, so this is a mixed Nash equilibrium. So, Picard implements this strategy. However, Q wins. And proceeds to win the next 9 games.3 It is hard to claim that Q is playing fair, since he is using a move set that Picard doesn't even have access to, but we can demonstrate that he is not "breaking any laws of physics" to do so.

4.1.4 Quantum Strategy, Picard's Classical Moves

To see what Q is doing, we follow [7]'s presentation of the game, with some added details. Define a vector space with basis |H⟩ = (1, 0)ᵀ, |T⟩ = (0, 1)ᵀ. Then pure classical strategies, expressed as operators on this space, are

F := [ 0 1 ]    N := [ 1 0 ]
     [ 1 0 ],        [ 0 1 ].    (11)

Then F|H⟩ = |T⟩ and N|H⟩ = |H⟩. Similarly, a mixed strategy of "flip the coin with probability p" is a stochastic matrix

pF + (1 − p)N = [ 1 − p    p   ]
                [   p    1 − p ].    (12)

3In [7], the explanation is that Picard negotiates an extra game on the basis that he only got one move to Q’s two. Nonetheless, Q is able to win 100% of the games played.

Assuming the 50/50 optimal strategy Picard uses, we can see the expected end state as

[ 0.5 0.5 ] [ 1 ]   [ 0.5 ]
[ 0.5 0.5 ] [ 0 ] = [ 0.5 ] = 0.5(|H⟩ + |T⟩).    (13)

4.1.5 Optional: Calculating Payout

While it is a simple matter to calculate the payout in this game, we include this section to demonstrate how payout is calculated in general and to show that the method applies even to this very simple game with a classical player. Referencing (7) and (8), let the possible states of the coin upon measurement be expressed as

π_{Q1⊕P⊕Q2} = { |H⟩⟨H| = [ 1 0 ; 0 0 ],   Q1 ⊕ P ⊕ Q2 = 0
              { |T⟩⟨T| = [ 0 0 ; 0 1 ],   Q1 ⊕ P ⊕ Q2 = 1    (14)

where Q1, Q2 are Q's first and second moves respectively and P is Picard's move, with Qi, P = 1 if the move is to flip and Qi, P = 0 otherwise. The weight coefficient A_{Q1 P Q2} = −1 + 2(Q1 ⊕ P ⊕ Q2) is given by the corresponding entry in Table 2. Then, Picard's payout is given by the trace of the matrix π_1 σ minus the trace of the matrix π_0 σ. Using a modification of (9), with all coefficients zero except A_H = −1, A_T = 1, we have Picard's payout function as

P_p = tr[π_1 σ] − tr[π_0 σ],    (15)

where σ, in preparation for the quantum case, is the matrix generated from a state vector (a, b)ᵀ as

σ = [ a²   ab ]
    [ ab   b² ].

In the mixed strategy case, σ = [ 0.25 0.25 ; 0.25 0.25 ], and tr[π_0 σ] = tr[π_1 σ] = 1/4, so P_p = 0 as expected.

4.1.6 Quantum Strategy, Q's Quantum Moves

If the coin is a quantum system, we represent it in Dirac notation as α|H⟩ + β|T⟩, where α, β are complex scalars. Then a quantum move is represented by a unitary matrix

U = [ a    b  ]
    [ b̄   −ā ]    (16)

where ā denotes the complex conjugate of a. If Q chooses both of his moves as U_1 = U_3 = (1/√2)[ 1 1 ; 1 −1 ], then the effect on |H⟩ is

U_1|H⟩ = (1/√2)[ 1 1 ; 1 −1 ](1, 0)ᵀ = (1/√2)(1, 1)ᵀ = (1/√2)(|H⟩ + |T⟩)    (17)

and

U_3U_1|H⟩ = (1/√2)[ 1 1 ; 1 −1 ] (1/√2)[ 1 1 ; 1 −1 ] (1, 0)ᵀ
          = (1/2)[ 1 1 ; 1 −1 ] (1, 1)ᵀ
          = (1/2)(2, 0)ᵀ
          = |H⟩.

So if the coin starts in the pure state |H⟩, we can see what happens if Q applies the unitaries above:

U_1|H⟩ = (1/√2)(1, 1)ᵀ    (Q's first move)

[pF + (1 − p)N]U_1|H⟩ = (1/√2) pF(1, 1)ᵀ + (1/√2)(1 − p)N(1, 1)ᵀ
                      = (1/√2) p (1, 1)ᵀ + (1/√2)(1 − p)(1, 1)ᵀ    (Picard's move)

U_3[pF + (1 − p)N]U_1|H⟩ = U_3 (1/√2) p (1, 1)ᵀ + U_3 (1/√2)(1 − p)(1, 1)ᵀ
                         = p|H⟩ + (1 − p)|H⟩
                         = |H⟩.    (Q's final move)

Thus, by playing U1 and U3, Q is able to ensure that the coin ends in the |H⟩ state every time.4
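The chain of moves above can be replayed numerically. This Python sketch (our own, not the thesis's Q# implementation) confirms that the coin returns to |H⟩ whichever classical move Picard plays:

```python
import math

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]   # Q's move U1 = U3 (a Hadamard)
F = [[0, 1], [1, 0]]    # flip
N = [[1, 0], [0, 1]]    # no flip

def apply(M, v):
    """Apply a 2x2 matrix to a 2-vector."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

finals = []
for picard_move in (F, N):           # either of Picard's classical choices
    coin = [1, 0]                    # start in |H>
    coin = apply(H, coin)            # Q's first move
    coin = apply(picard_move, coin)  # Picard's move: no effect on this state
    coin = apply(H, coin)            # Q's second move
    finals.append(coin)
# Both runs end in |H> = (1, 0), so Q wins every time.
```

Running a mixed classical move pF + (1 − p)N instead changes nothing, since both F and N fix the intermediate state (1/√2)(1, 1)ᵀ.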

4.1.7 Quantum Strategy Analysis

Q's first move puts the state of the coin into a simultaneous +1 eigenstate of both F and N, so nothing that Picard does to it changes its superposition. In the Bloch sphere model (see Appendix Section A.1), Q moves the qubit vector to lie along the x-axis. Picard's move set is restricted to rotations about the x-axis, so he just rotates the qubit vector about itself, essentially leaving it untouched. Q's second move is simply to apply the unitary again; since it is its own inverse, this recovers the |H⟩ state and guarantees a win.

4.2 Prisoner's Dilemma

The prisoner's dilemma is a classic two-person non-zero-sum game used to illustrate a situation in which the players are incentivized to choose a non-optimal strategy. The game is traditionally formulated as follows. Two players, Alice and Bob, are taken into custody by the police and separately questioned. Prior to being apprehended, they agreed to both keep quiet. During the questioning, they may choose to cooperate with the agreement and keep quiet, or they may choose to

4Meyer used density matrices and conjugation of states but here we use [6]’s simpler notation. We will demonstrate the density/trace notation in the next example.

defect, ratting out their partner. If both cooperate, the police can only get them on minor charges. If they both defect, they both go to prison. If one defects and the other cooperates, the defector gets off free and the cooperator goes to prison for a long time. The payout table is given in Table 3, with C being the strategy Cooperate and D being the strategy Defect.

           B
           C      D
  A   C   3,3    0,5
      D   5,0    1,1

Table 3: Payout table for the prisoner's dilemma, reproduced from [5]. Alice's moves are along the left side and Bob's are along the top.

In single-round play, there is a Nash equilibrium at the strategy DD (both defect), since at that point, neither Alice nor Bob can improve their payout by unilaterally altering their strategy. In a quantum version of this game, however, it is possible to defeat the dilemma, as demonstrated by [5].

4.2.1 The Quantum Prisoner's Dilemma

For this game, [5] propose the starting state

ρ = |ψ⟩⟨ψ|,   |ψ⟩ = (1/√2)[|00⟩ + i|11⟩],    (18)

which is a maximally entangled state. Players Alice and Bob apply their strategies, s_A and s_B, selected from the permissible quantum strategies in S_A = S_B = S. The final state σ at the end of the game is given by

σ = (s_A ⊗ s_B)(ρ) = (U_A ⊗ U_B) ρ (U_A ⊗ U_B)†.    (19)

Finally, the payoff functionals satisfy P_A(s_A, s_B) = P_B(s_B, s_A), so that the game is entirely symmetric.

4.2.2 Available Moves

The quantum move set for both players is given by the two-parameter unitary U(θ, φ) as in equation (3). The conventional moves C and D are included in this set, with corresponding unitaries U_C and U_D given by

C: U_C = U(0, 0) = [ 1 0 ; 0 1 ],   D: U_D = U(π, 0) = [ 0 1 ; −1 0 ].    (20)

4.2.3 Measurements

Taking a measurement of the end state σ in the CD basis amounts to calculating the trace, for example, of the matrix

π_CC σ = |ψ_CC⟩⟨ψ_CC| (U_A ⊗ U_B) |ψ_CC⟩⟨ψ_CC| (U_A ⊗ U_B)†,

as well as of π_CD σ, π_DC σ, and π_DD σ, which amounts to summing the squared absolute values of the coefficients of the respective terms of the end state σ of the game (an example of this calculation is provided in Section 4.2.7). In general, π_XY = |ψ_XY⟩⟨ψ_XY| where X, Y ∈ {C, D}, and each possible outcome state is

|ψ_CC⟩ = (1/√2)[|00⟩ + i|11⟩]
|ψ_CD⟩ = (1/√2)[|01⟩ − i|10⟩]
|ψ_DC⟩ = (1/√2)[|10⟩ − i|01⟩]
|ψ_DD⟩ = (1/√2)[|11⟩ + i|00⟩],

where π_CC = ρ.

4.2.4 Taking a Turn

Suppose Alice cooperates and Bob defects, so sA = C, sB = D. Then we can investigate the form of σ by

σ = (C ⊗ D)(ρ) = (U_C ⊗ U_D) |ψ⟩⟨ψ| (U_C ⊗ U_D)†.

With

U_C ⊗ U_D = [  0  1  0  0 ]
            [ −1  0  0  0 ]
            [  0  0  0  1 ]
            [  0  0 −1  0 ]

and |ψ⟩ = (1/√2, 0, 0, i/√2)ᵀ, we find (U_C ⊗ U_D)|ψ⟩ = −(1/√2)(|01⟩ − i|10⟩), so that

σ = (1/2)(|01⟩ − i|10⟩)(⟨01| + i⟨10|)
  = (1/2)[|01⟩⟨01| + i|01⟩⟨10| − i|10⟩⟨01| + |10⟩⟨10|]
  = [ 0    0     0    0 ]
    [ 0   1/2   i/2   0 ]
    [ 0  −i/2   1/2   0 ]
    [ 0    0     0    0 ]
  = |ψ_CD⟩⟨ψ_CD| = π_CD.

Thus, we see that applying strategies C, D to the starting state puts it in the expected end state, which is the basis state π_CD. If we were to measure this state in the CD basis, it would be found to be in the π_CD state 100% of the time.
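These matrix products are easy to mis-transcribe, so we verify them numerically. The following Python sketch (our own check, not part of the thesis) reproduces σ = π_CD:

```python
import math

def kron(A, B):
    """Tensor (Kronecker) product A (x) B."""
    n, m = len(B), len(B[0])
    return [[A[i // n][j // m] * B[i % n][j % m]
             for j in range(len(A[0]) * m)] for i in range(len(A) * n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[complex(A[j][i]).conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

UC = [[1, 0], [0, 1]]                                   # C, eq. (20)
UD = [[0, 1], [-1, 0]]                                  # D, eq. (20)
psi = [1 / math.sqrt(2), 0, 0, 1j / math.sqrt(2)]       # (|00> + i|11>)/sqrt(2)
rho = [[a * b.conjugate() for b in psi] for a in psi]   # rho = |psi><psi|

U = kron(UC, UD)
sigma = mat_mul(mat_mul(U, rho), dagger(U))             # eq. (19)

psi_cd = [0, 1 / math.sqrt(2), -1j / math.sqrt(2), 0]   # |psi_CD>
pi_cd = [[a * b.conjugate() for b in psi_cd] for a in psi_cd]
# sigma equals pi_CD entry for entry, so tr[pi_CD sigma] = 1.
```

The global phase picked up by (U_C ⊗ U_D)|ψ⟩ drops out of the density matrix, which is why σ and π_CD agree exactly.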

4.2.5 Calculating Payout

Payout is calculated according to

P_A(s_A, s_B) = A_CC tr[π_CC σ] + A_CD tr[π_CD σ] + A_DC tr[π_DC σ] + A_DD tr[π_DD σ]    (21)

P_B(s_A, s_B) = B_CC tr[π_CC σ] + B_CD tr[π_CD σ] + B_DC tr[π_DC σ] + B_DD tr[π_DD σ].    (22)

In our example case, all tr[π_{s_A,s_B} σ] = 0 except tr[π_CD σ]. Then

tr[π_CD σ] = tr[|ψ_CD⟩⟨ψ_CD| |ψ_CD⟩⟨ψ_CD|]
           = tr [ 0    0     0    0 ]
                [ 0   1/2   i/2   0 ]
                [ 0  −i/2   1/2   0 ]
                [ 0    0     0    0 ]
           = 1/2 + 1/2 = 1.    (23)

4.2.6 The General Case

Applying the general unitary from (3) for each player, we see

(U_A ⊗ U_B)|ψ_CC⟩ = (1/√2)[U_A|0⟩ ⊗ U_B|0⟩ + i U_A|1⟩ ⊗ U_B|1⟩]
= (1/√2)[(e^{iφ_A} cos(θ_A/2)|0⟩ − sin(θ_A/2)|1⟩) ⊗ (e^{iφ_B} cos(θ_B/2)|0⟩ − sin(θ_B/2)|1⟩)
        + i(sin(θ_A/2)|0⟩ + e^{−iφ_A} cos(θ_A/2)|1⟩) ⊗ (sin(θ_B/2)|0⟩ + e^{−iφ_B} cos(θ_B/2)|1⟩)]
= (1/√2)[(e^{i(φ_A+φ_B)} cos(θ_A/2) cos(θ_B/2) + i sin(θ_A/2) sin(θ_B/2)) |00⟩
        + (−e^{iφ_A} cos(θ_A/2) sin(θ_B/2) + i e^{−iφ_B} sin(θ_A/2) cos(θ_B/2)) |01⟩
        + (−e^{iφ_B} sin(θ_A/2) cos(θ_B/2) + i e^{−iφ_A} cos(θ_A/2) sin(θ_B/2)) |10⟩
        + (sin(θ_A/2) sin(θ_B/2) + i e^{−i(φ_A+φ_B)} cos(θ_A/2) cos(θ_B/2)) |11⟩].

4.2.7 Calculating the Trace

To get the payout function, we need to find tr[π_CC σ], which [5] provides. However, for clarity, we demonstrate one of these calculations here. Let σ = (U_A ⊗ U_B)|ψ_CC⟩⟨ψ_CC|(U_A ⊗ U_B)† = |ψ_AB⟩⟨ψ_AB|.

π_CC σ = |ψ_CC⟩⟨ψ_CC| |ψ_AB⟩⟨ψ_AB|
       = |ψ_CC⟩ (1/√2)[⟨00| − i⟨11|] |ψ_AB⟩ ⟨ψ_AB|.

Thus we only care about the |00⟩ and |11⟩ terms in |ψ_AB⟩:

π_CC σ = |ψ_CC⟩ (1/2)[(e^{i(φ_A+φ_B)} cos(θ_A/2) cos(θ_B/2) + i sin(θ_A/2) sin(θ_B/2))
                    − i(sin(θ_A/2) sin(θ_B/2) + i e^{−i(φ_A+φ_B)} cos(θ_A/2) cos(θ_B/2))] ⟨ψ_AB|
       = |ψ_CC⟩ (1/2)[e^{i(φ_A+φ_B)} + e^{−i(φ_A+φ_B)}] cos(θ_A/2) cos(θ_B/2) ⟨ψ_AB|
       = |ψ_CC⟩ cos(φ_A + φ_B) cos(θ_A/2) cos(θ_B/2) ⟨ψ_AB|.

Taking the trace, and noting that ⟨ψ_AB|ψ_CC⟩ is the conjugate of the (real) coefficient just computed,

tr[π_CC σ] = cos²(φ_A + φ_B) cos²(θ_A/2) cos²(θ_B/2).

4.2.8 General Payout

Repeating this process for the other three cases, we can calculate the general payout function as

P_A(θ_A, φ_A, θ_B, φ_B) = 3 |cos(φ_A + φ_B) cos(θ_A/2) cos(θ_B/2)|²
+ 5 |sin(φ_A) cos(θ_A/2) sin(θ_B/2) − cos(φ_B) cos(θ_B/2) sin(θ_A/2)|²
+ |sin(φ_A + φ_B) cos(θ_A/2) cos(θ_B/2) + sin(θ_A/2) sin(θ_B/2)|².    (24)

The plot of the general payout function is given in Figure 1. Graphing this payout function motivates [5] to investigate the quantum strategy

Q = U(0, π/2) = [ i   0 ]
                [ 0  −i ].

Focusing only on the values of θ and φ that correspond to the strategies C, D, and Q, we can extend the strategy table as Table 4.

           B
           Q      C      D
  A   Q   3,3    1,1    5,0
      C   1,1    3,3    0,5
      D   0,5    5,0    1,1

Table 4: The payout table for the quantum Prisoner's Dilemma. Alice's moves are along the left side, and Bob's are along the top.
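The entries of Table 4 follow from evaluating (24) at the (θ, φ) pairs of the three strategies. This quick numerical check is our own Python sketch, not part of the thesis:

```python
import math

def payout_A(tA, pA, tB, pB):
    """Alice's payout, eq. (24); each strategy is a (theta, phi) pair for U(theta, phi)."""
    cA, sA = math.cos(tA / 2), math.sin(tA / 2)
    cB, sB = math.cos(tB / 2), math.sin(tB / 2)
    return (3 * abs(math.cos(pA + pB) * cA * cB) ** 2
            + 5 * abs(math.sin(pA) * cA * sB - math.cos(pB) * cB * sA) ** 2
            + abs(math.sin(pA + pB) * cA * cB + sA * sB) ** 2)

pi = math.pi
C, D, Q = (0, 0), (pi, 0), (0, pi / 2)   # strategies encoded as (theta, phi)

# Row Q of Table 4: (Q,Q) -> 3, (Q,C) -> 1, (Q,D) -> 5 (up to floating-point error).
for bob, expected in ((Q, 3), (C, 1), (D, 5)):
    assert abs(payout_A(*Q, *bob) - expected) < 1e-9
```

The classical cells can be checked the same way, e.g. payout_A(*D, *D) ≈ 1 and payout_A(*C, *D) ≈ 0, matching Table 3.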

Figure 1: Using the same parameterization as [5], U_A = U(tπ, 0) for 0 ≤ t ≤ 1 and U_A = U(0, −tπ/2) for −1 ≤ t < 0; U_B has the same parameterization. The code to generate this graph is provided in Appendix C.

4.2.9 A New Nash Equilibrium

The result of this is that, when allowing quantum strategies U(θ, φ) with θ ∈ [0, π] and φ ∈ [0, π/2], the original Nash equilibrium is replaced by a new one that has a better payout for both parties. Originally, P_A(D, D) = 1 ≥ P_A(C, D) = 0. Now, however,

Q = U(0, π/2) = [ i   0 ]
                [ 0  −i ]

is an equilibrium that overcomes the dilemma and enables both Alice and Bob to optimize their payout.

4.3 Zero-Sum Simultaneous PQ Penny Flip

Here, we explore a modified version of the PQ Coin Flip game played by Captain Picard and Q in the paper by [7]. Revising the game in this way was motivated by a desire to explore a simple case of a zero-sum quantum game, as well as to apply some of the concepts in the previous sections to a new game. In this version of the game, both Picard and Q are capable of playing a quantum move.5

⁵ In-world, we can say that, after Picard figures out what Q was up to, he asks Engineering to build a quantum coin flipper, which they are able to do, and challenges Q to the game again. Q, curious about what Picard has up his sleeve this time, is unable to resist another game, especially when Picard offers even higher stakes.

This game, however, has a different payoff function. Both players are provided with a quantum coin. They each play their move on their own coin. The payoff is calculated based on the state of the coins after measurement, according to Table 5.

          Q: H      Q: T
P: H     (1,−1)    (−1,1)
P: T     (−1,1)    (1,−1)

Table 5: Payout table for the Simultaneous PQ Penny Flip. Picard’s moves are along the left side and Q’s are along the top.

Thus, Picard wants both coins to be in the same state, and Q wants them to be in different states. It is easy to see that in this discrete case, there is no Nash equilibrium. We use as our starting state the maximally entangled Bell state (1/√2)[|00⟩ + |11⟩]. Both Picard and Q may choose their move from (4). With this move set, we can consider the payout function, generated by investigating the trace of

|ψ_ss⟩⟨ψ_ss| (U_P ⊗ U_Q) |ψ_HH⟩⟨ψ_HH| (U_P ⊗ U_Q)†   (25)

where |ψ_ss⟩ is the state we wish to investigate, with s ∈ {H, T}. So, Picard's payoff function is given by

P_P(θ_P, φ_P, θ_Q, φ_Q) =
  [cos((φ_P + φ_Q)/2) (cos(θ_P/2) cos(θ_Q/2) + sin(θ_P/2) sin(θ_Q/2))]²
− [sin((φ_P + φ_Q)/2) (cos(θ_Q/2) sin(θ_P/2) + cos(θ_P/2) sin(θ_Q/2))]²
− [cos((φ_P + φ_Q)/2) (cos(θ_P/2) sin(θ_Q/2) − cos(θ_Q/2) sin(θ_P/2))]²
+ [sin((φ_P + φ_Q)/2) (sin(θ_P/2) sin(θ_Q/2) − cos(θ_P/2) cos(θ_Q/2))]²   (26)

which simplifies via trigonometric angle sum identities to

P_P(θ_P, φ_P, θ_Q, φ_Q) =
  cos²((φ_P + φ_Q)/2) cos²((θ_P − θ_Q)/2)
− sin²((φ_P + φ_Q)/2) sin²((θ_P + θ_Q)/2)
− cos²((φ_P + φ_Q)/2) sin²((θ_P − θ_Q)/2)
+ sin²((φ_P + φ_Q)/2) cos²((θ_P + θ_Q)/2),   (27)

which is the sum of the squared coefficients of the |00⟩ and |11⟩ terms, minus those of the |10⟩ and |01⟩ terms, in (U_P ⊗ U_Q)|ψ_HH⟩⟨ψ_HH|(U_P ⊗ U_Q)†.
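Picard's payoff function can be checked at a few key strategy pairs. A small sketch in Python (not the thesis's own languages; the function name is ours):

```python
from math import cos, sin, pi

def P_P(tP, pP, tQ, pQ):
    # Picard's payout, eq. (27): half-angle sums and differences.
    F = (pP + pQ) / 2
    return (cos(F)**2 * cos((tP - tQ)/2)**2
            - sin(F)**2 * sin((tP + tQ)/2)**2
            - cos(F)**2 * sin((tP - tQ)/2)**2
            + sin(F)**2 * cos((tP + tQ)/2)**2)

print(round(P_P(pi/2, 0, pi/2, 0), 6))     # matched angles, eq. (35) -> 1.0
print(round(P_P(pi/2, 0, pi/2, pi/4), 6))  # eq. (37) -> 0.707107 (= 1/sqrt 2)
```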

4.3.1 Classical Strategic Space

We can begin by investigating the case where both players are restricted to the classical move set U(θ, 0), θ ∈ {0, π}. That is, they can each choose to flip or not flip their coin. As expected, Picard, using a pure strategy in this restricted (essentially classical) setting, has the same payout as he did before.

PP ((0, 0)P , (0, 0)Q) = 1 (28)

PP ((π, 0)P , (π, 0)Q) = 1 (29)

PP ((π, 0)P , (0, 0)Q) = −1 (30)

PP ((0, 0)P , (π, 0)Q) = −1 (31)

4.3.2 Restricted Quantum Strategic Space

Then we can consider the case where both players may move U(θ, 0), θ ∈ [0, π]. Here, they can choose to put their respective coins into a superposition of heads and tails. In this case, Picard can guarantee himself a payout of at least 0 by playing θ = π/2.

PP ((π/2, 0)P , (0, 0)Q) = 0 (33)

PP ((π/2, 0)P , (π, 0)Q) = 0 (34)

PP ((π/2, 0)P , (π/2, 0)Q) = 1 (35)

What is interesting here is not just that Picard ensured a minimum payout of 0. He could have done that with a classical mixed strategy of flipping his coin 50% of the time. However, in this case, nothing Q can play will result in an outright loss for Picard: no matter what Q plays, Picard's payout remains ≥ 0. This can be seen graphically in Figure 2. This setup introduces two optimal strategies, with Picard's optimal move clearly U(π/2, 0), and Q's either U(π, 0) or U(0, 0). This is not, however, a Nash equilibrium. Also of interest is that now, if both Picard and Q play U(π/2, 0), Picard wins every time (in fact, any U_P(θ_P, 0) and U_Q(θ_Q, 0) with θ_P = θ_Q results in a win for Picard). This suggests that, in the quantum case, even the superpositions between heads and tails count as matching faces for the purposes of determining whether Picard wins. The closer the angles are to one another, the better the odds for Picard. This is somewhat surprising because what each player has essentially done by playing θ = π/2 is place their coin in an even superposition of heads and tails, so that when it is measured, it should be equally likely to come up heads or tails. The net effect, one might expect, is that both players are basically tossing their coins, so that any of the four heads-tails combinations should be equally likely. However, what we actually see, possibly due to entanglement, is that the "coin toss" of one coin is linked to the other.
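The claim that U(π/2, 0) shields Picard from any outright loss in the restricted space can be checked with a quick grid search over Q's moves. A sketch in Python against eq. (27) (helper name ours):

```python
from math import cos, sin, pi

def P_P(tP, pP, tQ, pQ):
    # Picard's payout from eq. (27).
    F = (pP + pQ) / 2
    return (cos(F)**2 * cos((tP - tQ)/2)**2 - sin(F)**2 * sin((tP + tQ)/2)**2
            - cos(F)**2 * sin((tP - tQ)/2)**2 + sin(F)**2 * cos((tP + tQ)/2)**2)

# Picard fixes U(pi/2, 0); Q sweeps the restricted move set U(theta, 0).
worst = min(P_P(pi/2, 0, k * pi / 100, 0) for k in range(101))
print(worst >= -1e-9)                    # True: Picard never takes a loss
print(round(P_P(pi/2, 0, pi/2, 0), 6))   # matched angles -> 1.0, a sure win
```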

4.3.3 The Full Bloch Sphere Strategic Space

Extending the strategy space to include U(θ, φ), θ ∈ [0, π], φ ∈ [0, π/2] gives Q the opportunity to increase his minimum payout. Picard's best move remains the same, confirmed by

Figure 2: Picard's payout function in the simultaneous PQ Penny Flip game. Note that, when U_P is halfway between T and H, Picard's payout is at least 0.

inspection of Figure 3, but Q can at least avoid an outright loss by playing

U_Q = U(π/2, π/4).

This increases Q's minimum payout to

min(P_Q) = −max(P_P) = −1/√2,

which puts him in a better position than before. This can be verified by checking some key strategies and inspecting Figure 3:

P_P((π/2, 0)_P, (0, 0)_Q) = P_P((π/2, 0)_P, (0, π/2)_Q) = P_P((π/2, 0)_P, (π/2, π/2)_Q) = 0   (36)

P_P((π/2, 0)_P, (π/2, π/4)_Q) = 1/√2   (37)

PP ((π/2, π/4)P , (π/2, π/4)Q) = 0 (38)

Since Picard will seek to maximize his minimum, he will choose a column in Tables 6 and 7 that has nothing less than 0 in his payout, if he can. The strategy U_P(π/2, π/4) does this. Q will likewise maximize his minimum by choosing row U_Q(π/2, π/2), which gives a new Nash equilibrium at U_P = U(π/2, π/4), U_Q = U(π/2, π/2). Tables 6 and 7 have the corresponding rows highlighted. This game is implemented in Q# in Appendix B.

Table 6: Payout table for the extended Simultaneous PQ penny flip with |ψ_HH⟩ = (1/√2)[|00⟩ + |11⟩]. Picard's moves are along the top and Q's moves are along the left side.

Table 7: Payout table for the extended Simultaneous PQ penny flip with |ψ_HH⟩ = (1/√2)[|00⟩ + |11⟩]. Picard's moves are along the top and Q's moves are along the left side.

Figure 3: Extending the Simultaneous PQ Penny Flip reveals a better strategy for Q. The parameterization is given by U_P(tπ, 0), t ∈ [0, 1], and U_P(−tπ, −tπ/2), t ∈ [−1, 0). The parameterization is the same for U_Q.

4.3.4 Simultaneous PQ Penny Flip with PD Starting State and Move Set

It is worth investigating how a different starting state and move set impact the results of this analysis. To that end, let ρ = |ψ_HH⟩⟨ψ_HH|, with |ψ_HH⟩ = (1/√2)[|00⟩ + i|11⟩], the same state used by [5] in their modeling of the Prisoner's Dilemma. Also, let the move set S_P = S_Q = S_A = S_B from (3), again the same move set used in [5]. Then Picard's payout function is given by:

P_P(θ_P, φ_P, θ_Q, φ_Q) = |cos(φ_P + φ_Q) cos(θ_P/2) cos(θ_Q/2)|²
                        − |sin(φ_P) cos(θ_P/2) sin(θ_Q/2) − cos(φ_Q) cos(θ_Q/2) sin(θ_P/2)|²
                        − |sin(φ_Q) cos(θ_Q/2) sin(θ_P/2) − cos(φ_P) cos(θ_P/2) sin(θ_Q/2)|²
                        + |sin(φ_P + φ_Q) cos(θ_P/2) cos(θ_Q/2) + sin(θ_P/2) sin(θ_Q/2)|²   (39)

In this case we can inspect the payout graphs in Figures 4 and 5 and the payout tables in Tables 8 and 9. In the restricted case, Picard's best move is still U_P = U(π/2, 0), since in that case his payout is a constant 0. Nothing Q can play will alter Picard's payout. This gives a Nash equilibrium with U_Q = U(π/2, 0), and may be interpreted as an actual "coin toss" since, regardless of what Q picks, Picard can delegate the choice of heads or tails to the quantum coin itself and still have a 50/50 chance of winning. This is in contrast to the previous case using |ψ_HH⟩ = (1/√2)[|00⟩ + |11⟩]. In the extended case, Picard's best move is M = U(π/2, π/4), even though U(π/2, 0) and U(π/2, π/2) still give the same minimum payouts as before. However, M gives a better

Figure 4: The payout function for the Simultaneous PQ Penny Flip in the restricted quantum space, using the starting state (1/√2)[|00⟩ + i|11⟩].

Figure 5: The payout function for the Simultaneous PQ Penny Flip in the extended quantum space, using the starting state (1/√2)[|00⟩ + i|11⟩].

Table 8: Payout table for the extended Simultaneous PQ penny flip using the starting state (1/√2)[|00⟩ + i|11⟩]. Picard's moves are along the top and Q's moves are along the left side.

Table 9: Payout table for the extended Simultaneous PQ penny flip using the starting state (1/√2)[|00⟩ + i|11⟩]. Picard's moves are along the top and Q's moves are along the left side.

maximum and, with Q's best move as O = U(π/2, 0), M actually has a positive payout for Picard: P_P((π/2, π/4)_P, (π/2, 0)_Q) = 1/√2. However, this is not a Nash equilibrium, as Q can improve his payout by deviating unilaterally, though Picard cannot. Thus, we can see that the starting state of the game has a significant effect on the equilibria and the value of the game.
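Both claims above (the flat payout along Q's restricted moves, and the 1/√2 payout of M against O) can be checked against eq. (39). A numeric sketch in Python (function name ours):

```python
from math import cos, sin, pi

def P_P(tP, pP, tQ, pQ):
    # Picard's payout, eq. (39), for the starting state (1/sqrt 2)[|00> + i|11>].
    return (abs(cos(pP + pQ) * cos(tP/2) * cos(tQ/2))**2
            - abs(sin(pP) * cos(tP/2) * sin(tQ/2) - cos(pQ) * cos(tQ/2) * sin(tP/2))**2
            - abs(sin(pQ) * cos(tQ/2) * sin(tP/2) - cos(pP) * cos(tP/2) * sin(tQ/2))**2
            + abs(sin(pP + pQ) * cos(tP/2) * cos(tQ/2) + sin(tP/2) * sin(tQ/2))**2)

# Restricted case: against U(pi/2, 0), every U(theta, 0) from Q pays Picard 0.
flat = all(abs(P_P(pi/2, 0, k * pi / 50, 0)) < 1e-12 for k in range(51))
print(flat)  # True

# Extended case: M = U(pi/2, pi/4) against O = U(pi/2, 0).
print(round(P_P(pi/2, pi/4, pi/2, 0), 6))  # -> 0.707107
```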

5 A Quantum Game Theoretic Approach to a Communication Problem

The GHZ paradox, as explained in [6], is an interesting illustration of a three-qubit system that seems to suggest a paradox when inappropriate assumptions are applied. It was also the topic of a follow-up assignment to [12] that illustrated the value of a game theoretic approach to solving a particular communication problem. In essence, a quantum strategy allows three players to coordinate their actions without being able to communicate once the "game" has begun. Thus, it is a good example of how conceiving of a problem as a quantum game can have productive results. The discussion that follows parallels the assignment [1]. The setup is as follows: Alice, Bob, and Charlie are given input bits a, b, and c respectively, with the condition that the binary sum of the bits is 0. That is, a ⊕ b ⊕ c = 0. Their goal is to output bits x, y, and z respectively such that the binary sum of the outputs is equal to the logical OR of the inputs: x ⊕ y ⊕ z = a ∨ b ∨ c. They can agree on a strategy in advance but cannot communicate after receiving their inputs. We can conceive of this as a quantum game using

Γ = (H, ρ, S_A, S_B, S_C, P_A, P_B, P_C)   (40)

where ρ = |ψ_ABC⟩ = (1/2)(|000⟩ − |011⟩ − |101⟩ − |110⟩) (that is, a superposition of all states such that the binary sum of the three bits is 0), S_A = S_B = S_C = { U(θ, φ) | θ ∈ [0, π], φ ∈ [0, π/2] } as in (3), and P_A = P_B = P_C = 1 when x ⊕ y ⊕ z = a ∨ b ∨ c, and 0 otherwise.

5.1 Classical Strategies Fail

In a classical setting, there are 4 possible inputs that meet the criteria. They are shown, along with the corresponding desired outputs, in the table below.

      abc   a ∨ b ∨ c   xyz
(0)   000       0       110, 101, 011, 000
(1)   011       1       111, 100, 001, 010
(2)   101       1       111, 100, 001, 010   (41)
(3)   110       1       111, 100, 001, 010

No particular strategy for a given player guarantees a win 100% of the time. This is because any given input bit for a player (0 or 1) appears in two of the possible input triples. So, if Alice receives a 0, she does not know whether the game is in state (0) or (1). Similarly for receiving a 1 and game states (2) and (3). The same situation is true for Bob and Charlie. Unfortunately, the appropriate response differs from state to state. Suppose Alice's strategy is to output what she receives, and she receives a 0 and outputs a 0.

If the game were in state (0), Bob and Charlie need to output both 1 or both 0. However, if the game were in state (1), one should output a 0 and the other a 1. If they have also adopted the same strategy as Alice, they will only win in state (0). Otherwise, they will lose. If Bob, say, had adopted the opposite strategy (output the opposite of his input), they would win in state (1) but lose in state (0). Thus, we can generate a sort of payout table. Along the top, we list the players who choose to simply transmit their input bit as their output. If a player is not listed, they flip their bit.

Gamestate   ABC   AB   BC   AC   A   B   C   None
(0)          1     0    0    0   1   1   1    0
(1)          0     1    1    1   0   0   0    1
(2)          0     1    1    1   0   0   0    1      (42)
(3)          0     1    1    1   0   0   0    1

Some strategies do better than others. A strategy in which exactly two players, or no players, simply transmit their input has an expected payout of 3/4. Strategies in which all three players, or a single player, do so have an expected payout of 1/4. Thus, Alice, Bob, and Charlie should either choose one person to flip their input bit, or should elect to all flip their input bits. Mixing cannot beat this: if, for instance, two players never flip their bit and the third flips with probability p, the expected payout is (1 − p)/4 + 3p/4, which is maximized at p = 1 and recovers the expected payout of 0.75.
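The 3/4 ceiling for classical play can also be brute-forced. A sketch in Python (the encoding is ours): each player's deterministic strategy is a map from their input bit to an output bit, giving 4 maps per player and 64 joint strategy profiles:

```python
from itertools import product

inputs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # all triples with a^b^c = 0

def wins(strategy, abc):
    # strategy: per-player map bit -> bit, encoded as (output_on_0, output_on_1)
    x, y, z = (strategy[i][bit] for i, bit in enumerate(abc))
    return (x ^ y ^ z) == (abc[0] | abc[1] | abc[2])

maps = list(product((0, 1), repeat=2))          # 4 deterministic maps per player
best = max(sum(wins(s, abc) for abc in inputs)
           for s in product(maps, repeat=3))    # 64 joint strategies
print(best)  # -> 3: no classical strategy wins all 4 game states
```

(XOR-ing the four winning conditions shows why 4/4 is impossible: every output value appears twice on the left, giving 0, while the right side sums to 1.)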

5.2 Quantum Strategy

Alice, Bob, and Charlie can guarantee a win if they are playing a quantum game. If they share the entangled state

|ψ_ABC⟩ = (1/2)(|000⟩ − |011⟩ − |101⟩ − |110⟩),   (43)

they should adopt the strategy of applying

H = (1/√2) ( 1   1 )
           ( 1  −1 )

to the qubits corresponding to the players who received an input of 1, then simply measuring the state and outputting the bit they measure. So, if Alice gets 0, and Bob and Charlie get 1, they do

(I_0 ⊗ H_1 ⊗ H_2)|ψ_ABC⟩ = (Z_1 X_2)(1/2)(|000⟩ − |011⟩ − |101⟩ − |110⟩)
                         = (1/2)(−|001⟩ − |010⟩ + |100⟩ − |111⟩).   (44)

(The proof that (I_0 ⊗ H_1 ⊗ H_2)|ψ_ABC⟩ = (Z_1 X_2)|ψ_ABC⟩ is in [6, p. 158].) Any Z-basis measurement at this point will return a viable state (based on the table above) for the players, and since the state is entangled, if Alice measured one of the possible outcomes, Bob and Charlie would measure the same one, and outputting the indicated bit would then result in a success. Similarly, we can check the other possible inputs.

000: (I_0 ⊗ I_1 ⊗ I_2)|ψ_ABC⟩ = (1/2)(|000⟩ − |011⟩ − |101⟩ − |110⟩)

(These are the viable output combinations for (0).)

101: (H_0 ⊗ I_1 ⊗ H_2)|ψ_ABC⟩ = (Z_0 X_2)(1/2)(|000⟩ − |011⟩ − |101⟩ − |110⟩)
                              = (1/2)(−|001⟩ + |010⟩ − |100⟩ − |111⟩),   (45)

110: (H_0 ⊗ H_1 ⊗ I_2)|ψ_ABC⟩ = (Z_0 X_1)(1/2)(|000⟩ − |011⟩ − |101⟩ − |110⟩)
                              = (1/2)(−|010⟩ + |001⟩ − |111⟩ − |100⟩).   (46)

Thus, we can see that, when provided with a quantum system, Alice, Bob, and Charlie can coordinate their strategies by simply inspecting their shared state. This allows them to win the game with 100% probability, thus dominating any classical strategy.
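The winning strategy can be simulated directly on the eight-dimensional state vector. A sketch in Python (helper names ours), applying H to each player holding a 1 and checking that every outcome in the support of the resulting state has the required parity:

```python
from math import sqrt

def apply_H(amp, mask):
    # Apply a Hadamard to the qubit selected by the bit mask.
    out = amp[:]
    for s in range(8):
        if not s & mask:
            a, b = amp[s], amp[s | mask]
            out[s], out[s | mask] = (a + b) / sqrt(2), (a - b) / sqrt(2)
    return out

def strategy_wins(abc):
    # Shared state (1/2)(|000> - |011> - |101> - |110>); basis index = 4a + 2b + c.
    amp = [0.0] * 8
    for s, sign in ((0b000, 1), (0b011, -1), (0b101, -1), (0b110, -1)):
        amp[s] = sign / 2
    for i, bit in enumerate(abc):
        if bit:                      # players holding a 1 apply H to their qubit
            amp = apply_H(amp, 4 >> i)
    goal = abc[0] | abc[1] | abc[2]
    # Every measurable outcome must have output parity equal to the OR of the inputs.
    return all(abs(a) < 1e-12 or bin(s).count("1") % 2 == goal
               for s, a in enumerate(amp))

print(all(strategy_wins(abc) for abc in [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]))  # True
```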

6 Conclusion and Extensions

The extension of game theory into a quantum space provides a rich field for exploring new solutions to old problems, as well as for introducing some new games that could not have been played in a classical space. As [7] shows, a quantum coin flip game is not necessarily a game of chance, especially when only one of the players has the ability to make quantum moves. We can even find an optimal solution to the Prisoner's Dilemma, as [5] shows, which has significant implications for the problem of coordinating actions when communication is not possible. This is also demonstrated in [1]'s solution to the GHZ paradox: with a shared quantum system, it is possible for three actors to coordinate their actions without communicating. Thus, we are interested in continuing to search for additional "problems of coordination" that may have been written off as impossible under classical conditions, but which may be easily overcome with quantum systems. As mentioned above, extending games into quantum spaces provides strategies that often overcome inefficiencies that were insurmountable in the classical setting [5]. These results are not always intuitive, as in the Simultaneous PQ Penny Flip, where Picard can win if both he and Q place their coins in an even superposition of heads and tails. This also has implications for what sorts of meaning can be conveyed by quantum systems, which we are interested in exploring further. Another area of further investigation involves the extent to which the starting state of a quantum game influences the optimal strategies. As shown above, playing the Simultaneous PQ Penny Flip with ρ = (1/√2)[|00⟩ + |11⟩] leads to a different equilibrium than playing with the state ρ = (1/√2)[|00⟩ + i|11⟩] used by [5]. Thus, it would be interesting to explore the implications of different starting states for games, and to what extent a given starting state fails or succeeds in supporting the classical dynamics of the game in the first place.
Also, as we expanded the strategic space, new equilibria were revealed, and we are interested in further research to determine the strategic space which contains the most general solution to any particular game. Game theory provides a framework for optimizing decisions and strategies, so it can serve as a tool for neural networks to compete against one another for the purpose of finding an optimal solution to a problem. However, it is not always obvious where the optimum

solutions are to be found in a game. Using frameworks like Q#, classical neural nets can optimize quantum strategies, and it would be interesting to use game theory and competitive neural networks to develop solutions to both quantum and classical problems.

7 Acknowledgements

I would like to thank my honors advisor, Professor Divya Vernerey, for guiding and encour- aging me as I undertook this project on such a short time frame, and for reminding me that math is supposed to be about exploring interesting ideas, not just feeling stupid all the time (go figure). I would also like to thank the other two members of the committee, Professor Graeme Baird Smith, for teaching an Introduction to Quantum Computer Science class that sustained the wonder and magic of the topic, and Professor Nathaniel Thiem, for making it possible for me to undertake this project near the end of my degree. I would also like to thank Professor Katherine Stange for introducing me to quantum computing in the first place, as well as reminding me of the ways that mathematics can surprise and delight. Her office hours were a reminder to me of the beauty of math and the beauty revealed in nature and human endeavor by knowing the math. I also want to thank my first college math teacher, Professor Kenneth Monks of Front Range Community College, for showing me how wonderfully rewarding it is to wade through a tangle of algebra to find a concise and meaningful result on the other side. He taught me how to approach a seemingly impossible question with a sense of optimism, a skill that has proven essential not just to solve problems, but also to maintain my sanity. I wouldn’t be here if not for him. And of course, I would like to thank my wife, Anna Pusack, for having the courage to undertake her own journey into science and math and sharing in my struggles to understand this new way of seeing the world. She is my partner not just in life, but on this specific journey of returning to school to study a field that was completely outside anything either of us had done before. It’s been an exciting and sometimes downright terrifying journey, but there’s nobody I’d rather share it with.

A Fundamentals of Quantum Computing

This appendix is meant to provide a very short introduction to quantum computing for the purposes of understanding the material presented above. It is not intended to be even a minimally comprehensive introduction to quantum computing, and assumes some familiarity with classical computing and basic linear algebra.

A.1 Qubits

In classical computing, we use bits to store information. A bit is any physical system for which we can meaningfully distinguish two different states, called 0 and 1. In most modern computers, bits are transistors or other electronics, so that when the physical system is "off", it is in the 0 state, and when it is "on", it is in the 1 state. That said, any physical system that can take two distinguishable states can be used as a bit. A quantum bit, or qubit, is similarly any physical system that has two distinguishable states, but which also has the properties of being a quantum system (which means it is either a very small particle or can otherwise be isolated from physical interactions with its environment). It doesn't matter what the physical system is, and there are a variety of implementations under testing at the moment. An example of a quantum system that could be thought of as a qubit is an electron in a spin-up or spin-down state. Generally when we are talking about qubits, we use Dirac notation (for reasons that will become clear in a moment), so that a qubit in a 0 state is denoted |0⟩ and one in a 1 state is denoted |1⟩. It is useful to think of qubits as unit vectors, since the way that computation is conceptualized with qubits involves operations that are easy to think about as rotations. Thus, a qubit is thought of as a vector in a two-dimensional vector space over the complex numbers [6] (which permits thinking of it in a three-dimensional sphere, where one of the dimensions is the imaginary part), and the Bloch sphere is a useful mental model [3]. The |0⟩ qubit points to the north pole of the sphere, and the |1⟩ qubit points to the south pole. Using this model, measurement of a qubit amounts to projecting it onto the vertical z-axis. If the qubit is already on the z-axis, as it would be in the |0⟩ or |1⟩ states, then the result is what you would expect: measuring |0⟩ always returns |0⟩ and measuring |1⟩ always returns |1⟩.
Thus, for example, flipping a coin would be equivalent to rotating the qubit representing the coin from the north pole (+1 on the z-axis) to the south pole (−1 on the z-axis). Of course, more interesting plays arise from intermediary moves. Qubits can, however, be placed in states that do not correspond to either of the two states |0⟩ or |1⟩. To understand this, it helps to dig a little into what the notation means.

A.2 Dirac Notation and Vector Notation

The Dirac notation is useful because it condenses vector notation. So,

|0⟩ = ( 1 )    |1⟩ = ( 0 )
      ( 0 )          ( 1 )    (47)

Thus, it isn't too hard to think about what the qubit (1/√2)[|0⟩ + |1⟩] means:

(1/√2)[|0⟩ + |1⟩] = (1/√2) ( 1 )
                           ( 1 )    (48)

Figure 6: The Bloch sphere with rotation angles marked. [11]

Figure 7: The Bloch sphere conceptualization of a qubit. [3]

In this paper, I have not named this state, but it comes up often enough that it has a label: |+⟩. The states |0⟩ and |1⟩ are called basis states. From Figure 7 we can see that this is the qubit whose state can be thought of as lying along the x-axis in the positive direction. Since qubits are always unit vectors, we need to add the coefficient shown. Now, if we were to project this onto the z-axis, keeping in mind that qubits are unit vectors, it is not immediately clear whether it would be |0⟩ or |1⟩, which is where quantum computing gets some of its advantages. If we measured |+⟩ by projecting it onto the z-axis, it has a 50% chance of being measured as |0⟩ and a 50% chance of being measured as |1⟩. Why that is the case is the realm of quantum mechanics, but this model matches what we observe in the quantum systems used for computing. Since |+⟩ is exactly halfway between |0⟩ and |1⟩, this result makes sense, and similarly, a qubit in a state that was more inclined towards the north or south pole would be more likely to be measured as |0⟩ or |1⟩ respectively.

A.3 The Math of Measurement

The simple rule of thumb for determining the probability of measuring a qubit in one of the basis states is to take the squared absolute value of the coefficient of each basis state. So for

|+⟩ = (1/√2)|0⟩ + (1/√2)|1⟩,   (49)

the probability of measuring |+⟩ as |0⟩ is |1/√2|² = 1/2. The same is true for the probability of measuring it in |1⟩. In this paper, we have used a more mechanical method of determining probability, which requires introducing the other side of Dirac notation: ⟨x|, the conjugate transpose of the state |x⟩. In simple terms, if |0⟩ = (1, 0)ᵀ, then ⟨0| = (1 0), so that

⟨0|0⟩ = (1 0)(1, 0)ᵀ = 1   (50)

while

⟨0|1⟩ = (1 0)(0, 1)ᵀ = 0.   (51)

Thus, ⟨x|y⟩ is equivalent to taking the inner product of the vectors represented by |x⟩ and |y⟩. Since these can be complex-valued vectors, ⟨x| also involves taking the complex conjugate of the terms in |x⟩. Most of the calculations in this paper involved real-valued vectors, however, so we won't spend much more time on this. The last concept to cover is what happens when we take |x⟩⟨y|, which is an operator on the Hilbert space of the system. An example should suffice. Reverting to vector notation:

|0⟩⟨0| = ( 1 ) (1 0) = ( 1 0 )
         ( 0 )         ( 0 0 )    (52)

Most of the papers cited connected this operator with the corresponding state, so that a qubit in the |0⟩ state is denoted as being in the ρ = |0⟩⟨0| state. The distinction does not

have a huge impact on our results, but it is useful because it means we don't need to rely on rules of thumb to find the probabilities: we can actually calculate them using matrix multiplication and matrix traces (the sum of the diagonal entries of a matrix). To return to the example in (49), the probability of measuring |+⟩ in the |0⟩ state would be calculated as

tr[|0⟩⟨0| |+⟩⟨+|] = tr[|0⟩⟨0| (1/√2)(|0⟩ + |1⟩)⟨+|]
                 = tr[(1/√2)|0⟩(⟨0|0⟩ + ⟨0|1⟩)⟨+|]
                 = tr[(1/√2)|0⟩(1 + 0)⟨+|]
                 = tr[(1/√2)|0⟩⟨+|]
                 = tr[(1/√2)(1, 0)ᵀ · (1/√2)(1 1)]
                 = tr[(1/2)((1, 1), (0, 0))]
                 = 1/2.

Thus, we get the same result. In this case, the result is not very interesting, but this process comes in handy for more complex states, and since it is purely mechanical, it can be done by a computer.
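Because the procedure is mechanical, it is easy to hand to a computer. A sketch with NumPy (assuming it is available; array names are ours):

```python
import numpy as np

ket0 = np.array([[1.0], [0.0]])              # |0>
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)  # |+>

proj0 = ket0 @ ket0.T        # |0><0| (real entries, so transpose = dagger)
rho_plus = plus @ plus.T     # |+><+|
prob = np.trace(proj0 @ rho_plus)   # tr[|0><0| |+><+|]
print(round(float(prob), 6))        # -> 0.5
```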

A.4 Operations on Qubits

Just as we can operate on classical bits, we can also operate on qubits. For example, the NOT operation, which takes the classical bit 0 to 1 and 1 to 0, has a correlate in quantum computing, such that NOT|0⟩ = |1⟩ and NOT|1⟩ = |0⟩. How these gates are actually built or implemented does not concern us. Mathematically, an operation on a qubit is represented by a unitary matrix (a unitary matrix is one whose inverse is the conjugate transpose of that matrix; the real symmetric unitaries below are therefore their own inverses). So, the action of the NOT operation can be modeled thus

NOT|0⟩ = ( 0 1 ) ( 1 ) = ( 0 ) = |1⟩   (53)
         ( 1 0 ) ( 0 )   ( 1 )

as we expect. Another operation of note in this text is the Hadamard:

H = (1/√2) ( 1   1 )
           ( 1  −1 )

The action of this on |0⟩ is notable:

H|0⟩ = (1/√2) ( 1   1 ) ( 1 ) = (1/√2) ( 1 ) = |+⟩   (54)
              ( 1  −1 ) ( 0 )          ( 1 )

H|+⟩ = (1/√2) ( 1   1 ) (1/√2) ( 1 ) = ( 1 ) = |0⟩   (55)
              ( 1  −1 )        ( 1 )   ( 0 )
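These gate actions can be reproduced with NumPy (a sketch; the array names are ours):

```python
import numpy as np

NOT = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

print(np.allclose(NOT @ ket0, [0, 1]))  # True: NOT|0> = |1>, eq. (53)
print(np.allclose(H @ ket0, plus))      # True: H|0> = |+>, eq. (54)
print(np.allclose(H @ plus, ket0))      # True: H|+> = |0>, eq. (55)
```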

A.5 Tensor Products and Qubit Registers

When we are dealing with more than a single qubit, we need operations that allow us to work with specific qubits in a register. The two-qubit register |00⟩ has the vector representation

|00⟩ = (1, 0, 0, 0)ᵀ.   (56)

However, thus far we have only introduced 2 × 2 unitary operators that operate on a single qubit. Basic linear algebra will tell us that we cannot use such an operator on |00⟩, since the dimensions of the related objects do not align. We use the tensor product operation to address this problem. The tensor product is defined in the following way when operating on two vectors:

(a, b)ᵀ ⊗ (c, d)ᵀ = (a·(c, d)ᵀ, b·(c, d)ᵀ)ᵀ = (ac, ad, bc, bd)ᵀ.   (57)

This gives us a way to formally represent the two-qubit register:

|0⟩ ⊗ |0⟩ = (1, 0)ᵀ ⊗ (1, 0)ᵀ = (1, 0, 0, 0)ᵀ = |00⟩.   (58)

To apply an operation to a single qubit in a register, we can use the tensor product on matrices. For example, the bit-flip operation X is defined by X|0⟩ = |1⟩ and X|1⟩ = |0⟩. So applying X to only the first qubit of a two-qubit register, denoted X_0, yields X_0|00⟩ = |01⟩. To see this in action, we use tensor products.

1  0 1 0 1 1 1 0 1 0 0 1 0 1 0 1 0 0 (I ⊗ X ) |00i = ⊗   =     0 0 1 1 0 0  0 1 0 1 0   0 1    0 1 0 1 0 0 0 1 0 0 1 0 1 0 0 0 0 1 =     =   = |01i . (59) 0 0 0 1 0 0 0 0 1 0 0 0

Thus, the operation X_1|00⟩ is carried out by (X ⊗ I)|00⟩ = |10⟩, and (X ⊗ X)|00⟩ = |11⟩, as expected.
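The tensor-product bookkeeping above maps directly onto NumPy's kron; a short sketch (names ours):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
ket00 = np.kron([1, 0], [1, 0])   # |00> = [1, 0, 0, 0]

print(np.kron(I, X) @ ket00)      # |01>: flips the lower-order qubit
print(np.kron(X, I) @ ket00)      # |10>: flips the higher-order qubit
print(np.kron(X, X) @ ket00)      # |11>: flips both
```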

B Q# Implementation of Simultaneous PQ Penny Flip

The following program is a Q# implementation of the Simultaneous PQ Penny Flip analyzed in section 4.3. It was written using the Q# .NET SDK, and as such, expects to run in a .NET environment from the command line with

dotnet run --picard-theta [THETA_P] --picard-phi [PHI_P] --q-theta [THETA_Q] --q-phi [PHI_Q]

For more information on setting up and running Q# programs, see Microsoft's Quantum Documentation. The code can be found at github.com/khaledallen/SimPQPennyFlip. The output of the code is the result of 100 rounds of the game (printing the measured state of the two coins), followed by Picard's win count (wins minus losses), followed by Picard's payout for the given move (win count divided by the number of games played). Thus, you can explore how different θp, φp, θq, φq influence the payout.

namespace PQPennyFlip {

    open Microsoft.Quantum.Convert;
    open Microsoft.Quantum.Math;
    open Microsoft.Quantum.Measurement;
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;

    operation PrepareBellState(pair : Qubit[]) : Unit {
        H(pair[0]);
        CNOT(pair[0], pair[1]);
    }

    ///
    /// Summary: The Simultaneous PQPennyFlip Game
    /// Theta and Phi values for both Picard and Q
    ///
    operation SimultaneousPQPennyFlip(
        picard_theta : Double,
        picard_phi : Double,
        q_theta : Double,
        q_phi : Double
    ) : Bool {
        using (coins = Qubit[2]) {
            PrepareBellState(coins);
            let picardMoveFirst = Ry(picard_theta, _);
            let picardMoveSecond = Rz(picard_phi, _);
            let qMoveFirst = Ry(q_theta, _);
            let qMoveSecond = Rz(q_phi, _);

            // U(theta, phi) = Rz(phi).Ry(theta): apply the Ry rotation first,
            // then the Rz rotation, on each player's coin.
            picardMoveFirst(coins[0]);
            picardMoveSecond(coins[0]);
            qMoveFirst(coins[1]);
            qMoveSecond(coins[1]);

            let pResult = M(coins[0]);
            let qResult = M(coins[1]);

            Message($"[{pResult}, {qResult}]");

            ResetAll(coins);
            return pResult == qResult;

        }
    }

    @EntryPoint()
    operation RunManyGames(
        picard_theta : Double,
        picard_phi : Double,
        q_theta : Double,
        q_phi : Double
    ) : Unit {
        mutable PicardWinCount = 0;
        // Play 100 rounds, counting Picard's wins minus his losses.
        for (index in 0 .. 99) {
            let win = SimultaneousPQPennyFlip(picard_theta, picard_phi, q_theta, q_phi);
            if (win) { set PicardWinCount += 1; }
            else { set PicardWinCount += -1; }
        }
        let winCountDouble = IntAsDouble(PicardWinCount);
        Message($"Picard wins: {PicardWinCount}");
        Message($"Picard Payout: {winCountDouble/100.0}");
    }
}

C Mathematica Notebooks for Calculations and Payout Graphs

Simultaneous PQ Penny Flip Quantum Game Using a Bell State and Rotations as Moves

Preliminaries

Ry and Rz are rotation matrices used to construct the move set, U. H is the standard Hadamard matrix. CNOT01 is the controlled-NOT gate, with bit 0 as the control and bit 1 as the target bit.

Ry[θ_] := {{Cos[θ/2], -Sin[θ/2]}, {Sin[θ/2], Cos[θ/2]}}
Rz[ϕ_] := {{Exp[-I*ϕ/2], 0}, {0, Exp[I*ϕ/2]}}
U[θ_, ϕ_] := Rz[ϕ].Ry[θ]
H = (1/Sqrt[2]) {{1, 1}, {1, -1}}
CNOT01 = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 0, 1}, {0, 0, 1, 0}}
Zero = {{1}, {0}}
One = {{0}, {1}}

������ {{1, 0, 0, 0},{0, 1, 0, 0},{0, 0, 0, 1},{0, 0, 1, 0}}

������ {{1},{0}}

������ {{0},{1}}

The computational basis state is a two-qubit state with both qubits set to 0. The starting state ρ is the first Bell state, constructed by applying a Hadamard to bit 0 and then applying CNOT01.

The general end state σ is formed by applying U_P ⊗ U_Q to ρ.


CompBasis = KroneckerProduct[Zero, Zero]
ρ = CNOT01.KroneckerProduct[H, IdentityMatrix[2]].CompBasis
σ = KroneckerProduct[U[θp, ϕp], U[θq, ϕq]].ρ

������ {{1},{0},{0},{0}} 1 1 ������  ,{0},{0},  2 2

ⅈ ϕp ⅈ ϕq ⅈ ϕp ⅈ ϕq - - θp θq - - θp θq ⅇ 2 2 Cos  Cos  ⅇ 2 2 Sin  Sin  2 2 2 2 ������  + , 2 2

ⅈ ϕp ⅈ ϕq ⅈ ϕp ⅈ ϕq - + θq θp - + θp θq ⅇ 2 2 Cos  Sin  ⅇ 2 2 Cos  Sin  - 2 2 + 2 2 , 2 2

ⅈ ϕp ⅈ ϕq ⅈ ϕp ⅈ ϕq - θq θp - θp θq ⅇ 2 2 Cos  Sin  ⅇ 2 2 Cos  Sin   2 2 - 2 2 , 2 2

ⅈ ϕp ⅈ ϕq ⅈ ϕp ⅈ ϕq + θp θq + θp θq ⅇ 2 2 Cos  Cos  ⅇ 2 2 Sin  Sin   2 2 + 2 2  2 2 Payoff Function The payoff function is calculated by investigating the inner product of the end state with the desired

state to compare with: ie 〈ψHH || σ〉. Mathematica doesn’t evaluate this in a nice way, but a hand calcula- tion gives a cleaner formula. Or you can just use this one.

������ 1 2 , 0, 0, 1 2.σ

ⅈ ϕp ⅈ ϕq ⅈ ϕp ⅈ ϕq ⅈ ϕp ⅈ ϕq ⅈ ϕp ⅈ ϕq - - θp θq - - θp θq + θp θq + θp θq ⅇ 2 2 Cos  Cos  ⅇ 2 2 Sin  Sin  ⅇ 2 2 Cos  Cos  ⅇ 2 2 Sin  Sin  2 2 + 2 2 2 2 + 2 2 2 2 2 2 ������  +  2 2

In[]:=
PicardPayoff[θp_, ϕp_, θq_, ϕq_] := N[
  Abs[Cos[(ϕp + ϕq)/2] (Cos[θp/2] Cos[θq/2] + Sin[θp/2] Sin[θq/2])]^2
  - Abs[Sin[(ϕp + ϕq)/2] (Cos[θq/2] Sin[θp/2] + Cos[θp/2] Sin[θq/2])]^2
  - Abs[Cos[(ϕp + ϕq)/2] (Cos[θp/2] Sin[θq/2] - Cos[θq/2] Sin[θp/2])]^2
  + Abs[Sin[(ϕp + ϕq)/2] (Sin[θp/2] Sin[θq/2] - Cos[θp/2] Cos[θq/2])]^2]

Payoff Function Graphs


In[]:=
h[t_, r_] := PicardPayoff[t*Pi, 0, r*Pi, 0]
Plot3D[{h[t, r]}, {t, 0, 1}, {r, 0, 1},
  PerformanceGoal → "Quality", Exclusions → None,
  Mesh → {20, 20, {0, Thick}}, PlotLegends → Automatic,
  ColorFunction → "Rainbow",
  LabelStyle → {FontSize → Larger, FontFamily → "Times",
    FontWeight → "Bold", FontSlant → "Italic"},
  AxesLabel → {Style["UP", Large], Style["UQ", Large]},
  Ticks -> {{{-1, "Q"}, {0, "H"}, {1, "T"}},
    {{-1, "Q"}, {0, "H"}, {1, "T"}}, {1, 2, 3, 4, 5}, Automatic}]

[Plot3D output: surface of Picard's payoff h[t, r] over 0 ≤ t, r ≤ 1]


In[]:=
g[t_, r_] := Which[
  0 ≤ t ≤ 1 && 0 ≤ r ≤ 1, PicardPayoff[t*Pi, 0, r*Pi, 0],
  0 ≤ t ≤ 1 && -1 ≤ r ≤ 0, PicardPayoff[t*Pi, 0, -r*Pi, -r*Pi/2],
  -1 ≤ t ≤ 0 && 0 ≤ r ≤ 1, PicardPayoff[-t*Pi, -t*Pi/2, r*Pi, 0],
  -1 ≤ t ≤ 0 && -1 ≤ r ≤ 0, PicardPayoff[-t*Pi, -t*Pi/2, -r*Pi, -r*Pi/2]]
Plot3D[{g[t, r]}, {t, -1, 1}, {r, -1, 1},
  PerformanceGoal → "Quality", Exclusions → None,
  Mesh → {20, 20, {0, Thick}}, PlotLegends → Automatic,
  ColorFunction → "Rainbow",
  LabelStyle → {FontSize → Larger, FontFamily → "Times",
    FontWeight → "Bold", FontSlant → "Italic"},
  AxesLabel → {Style["UP", Large], Style["UQ", Large]},
  Ticks -> {{{0.5, "O"}, {-0.5, "M"}, {0, "H"}, {1, "T"}},
    {{0.5, "O"}, {-0.5, "M"}, {0, "H"}, {1, "T"}}, {1, 2, 3, 4, 5}, Automatic}]
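The piecewise function g maps the plot square [−1, 1]² onto strategies: positive coordinates give pure Ry rotations, while negative coordinates add an Rz phase proportional to the rotation angle. A Python transcription (ours, not part of the notebook) of PicardPayoff and g:

```python
import math

def picard_payoff(tp, pp, tq, pq):
    # transcription of PicardPayoff: Picard's expected payoff
    cp, sp = math.cos(tp / 2), math.sin(tp / 2)
    cq, sq = math.cos(tq / 2), math.sin(tq / 2)
    cf, sf = math.cos((pp + pq) / 2), math.sin((pp + pq) / 2)
    return (abs(cf * (cp * cq + sp * sq)) ** 2
            - abs(sf * (cq * sp + cp * sq)) ** 2
            - abs(cf * (cp * sq - cq * sp)) ** 2
            + abs(sf * (sp * sq - cp * cq)) ** 2)

def g(t, r):
    # piecewise map from plot coordinates in [-1, 1]^2 to strategies:
    # positive axis -> pure Ry rotation; negative axis -> Ry plus proportional Rz phase
    if t >= 0 and r >= 0:
        return picard_payoff(t * math.pi, 0, r * math.pi, 0)
    if t >= 0 and r < 0:
        return picard_payoff(t * math.pi, 0, -r * math.pi, -r * math.pi / 2)
    if t < 0 and r >= 0:
        return picard_payoff(-t * math.pi, -t * math.pi / 2, r * math.pi, 0)
    return picard_payoff(-t * math.pi, -t * math.pi / 2, -r * math.pi, -r * math.pi / 2)
```

For example, g(−0.5, 0.5) corresponds to Picard playing (π/2, π/4) against Q's (π/2, 0).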

[Plot3D output: surface of Picard's payoff g[t, r] over −1 ≤ t, r ≤ 1]


������ PicardPayoffPi, Pi 2, Pi, Pi2

������ 1.

Payoff pairs (Picard, Q); columns are Picard's move (θp, ϕp), rows are Q's move (θq, ϕq):

Q↓ \ Picard→   (0,0)     (π/2,0)        (π,0)     (0,π/4)   (π/2,π/4)
(0,0)          (1,-1)    (0,0)          (-1,1)    (1,-1)    (0,0)
(π/2,0)        (0,0)     (1,-1)         (0,0)     (0,0)     (1/√2,-1/√2)
(π,0)          (-1,1)    (0,0)          (1,-1)    (-1,1)    (0,0)
(0,π/4)        (1,-1)    (0,0)          (-1,1)    (1,-1)    (0,0)
(π/2,π/4)      (0,0)     (1/√2,-1/√2)   (0,0)     (0,0)     (0,0)
(π,π/4)        (-1,1)    (0,0)          (1,-1)    (-1,1)    (0,0)
(0,π/2)        (1,-1)    (0,0)          (-1,1)    (1,-1)    (0,0)
(π/2,π/2)      (0,0)     (0,0)          (0,0)     (0,0)     (-1/√2,1/√2)
(π,π/2)        (-1,1)    (0,0)          (1,-1)    (-1,1)    (0,0)
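The table entries can be regenerated by evaluating the payoff function over the nine tabulated strategies. A Python sketch (ours; the names are hypothetical) that rebuilds the payoff matrix:

```python
import math

def picard_payoff(tp, pp, tq, pq):
    # Picard's expected payoff; Q's payoff is the negative (zero-sum game)
    cp, sp = math.cos(tp / 2), math.sin(tp / 2)
    cq, sq = math.cos(tq / 2), math.sin(tq / 2)
    cf, sf = math.cos((pp + pq) / 2), math.sin((pp + pq) / 2)
    return (abs(cf * (cp * cq + sp * sq)) ** 2
            - abs(sf * (cq * sp + cp * sq)) ** 2
            - abs(cf * (cp * sq - cq * sp)) ** 2
            + abs(sf * (sp * sq - cp * cq)) ** 2)

PI = math.pi
STRATEGIES = [(t, p) for p in (0, PI / 4, PI / 2) for t in (0, PI / 2, PI)]

# table[(q_move, picard_move)] = Picard's payoff for that cell
table = {(q, p): picard_payoff(p[0], p[1], q[0], q[1])
         for q in STRATEGIES for p in STRATEGIES}
```

For the strategies tabulated here, only the cells where both players rotate by θ = π/2 depend on the phases; there the payoff reduces to cos(ϕp + ϕq), which produces the ±1/√2 entries.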

Q↓ \ Picard→   (π,π/4)   (0,π/2)   (π/2,π/2)      (π,π/2)
(0,0)          (-1,1)    (1,-1)    (0,0)          (-1,1)
(π/2,0)        (0,0)     (0,0)     (0,0)          (0,0)
(π,0)          (1,-1)    (-1,1)    (0,0)          (1,-1)
(0,π/4)        (-1,1)    (1,-1)    (0,0)          (-1,1)
(π/2,π/4)      (0,0)     (0,0)     (-1/√2,1/√2)   (0,0)
(π,π/4)        (1,-1)    (-1,1)    (0,0)          (1,-1)
(0,π/2)        (-1,1)    (1,-1)    (0,0)          (-1,1)
(π/2,π/2)      (0,0)     (0,0)     (-1,1)         (0,0)
(π,π/2)        (1,-1)    (-1,1)    (0,0)          (1,-1)

Using Eisert, Wilkens, and Lewenstein's Starting State and Move Set

In[]:=
PicardPayoffEisert[θp_, ϕp_, θq_, ϕq_] := N[
  Abs[Cos[ϕp + ϕq] Cos[θp/2] Cos[θq/2]]^2
  - Abs[Sin[ϕp] Cos[θp/2] Sin[θq/2] - Cos[ϕq] Cos[θq/2] Sin[θp/2]]^2
  - Abs[Sin[ϕq] Cos[θq/2] Sin[θp/2] - Cos[ϕp] Cos[θp/2] Sin[θq/2]]^2
  + Abs[Sin[ϕp + ϕq] Cos[θp/2] Cos[θq/2] + Sin[θp/2] Sin[θq/2]]^2]
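A Python transcription (ours, not from the notebook) of PicardPayoffEisert. Note that the full phase angles ϕ appear here, not the half angles of the Bell-state version; the four squared terms can be checked to sum to one, so they behave as probabilities of the four measurement outcomes, weighted +1, −1, −1, +1:

```python
import math

def picard_payoff_eisert(tp, pp, tq, pq):
    # Eisert-style payoff: full angles phi, half angles theta
    cp, sp = math.cos(tp / 2), math.sin(tp / 2)
    cq, sq = math.cos(tq / 2), math.sin(tq / 2)
    probs = [abs(math.cos(pp + pq) * cp * cq) ** 2,
             abs(math.sin(pp) * cp * sq - math.cos(pq) * cq * sp) ** 2,
             abs(math.sin(pq) * cq * sp - math.cos(pp) * cp * sq) ** 2,
             abs(math.sin(pp + pq) * cp * cq + sp * sq) ** 2]
    # outcome weights: +1, -1, -1, +1
    return probs[0] - probs[1] - probs[2] + probs[3]
```

This reproduces the check below, picard_payoff_eisert(π, π/2, π, π/2) = 1.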


In[]:=
h[t_, r_] := PicardPayoffEisert[t*Pi, 0, r*Pi, 0]
Plot3D[{h[t, r]}, {t, 0, 1}, {r, 0, 1},
  PerformanceGoal → "Quality", Exclusions → None,
  Mesh → {20, 20, {0, Thick}}, PlotLegends → Automatic,
  ColorFunction → "Rainbow",
  LabelStyle → {FontSize → Larger, FontFamily → "Times",
    FontWeight → "Bold", FontSlant → "Italic"},
  AxesLabel → {Style["UP", Large], Style["UQ", Large]},
  Ticks -> {{{-1, "Q"}, {0, "H"}, {1, "T"}},
    {{-1, "Q"}, {0, "H"}, {1, "T"}}, {1, 2, 3, 4, 5}, Automatic}]

[Plot3D output: surface of Picard's payoff h[t, r] (Eisert moves) over 0 ≤ t, r ≤ 1]


In[]:=
g[t_, r_] := Which[
  0 ≤ t ≤ 1 && 0 ≤ r ≤ 1, PicardPayoffEisert[t*Pi, 0, r*Pi, 0],
  0 ≤ t ≤ 1 && -1 ≤ r ≤ 0, PicardPayoffEisert[t*Pi, 0, -r*Pi, -r*Pi/2],
  -1 ≤ t ≤ 0 && 0 ≤ r ≤ 1, PicardPayoffEisert[-t*Pi, -t*Pi/2, r*Pi, 0],
  -1 ≤ t ≤ 0 && -1 ≤ r ≤ 0, PicardPayoffEisert[-t*Pi, -t*Pi/2, -r*Pi, -r*Pi/2]]
Plot3D[{g[t, r]}, {t, -1, 1}, {r, -1, 1},
  PerformanceGoal → "Quality", Exclusions → None,
  Mesh → {20, 20, {0, Thick}}, PlotLegends → Automatic,
  ColorFunction → "Rainbow",
  LabelStyle → {FontSize → Larger, FontFamily → "Times",
    FontWeight → "Bold", FontSlant → "Italic"},
  AxesLabel → {Style["UP", Large], Style["UQ", Large]},
  Ticks -> {{{0.5, "O"}, {-0.5, "M"}, {0, "H"}, {1, "T"}},
    {{0.5, "O"}, {-0.5, "M"}, {0, "H"}, {1, "T"}}, {1, 2, 3, 4, 5}, Automatic}]

[Plot3D output: surface of Picard's payoff g[t, r] (Eisert moves) over −1 ≤ t, r ≤ 1]


������ PicardPayoffEisertPi, Pi 2, Pi, Pi2

������ 1.

Payoff pairs (Picard, Q) for the Eisert starting state and move set; columns are Picard's move, rows are Q's move:

Q↓ \ Picard→   (0,0)     (π/2,0)        (π,0)     (0,π/4)   (π/2,π/4)
(0,0)          (1,-1)    (0,0)          (-1,1)    (1,-1)    (0,0)
(π/2,0)        (0,0)     (0,0)          (0,0)     (0,0)     (1/√2,-1/√2)
(π,0)          (-1,1)    (0,0)          (1,-1)    (-1,1)    (0,0)
(0,π/4)        (1,-1)    (0,0)          (-1,1)    (1,-1)    (0,0)
(π/2,π/4)      (0,0)     (1/√2,-1/√2)   (0,0)     (0,0)     (1,-1)
(π,π/4)        (-1,1)    (0,0)          (1,-1)    (-1,1)    (0,0)
(0,π/2)        (1,-1)    (0,0)          (-1,1)    (1,-1)    (0,0)
(π/2,π/2)      (0,0)     (0,0)          (0,0)     (0,0)     (1/√2,-1/√2)
(π,π/2)        (-1,1)    (0,0)          (1,-1)    (-1,1)    (0,0)

Q↓ \ Picard→   (π,π/4)   (0,π/2)   (π/2,π/2)      (π,π/2)
(0,0)          (-1,1)    (1,-1)    (0,0)          (-1,1)
(π/2,0)        (0,0)     (0,0)     (1,-1)         (0,0)
(π,0)          (1,-1)    (-1,1)    (0,0)          (1,-1)
(0,π/4)        (-1,1)    (1,-1)    (0,0)          (-1,1)
(π/2,π/4)      (0,0)     (0,0)     (1/√2,-1/√2)   (0,0)
(π,π/4)        (1,-1)    (-1,1)    (0,0)          (1,-1)
(0,π/2)        (-1,1)    (1,-1)    (0,0)          (-1,1)
(π/2,π/2)      (0,0)     (0,0)     (0,0)          (0,0)
(π,π/2)        (1,-1)    (-1,1)    (0,0)          (1,-1)
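As with the Bell-state version, these Eisert-variant tables can be regenerated programmatically. A Python sketch (ours) spot-checking a few cells above:

```python
import math

def picard_payoff_eisert(tp, pp, tq, pq):
    # same payoff expression as PicardPayoffEisert above
    cp, sp = math.cos(tp / 2), math.sin(tp / 2)
    cq, sq = math.cos(tq / 2), math.sin(tq / 2)
    return (abs(math.cos(pp + pq) * cp * cq) ** 2
            - abs(math.sin(pp) * cp * sq - math.cos(pq) * cq * sp) ** 2
            - abs(math.sin(pq) * cq * sp - math.cos(pp) * cp * sq) ** 2
            + abs(math.sin(pp + pq) * cp * cq + sp * sq) ** 2)

PI = math.pi
# (Q move, Picard move) -> expected Picard payoff, matching selected table cells
cells = {
    ((PI / 2, 0), (PI / 2, PI / 2)): 1.0,
    ((PI / 2, PI / 4), (PI / 2, PI / 2)): 1 / math.sqrt(2),
    ((PI / 2, PI / 2), (PI / 2, PI / 2)): 0.0,
}
```

Unlike the Bell-state game, the Eisert move set gives Picard a winning cell at ((π/2, 0), (π/2, π/2)), where the Bell-state table has zero.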

Bibliography

[1] Khaled Allen. Homework 6 in CSCI3090: Introduction to Quantum Computing. University of Colorado Boulder, 2019.
[2] Andreas Boukas. "Quantum Formulation of Classical Two Person Zero-Sum Games". In: Open Systems & Information Dynamics 7 (Mar. 2000), pp. 19–32. doi: 10.1023/A:1009699300776.
[3] Microsoft Corporation. Microsoft Quantum Documentation – Concepts: The Qubit. Accessed October 2020. url: https://docs.microsoft.com/en-us/quantum/concepts/the-qubit.
[4] Gordon B. Dahl and Steven E. Landsburg. "Quantum Strategies". In: (2011). arXiv: 1110.4678 [math.OC].
[5] J. Eisert, M. Wilkens, and M. Lewenstein. "Quantum games and quantum strategies". In: Physical Review Letters 83.15 (1999), pp. 3077–3080.
[6] N. D. Mermin. Quantum Computer Science: An Introduction. Cambridge: Cambridge University Press, 2007.
[7] D. A. Meyer. "Quantum strategies". In: Physical Review Letters 82.5 (1999), pp. 1052–1055.
[8] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1944.
[9] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. The MIT Press, 1994.
[10] E. W. Piotrowski and J. Sładkowski. "An Invitation to Quantum Game Theory". In: International Journal of Theoretical Physics 42.5 (2003), pp. 1089–1099.
[11] Smite-Meister. Bloch sphere. Own work, CC BY-SA 3.0. Accessed October 2020. url: https://commons.wikimedia.org/w/index.php?curid=5829358.
[12] Graeme Baird Smith and Alexandra Kolla. GHZ Puzzle Lecture Slides. CSCI3090 at University of Colorado Boulder, 2019. url: https://home.cs.colorado.edu/~alko5368/lecturesCSCI3090/feb28.pdf.
