
5. E. Bernstein and U. Vazirani, "Quantum complexity theory," in Proc. 25th ACM Symp. on Theory of Computing, pp. 11–20 (1993).
6. A. Berthiaume and G. Brassard, "The quantum challenge to structural complexity theory," in Proc. 7th Conf. on Structure in Complexity Theory, IEEE Computer Society Press, pp. 132–137 (1992).
7. A. Berthiaume and G. Brassard, "Oracle quantum computing," in Proc. Workshop on Physics and Computation, pp. 195–199, IEEE Computer Society Press (1992).
8. D. Coppersmith, "An approximate Fourier transform useful in quantum factoring," IBM Research Report RC 19642 (1994).
9. D. Deutsch, "Quantum theory, the Church–Turing principle and the universal quantum computer," Proc. Roy. Soc. Lond. Ser. A, Vol. 400, pp. 96–117 (1985).
10. D. Deutsch, "Quantum computational networks," Proc. Roy. Soc. Lond. Ser. A, Vol. 425, pp. 73–90 (1989).
11. D. Deutsch and R. Jozsa, "Rapid solution of problems by quantum computation," Proc. Roy. Soc. Lond. Ser. A, Vol. 439, pp. 553–558 (1992).
12. D. P. DiVincenzo, "Two-bit gates are universal for quantum computation," Phys. Rev. A, Vol. 51, pp. 1015–1022 (1995).
13. R. Feynman, "Simulating physics with computers," International Journal of Theoretical Physics, Vol. 21, No. 6/7, pp. 467–488 (1982).
14. R. Feynman, "Quantum mechanical computers," Foundations of Physics, Vol. 16, pp. 507–531 (1986). (Originally appeared in Optics News, February 1985.)
15. L. Fortnow and M. Sipser, "Are there interactive protocols for co-NP languages?" Inform. Proc. Lett. Vol. 28, pp. 249–251 (1988).
16. D. M. Gordon, "Discrete logarithms in GF(p) using the number field sieve," SIAM J. Discrete Math. Vol. 6, pp. 124–139 (1993).
17. G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, Fifth Edition, Oxford University Press, New York (1979).
18. R. Landauer, "Is quantum mechanically coherent computation useful?" in Proceedings of the Drexel-4 Symposium on Quantum Nonintegrability – Quantum Classical Correspondence (D. H. Feng and B-L. Hu, eds.), International Press, to appear.
19. Y. Lecerf, "Machines de Turing réversibles. Récursive insolubilité en $n \in N$ de l'équation $u = \theta^n u$, où $\theta$ est un isomorphisme de codes," Comptes Rendus de l'Académie Française des Sciences, Vol. 257, pp. 2597–2600 (1963).
20. A. K. Lenstra and H. W. Lenstra, Jr., eds., The Development of the Number Field Sieve, Lecture Notes in Mathematics No. 1554, Springer-Verlag (1993); this book contains reprints of the articles that were critical in the development of the fastest known factoring algorithm.
21. H. W. Lenstra, Jr. and C. Pomerance, "A rigorous time bound for factoring integers," J. Amer. Math. Soc. Vol. 5, pp. 483–516 (1992).
22. S. Lloyd, "A potentially realizable quantum computer," Science, Vol. 261, pp. 1569–1571 (1993).
23. S. Lloyd, "Envisioning a quantum supercomputer," Science, Vol. 263, p. 695 (1994).
24. G. L. Miller, "Riemann's hypothesis and tests for primality," J. Comp. Sys. Sci. Vol. 13, pp. 300–317 (1976).
25. S. Pohlig and M. Hellman, "An improved algorithm for computing logarithms over GF(p) and its cryptographic significance," IEEE Trans. Information Theory, Vol. 24, pp. 106–110 (1978).
26. C. Pomerance, "Fast, rigorous factorization and discrete logarithm algorithms," in Discrete Algorithms and Complexity (Proc. Japan–US Joint Seminar), pp. 119–143, Academic Press (1986).
27. R. L. Rivest, A. Shamir, and L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems," Communications of the ACM, Vol. 21, No. 2, pp. 120–126 (1978).
28. A. Shamir, "IP = PSPACE," in Proc. 31st Ann. Symp. on Foundations of Computer Science, pp. 11–15, IEEE Computer Society Press (1990).
29. D. Simon, "On the power of quantum computation," in Proc. 35th Ann. Symp. on Foundations of Computer Science, pp. 116–123, IEEE Computer Society Press (1994).
30. W. G. Teich, K. Obermayer, and G. Mahler, "Structural basis of multistationary quantum systems II: Effective few-particle dynamics," Phys. Rev. B, Vol. 37, pp. 8111–8121 (1988).
31. T. Toffoli, "Reversible computing," in Automata, Languages and Programming, Seventh Colloq., Lecture Notes in Computer Science No. 84 (J. W. De Bakker and J. van Leeuwen, eds.), pp. 632–644, Springer-Verlag (1980).
32. W. G. Unruh, "Maintaining coherence in quantum computers," Phys. Rev. A, Vol. 51, pp. 992–997 (1995).
33. A. Yao, "Quantum circuit complexity," in Proc. 34th Ann. Symp. on Foundations of Computer Science, pp. 352–361, IEEE Computer Society Press (1993).


where this equation was obtained from Condition (7.34) by dividing by $q$. The first thing to notice is that the multiplier on $a$ is a fraction with denominator $p-1$, since $q$ evenly divides $c(p-1) - \{c(p-1)\}_q$. Thus, we need only round $d/q$ off to the nearest multiple of $1/(p-1)$ and divide $\pmod{p-1}$ by

$$\frac{c(p-1) - \{c(p-1)\}_q}{q} \qquad (7.40)$$

to find a candidate $a$. To show that this experiment need only be repeated a polynomial number of times to find the correct $a$ requires only a few more details. The problem is again that we cannot divide by a number which is not relatively prime to $p-1$.

For the general case of the discrete log algorithm, we do not know that all possible values of $c$ are generated with reasonable likelihood; we only know this about one-tenth of them. This additional difficulty makes the next step harder than the corresponding step in the two previous algorithms. If we knew the remainder of $a$ modulo all prime powers dividing $p-1$, we could use the Chinese remainder theorem to recover $a$ in polynomial time. We will only be able to find this remainder for primes larger than 20, but with a little extra work we will still be able to recover $a$.

What we have is that each good $(c, d)$ pair is generated with probability at least $1/(20q^2)$, and that at least a tenth of the possible $c$'s are in a good $(c, d)$ pair. From Eq. (7.40), it follows that these $c$'s are mapped from $c/q$ to $c'/(p-1)$ by rounding to the nearest integer multiple of $1/(p-1)$. Further, the good $c$'s are exactly those in which $c/q$ is close to $c'/(p-1)$. Thus, each good $c$ corresponds with exactly one $c'$. We would like to show that for any prime power $p_i^{\alpha_i}$ dividing $p-1$, a random good $c'$ is unlikely to contain $p_i$. If we are willing to accept a large constant for the algorithm, we can just ignore the prime powers under 20; if we know $a$ modulo all prime powers over 20, we can try all possible residues for primes under 20 with only a (large) constant factor increase in running time. Because at least one tenth of the $c$'s were in a good $(c, d)$ pair, at least one tenth of the $c'$'s are good. Thus, for a prime power $p_i^{\alpha_i}$, a random good $c'$ is divisible by $p_i^{\alpha_i}$ with probability at most $10/p_i^{\alpha_i}$. If we have $t$ good $c'$'s, the probability of having a prime power over 20 that divides all of them is therefore at most

$$\sum_{\substack{p_i^{\alpha_i} \mid p-1 \\ p_i^{\alpha_i} > 20}} \left( \frac{10}{p_i^{\alpha_i}} \right)^{t} \qquad (7.41)$$

where the sum is over all prime powers greater than 20 that divide $p-1$. This sum (over all integers greater than 20) converges for $t = 2$, and goes down by at least a factor of 2 for each further increase of $t$ by 1; thus for some large constant $t$ it is less than $1/2$.

Recall that each good $(c, d)$ pair is obtained with probability at least $1/(20q^2)$ from any experiment. Since there are at least $(p-1)/10$ good $c'$'s, after a number of experiments polynomial in $\log p$ we are likely to obtain a sample of good $c'$'s chosen roughly equally likely from all good $c'$'s. Thus, we will be able to find a set of $c'$'s such that all prime powers larger than 20 dividing $p-1$ are relatively prime to at least one of these $c'$'s. For each prime $p_i$ less than 20, we thus have at most 20 possibilities for the residue of $a$ modulo $p_i^{\alpha_i}$, where $\alpha_i$ is the exponent on prime $p_i$ in the prime factorization of $p-1$. We can thus try all possibilities for residues modulo powers of primes less than 20: for each possibility we can calculate the corresponding $a$ using the Chinese remainder theorem, and then check to see whether it is the desired discrete logarithm.

This algorithm does not use very many properties of $Z_p$, so we can use the same algorithm to find discrete logarithms over other fields such as $GF(p^\alpha)$. What we need is that we know the order of the generator, and that we can multiply and take inverses of elements in polynomial time.

If one were to actually program this algorithm (which must wait until a quantum computer is built) there are many ways in which the efficiency could be increased over the efficiency shown in this paper.

Acknowledgements

I would like to thank Jeff Lagarias for finding and fixing a critical bug in the first version of the discrete log algorithm. I would also like to thank him, Charles Bennett, Gilles Brassard, Andrew Odlyzko, Dan Simon, Umesh Vazirani, as well as other correspondents too numerous to list, for productive discussions, for corrections to and improvements of early drafts of this paper, and for pointers to the literature.

References

1. P. Benioff, "Quantum mechanical Hamiltonian models of Turing machines," J. Stat. Phys. Vol. 29, pp. 515–546 (1982).
2. P. Benioff, "Quantum mechanical Hamiltonian models of Turing machines that dissipate no energy," Phys. Rev. Lett. Vol. 48, pp. 1581–1585 (1982).
3. C. H. Bennett, "Logical reversibility of computation," IBM J. Res. Develop. Vol. 17, pp. 525–532 (1973).
4. C. H. Bennett, E. Bernstein, G. Brassard and U. Vazirani, "Strengths and weaknesses of quantum computing," manuscript (1994). Currently available through the World-Wide Web at URL http://vesta.physics.ucla.edu:7777/.
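The residue-combination step described in this section (recovering $a$ from its residues modulo the prime powers dividing $p-1$) is the classical Chinese remainder theorem. A minimal sketch, with a function name and toy moduli of my own choosing (Python 3.8+ for the modular inverse via `pow`):

```python
# Sketch of the Chinese remainder recovery step: combine residues of a
# modulo pairwise coprime moduli into a modulo their product.
def crt(residues, moduli):
    """Return x with x congruent to r_i (mod m_i) for every i."""
    x, m = 0, 1
    for r_i, m_i in zip(residues, moduli):
        # extend the solution x (mod m) to one valid (mod m * m_i)
        t = ((r_i - x) * pow(m, -1, m_i)) % m_i
        x, m = x + m * t, m * m_i
    return x % m

# Toy example: p - 1 = 36 = 4 * 9; recover a = 31 from a mod 4 and a mod 9.
a = crt([31 % 4, 31 % 9], [4, 9])
```

The quantum experiments supply the residues; this purely classical step reassembles the discrete logarithm.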


Note that we now have two moduli to deal with, $p-1$ and $q$. While this makes keeping track of things more confusing, we will still be able to obtain $a$ using an algorithm similar to the easy case. The probability of observing a state $|c, d, y\rangle$ with $y \equiv g^r \pmod{p}$ is, almost as before,

$$\left| \frac{1}{(p-1)q} \sum_{\substack{k,l \\ k - al \equiv r}} \exp\!\big( 2\pi i\, (kc + ld)/q \big) \right|^2 \qquad (7.28)$$

where the sum is over all $(k, l)$ such that $k - al \equiv r \pmod{p-1}$. We now use the relation

$$k = r + al - (p-1)\lfloor (r + al)/(p-1) \rfloor \qquad (7.29)$$

and substitute in the above expression to obtain the amplitude

$$\frac{1}{(p-1)q} \sum_{l=0}^{p-2} \exp\!\Big( \frac{2\pi i}{q} \big( rc + l(ac + d) - c(p-1)\lfloor (r + al)/(p-1) \rfloor \big) \Big). \qquad (7.30)$$

The absolute value of the square of this amplitude is the probability of observing the state $|c, d, g^r \bmod p\rangle$. We will now analyze this expression. First, a factor of $\exp(2\pi i\, rc/q)$ can be taken out of all the terms and ignored, because it does not change the probability. Next, we split the exponent into two parts and factor out $l$ to obtain

$$\frac{1}{(p-1)q} \sum_{l=0}^{p-2} \exp(2\pi i\, lT/q) \exp(2\pi i\, V/q) \qquad (7.31)$$

where

$$T = ac + d - \frac{a}{p-1} \{c(p-1)\}_q \,, \qquad (7.32)$$

and

$$V = \left( \frac{al}{p-1} - \left\lfloor \frac{r + al}{p-1} \right\rfloor \right) \{c(p-1)\}_q \,. \qquad (7.33)$$

Here by $\{z\}_q$ we mean the residue of $z \pmod q$ with $-q/2 < \{z\}_q \le q/2$. We will show that if we get enough "good" outputs, then we still can deduce $a$, and that furthermore, the chance of getting a "good" output is constant. The idea is that if

$$\left| ac + d - \frac{a}{p-1} \{c(p-1)\}_q - jq \right| \le \frac{1}{2} \,, \qquad (7.34)$$

where $j$ is the closest integer to $T/q$, then as $l$ varies between 0 and $p-2$, the phase of the first exponential term in Eq. (7.31) only varies over at most half of the unit circle. Further, if

$$\left| \{c(p-1)\}_q \right| \le \frac{q}{20} \,, \qquad (7.35)$$

then $|V/q|$ is always at most $1/20$, so the phase of the second exponential term in Eq. (7.31) never is farther than $\exp(\pi i/10)$ from 1. By combining these two observations, we will show that if both conditions hold, then the contribution to the probability from the corresponding term is significant. Furthermore, both conditions will hold with constant probability, and a reasonable sample of $c$'s for which Condition (7.34) holds will allow us to deduce $a$.

We now give a lower bound on the probability of each good output, i.e., an output that satisfies Conditions (7.34) and (7.35). We know that as $l$ ranges from 0 to $p-2$, the phase of $\exp(2\pi i\, lT/q)$ ranges from 0 to $2\pi i W$ where

$$W = \frac{p-2}{q} \left( ac + d - \frac{a}{p-1} \{c(p-1)\}_q - jq \right) \qquad (7.36)$$

and $j$ is as in Eq. (7.34). Thus, the component of the amplitude of the first exponential in Eq. (7.31) in the direction

$$\exp(\pi i W) \qquad (7.37)$$

is at least $\cos\big( 2\pi |W/2 - Wl/(p-2)| \big)$. Now, by Condition (7.35), the phase can vary by at most $\pi/10$ due to the second exponential $\exp(2\pi i V/q)$. Applying this variation in the manner that minimizes the component in the direction (7.37), we get that the component in this direction is at least

$$\cos\!\left( 2\pi \left| \frac{W}{2} - \frac{Wl}{p-2} \right| + \frac{\pi}{10} \right).$$

Since $|W| \le 1/2$ by Condition (7.34), putting everything together, the probability of arriving at a state $|c, d, g^r \bmod p\rangle$ that satisfies both Condition (7.34) and (7.35) is at least

$$\left( \frac{1}{q} \int_0^1 \cos\!\Big( 2\pi |W| \Big| \frac{1}{2} - u \Big| + \frac{\pi}{10} \Big)\, du \right)^{2}, \qquad (7.38)$$

or at least $1/(20 q^2)$.

We will now count the number of pairs $(c, d)$ satisfying Conditions (7.34) and (7.35). The number of pairs $(c, d)$ such that (7.34) holds is exactly the number of possible $c$'s, since for every $c$ there is exactly one $d$ such that (7.34) holds (round off the fraction to the nearest integer to obtain this $d$). The number of $c$'s for which (7.35) holds is approximately $q/10$. Thus, there are approximately $q/10$ pairs $(c, d)$ satisfying both conditions. Multiplying by $p-1$, which is the number of possible $r$'s, gives approximately $q(p-1)/10$ states $|c, d, g^r \bmod p\rangle$. Combining this calculation with the lower bound on the probability of each good state gives us that the probability of obtaining any good state is at least $(p-1)/(200q)$, or at least $1/800$ (since $q \le 2p$).

We now want to recover $a$ from a pair $(c, d)$ such that

$$\left| \frac{a}{p-1} \left( \frac{c(p-1) - \{c(p-1)\}_q}{q} \right) + \frac{d}{q} - j \right| \le \frac{1}{2q} \,, \qquad (7.39)$$
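The centered residue $\{z\}_q$ used throughout this analysis (the representative of $z \bmod q$ lying in $(-q/2, q/2]$) is easy to get wrong by a sign. A small sketch, with a helper name of my own:

```python
def centered_residue(z, q):
    """Return {z}_q: the integer congruent to z (mod q) in (-q/2, q/2]."""
    r = z % q                      # Python's % always gives r in [0, q)
    return r if 2 * r <= q else r - q
```

For example, with $q = 10$ the residue of 7 is $-3$, while 5 stays 5 because the interval is half-open on the right.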

[Plot omitted: probability (0 to 0.08) against $c$ (0 to 240), showing 13 sharp peaks.]

Figure 1: The probability of observing values of $c$ between 0 and 239, given $q = 240$ and $r = 13$, using the transform described for factoring. With high probability the observed value of $c$ is near an integer multiple of $240/13$.
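The plotted distribution can be recomputed directly by summing amplitudes over each residue class of $a$ modulo the order, as in the probability expression for the factoring algorithm. A minimal sketch, assuming $q = 240$ and $r = 13$ as in the caption (standard library only; function name is mine):

```python
import cmath

q, r = 240, 13  # modulus of the Fourier transform and the order of x

def prob(c):
    """Probability of observing c: sum over residue classes k mod r of
    |(1/q) * sum over a = k, k+r, k+2r, ... of exp(2*pi*i*a*c/q)|^2."""
    total = 0.0
    for k in range(r):
        amp = sum(cmath.exp(2j * cmath.pi * a * c / q)
                  for a in range(k, q, r)) / q
        total += abs(amp) ** 2
    return total

probs = [prob(c) for c in range(q)]
```

The resulting distribution sums to 1 (the transform is unitary), and its $r$ largest values sit at the integers nearest the multiples of $q/r = 240/13$, reproducing the figure.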

i.e., if there is a $d$ such that

$$\left| rc - dq \right| \le \frac{r}{2} \,. \qquad (6.24)$$

Dividing by $rq$ and rearranging the terms gives

$$\left| \frac{c}{q} - \frac{d}{r} \right| \le \frac{1}{2q} \,. \qquad (6.25)$$

We know $c$ and $q$. Because $q \ge n^2$, there is at most one fraction $d/r$ with $r < n$ that satisfies the above inequality. Thus, we can obtain the fraction $d/r$ in lowest terms by rounding $c/q$ to the nearest fraction having a denominator smaller than $n$. This fraction can be found in polynomial time by using a continued fraction expansion of $c/q$, which finds all the best approximations of $c/q$ by fractions [17, Chapter X].

If we have the fraction $d/r$ in lowest terms, and if $d$ happens to be relatively prime to $r$, this will give us $r$. We will now count the number of states $|c, x^k \bmod n\rangle$ which enable us to compute $r$ in this way. There are $\phi(r)$ possible values for $d$ relatively prime to $r$, where $\phi$ is Euler's function. Each of these fractions $d/r$ is close to one fraction $c/q$ with $|c/q - d/r| \le 1/(2q)$. There are also $r$ possible values for $x^k$, since $r$ is the order of $x$. Thus, there are $r\phi(r)$ states $|c, x^k \bmod n\rangle$ which would enable us to obtain $r$. Since each of these states occurs with probability at least $1/(3r^2)$, we obtain $r$ with probability at least $\phi(r)/(3r)$. Using the theorem that $\phi(r)/r > \delta/\log\log r$ for some fixed $\delta$ [17, Theorem 328], this shows that we find $r$ at least a $\delta/\log\log r$ fraction of the time, so by repeating this experiment only $O(\log\log r)$ times, we are assured of a high probability of success.

Note that in the algorithm for order, we did not use many of the properties of multiplication $\pmod n$. In fact, if we have a permutation $f$ mapping the set $\{0, 1, 2, \ldots, n-1\}$ into itself such that its $k$th iterate, $f^{(k)}(x)$, is computable in time polynomial in $\log n$ and $\log k$, the same algorithm will be able to find the order of an element $x$ under $f$, i.e., the minimum $r$ such that $f^{(r)}(x) = x$.

7 Discrete log: the general case

For the general case, we first find a smooth number $q$ such that $q$ is close to $p$, i.e., with $p \le q \le 2p$ (see Lemma 3.2).

Next, we do the same thing as in the easy case, that is, we choose $k$ and $l$ uniformly $\pmod{p-1}$, and then compute $g^k x^{-l} \pmod p$. This leaves our machine in the state

$$\frac{1}{p-1} \sum_{k=0}^{p-2} \sum_{l=0}^{p-2} |k, l, g^k x^{-l} \bmod p\rangle \,. \qquad (7.26)$$

As before, we use the Fourier transform $A_q$ to send $k \to c$ and $l \to d$ with amplitude $\frac{1}{q} \exp\!\big( 2\pi i\, (kc + ld)/q \big)$, giving us the state

$$\frac{1}{(p-1)q} \sum_{k=0}^{p-2} \sum_{l=0}^{p-2} \sum_{c=0}^{q-1} \sum_{d=0}^{q-1} \exp\!\Big( \frac{2\pi i}{q} (kc + ld) \Big)\, |c, d, g^k x^{-l} \bmod p\rangle \,. \qquad (7.27)$$

for quantum algorithms. Although Bernstein and Vazirani [4] show that the number of bits of precision needed is never more than the logarithm of the number of computational steps a quantum computer takes, in some algorithms it could conceivably require less. Interesting open questions are whether it is possible to do discrete logarithms or factoring with less than polynomial precision and whether some tradeoff between precision and time is possible.

6 Factoring

The algorithm for factoring is similar to the one for the general case of discrete log, only somewhat simpler. I present this algorithm before the general case of discrete log so as to give the three algorithms in this paper in order of increasing complexity. Readers interested in discrete log may skip to the next section.

Instead of giving a quantum computer algorithm to factor $n$, we will give a quantum computer algorithm for finding the order of an element $x$ in the multiplicative group $\pmod n$; that is, the least integer $r$ such that $x^r \equiv 1 \pmod n$. There is a randomized reduction from factoring to the order of an element [24].

To factor an odd number $n$, given a method for computing the order of an element, we choose a random $x$, find the order $r$ of $x$, and compute $\gcd(x^{r/2} - 1, n)$. This fails to give a non-trivial divisor of $n$ only if $r$ is odd or if $x^{r/2} \equiv -1 \pmod n$. Using this criterion, it can be shown that the algorithm finds a factor of $n$ with probability at least $1 - 1/2^{k-1}$, where $k$ is the number of distinct odd prime factors of $n$. This scheme will thus work as long as $n$ is not a prime power; however, factoring prime powers can be done efficiently with classical methods.

Given $x$ and $n$, to find $r$ such that $x^r \equiv 1 \pmod n$, we do the following. First, we find a smooth $q$ with $n^2 \le q \le 2n^2$. Next, we put our machine in the uniform superposition of states representing numbers $a \pmod q$. This leaves our machine in state

$$\frac{1}{q^{1/2}} \sum_{a=0}^{q-1} |a\rangle \,. \qquad (6.16)$$

As in the algorithm for discrete log, we will not write $n$, $x$, or $q$ in the state of our machine, because we never change these values.

Next, we compute $x^a \pmod n$. Since we keep $a$ on the tape, this can be done reversibly. This leaves our machine in the state

$$\frac{1}{q^{1/2}} \sum_{a=0}^{q-1} |a, x^a \bmod n\rangle \,. \qquad (6.17)$$

We then perform our Fourier transform $A_q$, mapping $|a\rangle$ to $|c\rangle$ with amplitude $\frac{1}{q^{1/2}} \exp(2\pi i\, ac/q)$. This leaves our machine in state

$$\frac{1}{q} \sum_{a=0}^{q-1} \sum_{c=0}^{q-1} \exp(2\pi i\, ac/q)\, |c, x^a \bmod n\rangle \,. \qquad (6.18)$$

Finally, we observe the machine. It would be sufficient to observe solely the value of $|c\rangle$, but for clarity we will assume that we observe both $|c\rangle$ and $|x^a \bmod n\rangle$. We now compute the probability that our machine ends in a particular state $|c, x^k \bmod n\rangle$, where we may assume $0 \le k < r$. Summing over all possible ways to reach this state, we find that this probability is

$$\left| \sum_{a:\, x^a \equiv x^k} \frac{1}{q} \exp(2\pi i\, ac/q) \right|^2 \qquad (6.19)$$

where the sum is over all $a$, $0 \le a < q$, such that $x^a \equiv x^k \pmod n$. Because the order of $x$ is $r$, this sum is equivalently over all $a$ satisfying $a \equiv k \pmod r$. Writing $a = br + k$, we find that the above probability is

$$\left| \frac{1}{q} \sum_{b=0}^{\lfloor (q-k-1)/r \rfloor} \exp\!\big( 2\pi i\, (br + k)c/q \big) \right|^2 . \qquad (6.20)$$

We can ignore the term of $\exp(2\pi i\, kc/q)$, as it can be factored out of the sum and has magnitude 1. We can also replace $rc$ with $\{rc\}_q$, where $\{rc\}_q$ is the residue which is congruent to $rc \pmod q$ and is in the range $-q/2 < \{rc\}_q \le q/2$. This leaves us with the expression

$$\left| \frac{1}{q} \sum_{b=0}^{\lfloor (q-k-1)/r \rfloor} \exp\!\big( 2\pi i\, b \{rc\}_q / q \big) \right|^2 . \qquad (6.21)$$

We will now show that if $\{rc\}_q$ is small enough, all the amplitudes in this sum will be in nearly the same direction, giving a large probability. If $\{rc\}_q$ is small with respect to $q$, we can use the change of variables $u = br/q$ and approximate this sum with the integral

$$\left| \frac{1}{r} \int_0^1 \exp\!\big( 2\pi i\, u \{rc\}_q / r \big)\, du \right|^2 . \qquad (6.22)$$

If $|\{rc\}_q| \le r/2$, this quantity can be shown to be asymptotically bounded below by $4/(\pi^2 r^2)$, and thus is at least $1/(3r^2)$. The exact probabilities as given by Equation (6.21) for an example case are plotted in Figure 1.

The probability of seeing a given state $|c, x^k \bmod n\rangle$ will thus be at least $1/(3r^2)$ if

$$-\frac{r}{2} \le \{rc\}_q \le \frac{r}{2} \,, \qquad (6.23)$$
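The classical reduction described at the start of this section can be sketched as follows; the order-finding subroutine is done here by brute force, standing in for the quantum algorithm, and all function names are mine:

```python
from math import gcd

def order(x, n):
    """Least r >= 1 with x**r = 1 (mod n), found by brute force here;
    the quantum order-finding algorithm replaces exactly this step."""
    y, r = x % n, 1
    while y != 1:
        y, r = (y * x) % n, r + 1
    return r

def try_factor(x, n):
    """The reduction from the text: when r is even and x**(r/2) is not
    -1 (mod n), gcd(x**(r/2) - 1, n) is a nontrivial divisor of n."""
    r = order(x, n)
    if r % 2 == 1:
        return None                  # failure case: odd order
    half = pow(x, r // 2, n)
    if half == n - 1:
        return None                  # failure case: x**(r/2) = -1 (mod n)
    return gcd(half - 1, n)
```

For example, with $n = 15$ and $x = 7$ the order is 4, $7^2 \equiv 4 \pmod{15}$, and $\gcd(4 - 1, 15) = 3$ is a factor.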

in polynomial time on a quantum computer. This leaves the machine in state

$$\frac{1}{(p-1)^2} \sum_{k=0}^{p-2} \sum_{l=0}^{p-2} \sum_{c=0}^{p-2} \sum_{d=0}^{p-2} \exp\!\Big( \frac{2\pi i}{p-1} (kc + ld) \Big)\, |c, d, g^k x^{-l} \bmod p\rangle \,. \qquad (4.13)$$

We now compute the probability that the computation ends with the machine in state $|c, d, y\rangle$ with $y \equiv g^r \pmod p$. This probability is the absolute value of the square of the sum over all ways the machine could produce this state, or

$$\left| \frac{1}{(p-1)^2} \sum_{\substack{k,l \\ k - al \equiv r}} \exp\!\Big( \frac{2\pi i}{p-1} (kc + ld) \Big) \right|^2 \qquad (4.14)$$

where the sum is over all $(k, l)$ satisfying $k - al \equiv r \pmod{p-1}$. This condition arises from the fact that computational paths can only interfere when they give the same $g^k x^{-l} \pmod p$. We now substitute $k \equiv r + al \pmod{p-1}$ in the above exponential. The above sum then reduces to

$$\left| \frac{1}{(p-1)^2} \sum_{l=0}^{p-2} \exp\!\Big( \frac{2\pi i}{p-1} \big( rc + l(ac + d) \big) \Big) \right|^2 . \qquad (4.15)$$

However, if $ac + d \not\equiv 0 \pmod{p-1}$, the above sum is over a set of $p-1$ roots of unity evenly spaced around the unit circle, and thus the probability is 0. If $ac \equiv -d \pmod{p-1}$, the above sum is over the same root of unity $p-1$ times, giving $(p-1)\exp(2\pi i\, rc/(p-1))$, so the probability is $1/(p-1)^2$. We can check that these probabilities add up to one by counting that there are $(p-1)^2$ states $|c, -ac \bmod (p-1), g^r \bmod p\rangle$, since there are $p-1$ choices of $c$ and $p-1$ choices of $r$.

Our computation thus produces a random $c \pmod{p-1}$ and the corresponding $d \equiv -ac \pmod{p-1}$. If $c$ and $p-1$ are relatively prime, we can find $a$ by division. Because we are choosing among all possible $c$'s with equal probability, the chance that $c$ and $p-1$ are relatively prime is $\phi(p-1)/(p-1)$, where $\phi$ is the Euler $\phi$-function. It is easy to check that this quantity is not too small. (Actually, from [17, Theorem 328], $\phi(p-1)/(p-1) > \delta/\log\log(p-1)$ for some constant $\delta$.) Thus we only need a number of experiments that is polynomial in $\log p$ to obtain $a$ with high probability. In fact, we can find a set of $c$'s such that at least one is relatively prime to every prime divisor of $p-1$ by repeating the experiment only an expected constant number of times. This would also give us enough information to obtain $a$.

5 A note on precision

The number of bits of precision needed in the amplitudes of quantum mechanical computers could be a barrier to practicality. The generally accepted theoretical dividing line between feasible and infeasible is that polynomial precision (i.e., a number of bits logarithmic in the problem size) is feasible and that more is infeasible. This is because on a quantum computer the phase angle would need to be obtained through some physical device, and constructing such devices with better than polynomial precision seems unquestionably impractical. In fact, even polynomial precision may prove to be impractical; however, using this as the theoretical dividing line results in nice theoretical properties.

We thus need to show that the computations in the previous section need use only polynomial precision in the amplitudes. The very act of writing down the expression $\exp(2\pi i\, kc/(p-1))$ seems to imply that we need exponential precision, as this phase angle is exponentially precise. Fortunately, this is not the case. Consider the same matrix $A_{p-1}$ with every term $\exp(2\pi i\, kc/(p-1))$ replaced by an approximation whose phase angle is accurate only to within, say, $\pi/20$. Each positive case, i.e., one resulting in $d \equiv -ac$, will still occur with nearly as large probability as before; instead of adding $p-1$ amplitudes which have exactly the same phase angle, we add $p-1$ amplitudes which have nearly the same phase angle, and thus the size of the sum will only be reduced by a constant factor. The algorithm will thus give a $(c, d)$ with $d \equiv -ac \pmod{p-1}$ with constant probability (instead of probability 1).

Recall that we obtain the matrix $A_{p-1}$ by multiplying at most $\log(p-1)$ matrices $A_{q_i}$. Further, each entry in $A_{p-1}$ is the product of at most $\log(p-1)$ terms. Suppose that each phase angle were off by at most $\epsilon$ in the $A_{q_i}$'s. Then in the product, each phase angle would be off by at most $\epsilon \log(p-1)$; taking $\epsilon$ polynomially small is thus enough to perform the computation with constant probability of success. A similar argument shows that the magnitude of the amplitudes in the $A_{q_i}$ can be off by a polynomial fraction. Similar arguments hold for the general case of discrete log and for factoring, to show that we need only polynomial precision for the amplitudes in these cases as well.

We still need to show how to construct $A_q$ from constant size unitary matrices having limited precision. The arguments are much the same as above, but we will not give them in this paper because, in fact, Bennett et al. [4] have shown that it is sufficient to use polynomial precision for any computation on a quantum Turing machine to obtain the answer with high probability.

Since precision could easily be the limiting factor for practicality of quantum computation, it might be advisable to investigate how much precision is actually needed
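The exact-cancellation argument around Eq. (4.15) above — the sum over the $(p-1)$st roots of unity vanishes unless $ac + d \equiv 0 \pmod{p-1}$ — can be checked numerically. A small sketch with assumed toy values (the function name is mine):

```python
import cmath

p, a = 19, 5          # toy prime and discrete log; transform modulus is p - 1

def l_sum(c, d):
    """The inner sum of Eq. (4.15), normalized:
    (1/(p-1)) * sum over l of exp(2*pi*i*l*(a*c + d)/(p-1)).
    It equals 1 when a*c + d = 0 (mod p-1) and 0 otherwise."""
    m = p - 1
    return sum(cmath.exp(2j * cmath.pi * l * (a * c + d) / m)
               for l in range(m)) / m
```

Choosing $d \equiv -ac \pmod{p-1}$ gives constructive interference (the sum is 1); any other $d$ makes the $p-1$ phases cancel to 0, which is why only the pairs $(c, -ac)$ are ever observed.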


$V$ we obtain a direct sum of copies of $A_{q_2}$. Here $a = a_1 q_2 + a_2$, and $c_1 \equiv c \pmod{q_1}$, $c_2 \equiv c \pmod{q_2}$, with $0 \le a_1, c_1 < q_1$ and $0 \le a_2, c_2 < q_2$. Note the asymmetry in the definitions of the $a_i$ and the $c_i$. We can now define $U$ and $V$:

$$U_{(a_1, a_2), (b_1, b_2)} = \begin{cases} q_1^{-1/2} \exp\!\big( 2\pi i\, (a_1 + a_2 \beta) b_1 / q_1 \big) & \text{if } b_2 = a_2, \\ 0 & \text{otherwise,} \end{cases} \qquad (3.7)$$

and

$$V_{(b_1, b_2), (c_1, c_2)} = \begin{cases} q_2^{-1/2} \exp\!\big( 2\pi i\, b_2 c_2 \alpha / q_2 \big) & \text{if } b_1 = c_1, \\ 0 & \text{otherwise,} \end{cases} \qquad (3.8)$$

where $\alpha \equiv q_1^{-1} \pmod{q_2}$ and $\beta \equiv q_2^{-1} \pmod{q_1}$. It is easy to see that $(UV)_{(a_1,a_2),(c_1,c_2)} = U_{(a_1,a_2),(b_1,b_2)} V_{(b_1,b_2),(c_1,c_2)}$ with $(b_1, b_2) = (c_1, a_2)$, since we need $b_2 = a_2$ and $b_1 = c_1$ to ensure non-zero entries in $U$ and $V$. Now,

$$(UV)_{(a_1,a_2),(c_1,c_2)} = (q_1 q_2)^{-1/2} \exp\!\Big( 2\pi i \Big( \frac{(a_1 + a_2 \beta) c_1}{q_1} + \frac{a_2 c_2 \alpha}{q_2} \Big) \Big) = q^{-1/2} \exp(2\pi i\, ac/q) \qquad (3.9)$$

so $(UV)_{(a_1,a_2),(c_1,c_2)} = (A_q)_{a,c}$. We will now sketch how to rearrange the rows and columns of $U$ and $V$ to get the matrices $A_{q_1}$ and $A_{q_2}$. The matrix $U$ can be put in block-diagonal form where the blocks are indexed by $a_2$ (since all entries with $b_2 \neq a_2$ are 0). Within a given block $a_2$, the entries look like

$$U_{(a_1, a_2), (b_1, a_2)} = q_1^{-1/2} \exp\!\big( 2\pi i\, (a_1 + a_2 \beta) b_1 / q_1 \big) \,. \qquad (3.10)$$

Thus, if we rearrange the rows within this block so that they are indexed by $w \equiv a_1 + a_2 \beta \pmod{q_1}$, we obtain the transformation $|w\rangle \to |b_1\rangle$ with amplitude $q_1^{-1/2} \exp(2\pi i\, w b_1 / q_1)$; that is, the transformation given by the unitary matrix with the $(w, b_1)$ entry equal to $q_1^{-1/2} \exp(2\pi i\, w b_1 / q_1)$, which is $A_{q_1}$. The matrix $V$ can similarly be rearranged to obtain the matrix $A_{q_2}$.

We also need to show how to find a smooth $q$ that lies between $n$ and $2n$ in polynomial time. There are actually smooth numbers much closer to $n$ than this, but this is all we need. It is not known how to find smooth numbers very close to $n$ in polynomial time.

Lemma 3.2 Given $n$, there is a polynomial-time algorithm to find a number $q$ with $n \le q \le 2n$ such that no prime power larger than $c \log q$ divides $q$, for some constant $c$ independent of $n$.

Proof: To find such a $q$, multiply the primes $2, 3, 5, 7, 11, \ldots$ until the product is larger than $n$. Now, if this product is larger than $2n$, divide it by the largest prime that keeps the number larger than $n$. This produces the desired $q$. There is always a prime between $t$ and $2t$ [17, Theorem 418], so $n \le q \le 2n$. The prime number theorem [17, Theorem 6] and some calculation show that the largest prime dividing $q$ is of size $O(\log n)$.

Note that if we are using Coppersmith's transformation using the $2^k$th roots of unity, we set $q = 2^k$ with $n \le 2^k \le 2n$.

4 Discrete log: the easy case

The discrete log problem is: given a prime $p$, a generator $g$ of the multiplicative group $\pmod p$ and an $x$, find an $a$ such that $g^a \equiv x \pmod p$. We will start by giving a polynomial-time algorithm for discrete log on a quantum computer in the case that $p-1$ is smooth. This algorithm is analogous to the algorithm in Simon's paper [29], with the group $Z_2^n$ replaced by $Z_{p-1}$. The smooth case is not in itself an interesting accomplishment, since there are already polynomial time algorithms for classical computers in this case [25]; however, explaining this case is easier than explaining either the general case of discrete log or the factoring algorithm, and as the three algorithms are similar, this example will illuminate how the more complicated algorithms work.

We will start our algorithm with $g$, $x$, and $p$ on the tape (i.e., in the memory of our machine). We are trying to compute $a$ such that $g^a \equiv x \pmod p$. Since we will never delete them, $g$, $x$, and $p$ are constants, and we will specify a state of our machine by the other contents of the tape.

The algorithm starts out by "choosing" numbers $k$ and $l \pmod{p-1}$ uniformly, so the state of the machine after this step is

$$\frac{1}{p-1} \sum_{k=0}^{p-2} \sum_{l=0}^{p-2} |k, l\rangle \,. \qquad (4.11)$$

The algorithm next computes $g^k x^{-l} \pmod p$ reversibly, so we must keep the values $k$ and $l$ on the tape. The state of the machine is now

$$\frac{1}{p-1} \sum_{k=0}^{p-2} \sum_{l=0}^{p-2} |k, l, g^k x^{-l} \bmod p\rangle \,. \qquad (4.12)$$

What we do now is use the transformation $A_{p-1}$ to map $|k\rangle \to |c\rangle$ with amplitude $(p-1)^{-1/2} \exp\!\big( 2\pi i\, kc/(p-1) \big)$ and $|l\rangle \to |d\rangle$ with amplitude $(p-1)^{-1/2} \exp\!\big( 2\pi i\, ld/(p-1) \big)$. As was discussed in the previous section, this is a unitary transformation, and since $p-1$ is smooth it can be accomplished
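The construction in the proof of Lemma 3.2 above can be sketched directly. A toy illustration with an assumed fixed list of small primes (a real implementation would generate primes as needed; the function name is mine):

```python
def smooth_q(n):
    """Find q with n <= q <= 2n whose prime factors are all small, as in
    the proof of Lemma 3.2: multiply primes until the product exceeds n,
    then divide one prime back out if the product overshoots 2n."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # enough for small n
    q, used = 1, []
    for pr in primes:
        q *= pr
        used.append(pr)
        if q > n:
            break
    if q > 2 * n:
        # divide by the largest prime that keeps q larger than n;
        # dividing by 2 always qualifies, since q > 2n implies q // 2 > n
        for pr in sorted(used, reverse=True):
            if q // pr > n:
                q //= pr
                break
    return q
```

For example, `smooth_q(100)` multiplies $2 \cdot 3 \cdot 5 \cdot 7 = 210$, overshoots $2n = 200$, and divides by 2 (7, 5, and 3 would drop the product below 100), giving $q = 105 = 3 \cdot 5 \cdot 7$.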


¤ 56B ¡ !(!'!N

From results on reversible computation [3, 19, 31], If we know a factorization  where

*

? ?

F_ $4¨- .) £¢ ¤ ¥

we can compute any polynomial time function  and where and all of the are of poly-

¨& 

 as long as we keep the input, , on the ma- nomial size we will show how to build the transformation

 ¨&  "  " 

chine. To erase and replace it with we need in polynomial time by composing the . For this, we

=

in addition that  is one-to-one and that is com- ®rst need a lemma on quantum computation.

¨& 

putable in polynomial time from  ; i.e., that 4:6

Lemma 3.1 Suppose the matrix ¤ is a block-diagonal 

both  and are polynomial-time computable.

¦¥ )

) unitary matrix composed of identical unitary

¥

) ) ¤

Fact 2: Any polynomial size unitary matrix can be ap- matrices ( along the diagonal and 0's everywhere ¤

proximated using a polynomial number of ele- else. Suppose further that the state transformation ( can

N ¨§¤

mentary unitary transformations [10, 5, 33] and be done in time ( on a . Then

'

N ¨§¤ ?! ¨ "  ) thus can be approximated in polynomial time on a ¤ the matrix can be done in ( time on a

quantum computer. Further, this approximation is quantum Turing machine, where § is a constant.

good enough so as to introduce at most a bounded

¤ ¤

probability of error into the results of the compu- We will call this matrix the direct sum of copies of (

¤ ¤  ¤ ¤

tation. and use the notation ( . This matrix is the

§

©¥

¤ ¨ ¨

tensor product of ( and , where is the identity § 3 Building unitary transformations matrix. §

Proof: Suppose that we have a number on our tape. We

Since quantum computation deals with unitary transfor-

6

W W ¤

can reversibly compute and  from where

mations, it is helpful to be able to build certain useful uni-

6

)LW !,W

 . This computation erases from our tape and

tary transformations. In this section we give some tech-

6 6

W W W

replaces it with and  . Now tells in which block

niques for constructing unitary transformations on quan-

W

the row is contained, and  tells which row of the ma-

tum machines, which will result in our showing how to

¤ trix within that block is the row . We can then apply ( to

construct one particular unitary transformation in polyno-

W W

   to obtain (erasing in the process). Now, combin-

mial time. These transformations will generally be given

W 6 # ¤ )#W 6]!©  ing and  to obtain gives the result of

as matrices, with both rows and columns indexed by states. ¤ applied to (with the right amplitude). The computation

These states will correspond to representations of integers

¤ N ¨ ¤ ( of ( takes time, and the rest of the computation is

on the computer; in particular, the rows and columns will

" ) !  "  polynomial in  .

be indexed beginning with 0 unless otherwise speci®ed.
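The index bookkeeping in this proof can be checked numerically. The sketch below (plain Python; the particular 2 × 2 unitary U and the test vector are arbitrary illustrations, not taken from the paper) builds the direct sum ⊕^k U as the tensor product I_k ⊗ U and confirms that applying U only to the low part i2 of each index i = n·i1 + i2 reproduces it:

```python
def kron(A, B):
    """Kronecker (tensor) product of two square matrices given as lists of lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def identity(k):
    return [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# An arbitrary 2 x 2 unitary U for illustration (a real Hadamard-type rotation).
s = 2 ** -0.5
U = [[s, s], [s, -s]]

k, n = 3, 2                      # three identical blocks, each of size two
M = kron(identity(k), U)         # the direct sum of k copies of U

# Applying M directly agrees with decomposing i = n*i1 + i2 and applying U to i2.
v = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
direct = matvec(M, v)
blockwise = []
for i1 in range(k):              # i1 picks the block, i2 the row inside it
    blockwise.extend(matvec(U, v[n * i1:n * i1 + n]))
assert all(abs(direct[i] - blockwise[i]) < 1e-12 for i in range(n * k))
```

Scaled up, this is why the direct sum of many copies of U costs only one application of U plus the polynomial bookkeeping that splits and rejoins the index.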

V

A tool we will use repeatedly in this paper is the follow- We now show how to obtain " for smooth . We  ing unitary transformation, the summation of which gives will decompose " into a product of a polynomial num-

ber of unitary transformations, all of which are performable C UT;

a Fourier transform. Consider a number with V 9

in polynomial time; this enables us to construct " in

for some where the number of bits of is polynomial. We

¡

6

¤ 

will perform the transformationthat takes the state ¤ to the polynomial time. Suppose that we have with

6

F_ $4¨- .) ¤U¥ "  ¤ ¦§

state  . What we will do is represent ,

476

 §

E where by rearranging the rows and columns of we ob-

¡

¥

£

¨-T¥H?I2 #3  /

#C¤ (3.6) "   ¦

6

 O

g' tain and rearranging the rows and columns of

I



" 

O / =?>

we obtain  . As long as these rearrangements of the

¨& 4.£#3 I §

That is, we apply the unitary matrix whose 'th entry rows and columns of ¦ and are performable in polyno-

6

£

O¥¤

¨AT¥H?I2 F#   is . This transformation is at the heart of ¡

 mial time (i.e., given row , we can ®nd in polynomial time

I

" ¡ 

our algorithms, and we will call this matrix . Since we the row ( to which it is taken) and the inverse operations

" will use  for of exponential size, we must show how are also performable in polynomial time, then by using the

this transformation can be done in polynomial time. In fact, lemma above and recursion we can obtain a polynomial-

" 

we will only be able to do this for smooth numbers , that time way to perform on a quantum computer.

§ "V ¤

is, ones with small prime factors. In this paper, we will deal We now need to de®ne ¦ and and check that

¦§ ¦ §

with smooth numbers which contain no prime power fac- . To de®ne and we need some preliminary def-

'

6 6

¨  " &  § ¤ 

tor that is larger than for some ®xed . It is also initions. Recall that  with and relatively

£

¨-T5H?I!2  £ ¨! L"%$ 

possible to do this transformation in polynomial time for all prime. Let ¤ . Let be the number

6

£B¦UCV¨! L"%$ £M¦ £V¥ ¨! #"$

smooth numbers ; Coppersmith shows how to do this for such that and  . Such a

¤'T¥ using what is essentially the fast Fourier transform, number exists by the Chinese remainder theorem, and can

and that this substantially reduces the number of operations be computed in polynomial time. We will decompose row

# § ¤ W,6 ! W  required to factor [8]. and column indices , and as follows:  ,
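For small q the matrix A_q can be written down directly from its definition and checked by brute force; the sketch below (plain Python, purely illustrative — it uses none of the polynomial-time decomposition described here) verifies that A_q is unitary and that every entry has magnitude q^{-1/2}:

```python
import cmath

def fourier_matrix(q):
    """The q x q matrix A_q with (a, c) entry q**-0.5 * exp(2*pi*i*a*c/q)."""
    s = q ** -0.5
    return [[s * cmath.exp(2j * cmath.pi * a * c / q) for c in range(q)]
            for a in range(q)]

def times_conjugate_transpose(M):
    n = len(M)
    return [[sum(M[i][t] * M[j][t].conjugate() for t in range(n))
             for j in range(n)] for i in range(n)]

q = 6                            # a small smooth number, 6 = 2 * 3
A = fourier_matrix(q)

# Unitarity: A times its conjugate transpose is the identity (up to rounding),
# so the squared amplitudes of the image of any superposition still sum to one.
P = times_conjugate_transpose(A)
for i in range(q):
    for j in range(q):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-9

# Row a lists the amplitudes A_q assigns to each |c>; all have magnitude q**-0.5.
assert all(abs(abs(amp) - q ** -0.5) < 1e-12 for amp in A[2])
```

Of course, tabulating A_q entry by entry takes time exponential in the number of bits of q; the point of the decomposition above is that for smooth q the same transformation factors into polynomially many small unitary pieces.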


toring, is in use. We show that these problems can be solved in BQP.

Currently, nobody knows how to build a quantum computer, although it seems as though it could be possible within the laws of quantum mechanics. Some suggestions have been made as to possible designs for such computers [30, 22, 23, 12], but there will be substantial difficulty in building any of these [18, 32]. Even if it is possible to build small quantum computers, scaling up to machines large enough to do interesting computations could present fundamental difficulties. It is hoped that this paper will stimulate research on whether it is feasible to actually construct a quantum computer.

Even if no quantum computer is ever built, this research does illuminate the problem of simulating quantum mechanics on a classical computer. Any method of doing this for an arbitrary Hamiltonian would necessarily be able to simulate a quantum computer. Thus, any general method for simulating quantum mechanics with at most a polynomial slowdown would lead to a polynomial algorithm for factoring.

2 Quantum computation

In this section we will give a brief introduction to quantum computation, emphasizing the properties that we will use. For a more complete overview I refer the reader to Simon's paper in these proceedings [29] or to earlier papers on quantum computational complexity theory [5, 33].

In quantum physics, an experiment behaves as if it proceeds down all possible paths simultaneously. Each of these paths has a complex amplitude determined by the physics of the experiment. The probability of any particular outcome of the experiment is proportional to the square of the absolute value of the sum of the amplitudes of all the paths leading to that outcome. In order to sum over a set of paths, the outcomes have to be identical in all respects, i.e., the universe must be in the same state.

A quantum computer behaves in much the same way. The computation proceeds down all possible paths at once, and each path has associated with it a complex amplitude. To determine the probability of any final state of the machine, we add the amplitudes of all the paths which reach that final state, and then square the absolute value of this sum.

An equivalent way of looking at this process is to imagine that the machine is in some superposition of states at every step of the computation. We will represent this superposition of states as

    Σ_i a_i |S_i⟩,        (2.1)

where the amplitudes a_i are complex numbers such that Σ_i |a_i|^2 = 1 and each |S_i⟩ is a basis state of the machine; in a quantum Turing machine, a basis state is defined by what is written on the tape and by the position and state of the head. In a quantum circuit, a basis state is defined by the values of the signals on all the wires at some level of the circuit. If the machine is examined at a particular step, the probability of seeing basis state |S_i⟩ is |a_i|^2; however, by the Heisenberg uncertainty principle, looking at the machine during the computation will disturb the rest of the computation.

The laws of quantum mechanics only permit unitary transformations of the state. A unitary matrix is one whose conjugate transpose is equal to its inverse, and requiring state transformations to be represented by unitary matrices ensures that the probabilities of obtaining all possible outcomes will add up to one. Further, the definitions of quantum Turing machine and quantum circuit only allow local unitary transformations, that is, unitary transformations on a fixed number of bits.

Perhaps an example will be informative at this point. Suppose our machine is in the superposition of states

    2^{-1/2} |000⟩ + (1/2) |100⟩ - (1/2) |110⟩        (2.2)

and we apply the unitary transformation

          |00⟩   |01⟩   |10⟩   |11⟩
    |00⟩   1/2    1/2    1/2    1/2
    |01⟩   1/2   -1/2    1/2   -1/2        (2.3)
    |10⟩   1/2    1/2   -1/2   -1/2
    |11⟩   1/2   -1/2   -1/2    1/2

to the last two bits of our state. That is, we multiply the last two bits of the components of the vector (2.2) by the matrix (2.3). The machine will then go to the superposition of states

    (1/(2 · 2^{1/2})) (|000⟩ + |001⟩ + |010⟩ + |011⟩) + (1/2) (|110⟩ + |111⟩).        (2.4)

Notice that the result would have been different had we started with the superposition of states

    2^{-1/2} |000⟩ + (1/2) |100⟩ + (1/2) |110⟩,        (2.5)

which has the same probabilities of being in any particular configuration if it is observed.

We now give certain properties of quantum computation that will be useful. These facts are not immediately apparent from the definition of quantum Turing machine or quantum circuit, and they are very useful for constructing algorithms for quantum machines.

Fact 1: A deterministic computation is performable on a quantum computer if and only if it is reversible.
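This interference computation can be reproduced numerically. The sketch below (plain Python; the 3-bit amplitudes are illustrative choices in the spirit of equations (2.2) and (2.5)) applies a 4 × 4 matrix of ±1/2 entries to the last two bits of two superpositions that give identical probabilities when observed, and checks that the results nonetheless differ:

```python
# 3-bit states indexed 0..7 (bit string b0 b1 b2 -> index 4*b0 + 2*b1 + b2).
s = 2 ** -0.5

# Two superpositions whose configurations have identical observation probabilities.
psi = [0.0] * 8
psi[0b000], psi[0b100], psi[0b110] = s, 0.5, -0.5
phi = [0.0] * 8
phi[0b000], phi[0b100], phi[0b110] = s, 0.5, 0.5

# A 2-bit unitary with all entries +-1/2 (a Hadamard-type rotation on each bit).
H2 = [[0.5, 0.5, 0.5, 0.5],
      [0.5, -0.5, 0.5, -0.5],
      [0.5, 0.5, -0.5, -0.5],
      [0.5, -0.5, -0.5, 0.5]]

def apply_last_two_bits(M, state):
    """Apply a 4x4 unitary to the last two bits of a 3-bit state vector."""
    out = [0.0] * 8
    for b0 in (0, 1):                         # the first bit is left untouched
        for row in range(4):                  # new value of the last two bits
            out[4 * b0 + row] = sum(M[row][col] * state[4 * b0 + col]
                                    for col in range(4))
    return out

out_psi = apply_last_two_bits(H2, psi)
out_phi = apply_last_two_bits(H2, phi)

# Same probabilities before the transformation...
assert all(abs(a * a - b * b) < 1e-12 for a, b in zip(psi, phi))
# ...but genuinely different superpositions afterwards, thanks to interference.
assert any(abs(a - b) > 0.1 for a, b in zip(out_psi, out_phi))
# Probabilities still sum to one, since the transformation is unitary.
assert abs(sum(a * a for a in out_psi) - 1) < 1e-12
```

Observing either starting state gives |000⟩, |100⟩, or |110⟩ with probabilities 1/2, 1/4, 1/4, yet the unitary evolution separates them: amplitudes, not probabilities, are what add along computational paths.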

of exponential search problems. These are problems which may require the search of an exponential size space to find the solution, but for which the solution, once found, may be verified in polynomial time (possibly with a polynomial amount of additional supporting evidence). We will also discuss two other traditional complexity classes. One is BPP, which are problems which can be solved with high probability in polynomial time, given access to a random number generator. The other is P^{#P}, which are those problems which could be solved in polynomial time if sums of exponentially many terms could be computed efficiently (where these sums must satisfy the requirement that each term is computable in polynomial time). These classes are related as follows: P ⊆ BPP ⊆ P^{#P} ⊆ PSPACE and P ⊆ NP ⊆ P^{#P}. The relationship of BPP and NP is not known.

The question of whether using quantum mechanics in a computer allows one to obtain more computational power has not yet been satisfactorily answered. This question was addressed in [11, 6, 7], but it was not shown how to solve any problem in quantum polynomial time that was not known to be solvable in BPP. Recent work on this problem was stimulated by Bernstein and Vazirani's paper [5], which laid the foundations of the theory of quantum computational complexity. One of the results contained in this paper was an oracle problem (a problem involving a "black box" subroutine, i.e., a function that the computer is allowed to perform, but for which no code is accessible) which can be done in polynomial time on a quantum Turing machine but requires super-polynomial time on a classical computer. This was the first indication, other than the fact that nobody knew how to simulate a quantum computer on a classical computer without an exponential slowdown, that quantum computation might obtain a greater than polynomial speedup over classical computation augmented with a random number generator. This result was improved by Simon [29], who gave a much simpler construction of an oracle problem which takes polynomial time on a quantum computer and requires exponential time on a classical computer. Indeed, by viewing Simon's oracle as a subroutine, this result becomes a promise problem which takes polynomial time on a quantum computer and looks as if it would be very difficult on a classical computer (a promise problem is one where the input is guaranteed to satisfy some condition). The algorithm for the "easy case" of discrete log given in this paper is directly analogous to Simon's algorithm with the group (Z_2)^n replaced by the group Z_{p-1}; I was only able to discover this algorithm after seeing Simon's paper.

In another result in Bernstein and Vazirani's paper, a particular class of quantum Turing machine was rigorously defined and a universal quantum Turing machine was given which could simulate any other quantum Turing machine of this class. Unfortunately, it was not clear whether these quantum Turing machines could simulate other classes of quantum Turing machines, so this result was not entirely satisfactory. Yao [33] has remedied the situation by showing that quantum Turing machines can simulate, and be simulated by, uniform families of polynomial size quantum circuits, with at most polynomial slowdown. He has further defined quantum Turing machines with k heads and showed that these machines can be simulated with at most a polynomial factor of slowdown. This seems to show that the class of problems which can be solved in polynomial time on one of these machines, possibly with a bounded probability ε < 1/2 of error, is reasonably robust. This class is called BQP in analogy to the classical BPP, which are those problems which can be solved with a bounded probability of error on a probabilistic Turing machine. This class BQP could be considered the class of problems that are efficiently solvable on a quantum Turing machine.

Since BQP ⊆ P^{#P} ⊆ PSPACE [5], any non-relativized proof that BQP is strictly larger than BPP would imply the structural complexity result BPP ≠ PSPACE, which is not yet proven. In view of this difficulty, several approaches come to mind; one is showing that BQP ⊆ BPP would lead to a collapse of classical complexity classes which are believed to be different. A second approach is to prove results relative to an oracle. In Bennett et al. [4] it is shown that relative to a random oracle, it is not the case that NP ⊆ BQP. This proof in fact suggests that a quantum computer cannot invert one-way functions, but only proves this for one-way oracles, i.e., "black box" functions given as a subroutine which the quantum computer is not allowed to look inside. Such oracle results have been misleading in the past, most notably in the case of IP = PSPACE [15, 28]. A third approach, which we take, is to solve in BQP some well-studied problem for which no polynomial time algorithm is known. This shows that the extra power conferred by quantum interference is at least hard to achieve using classical computation. Both Bernstein and Vazirani [5] and Simon [29] also gave polynomial time algorithms for problems which were not known to be in BPP, but these problems were invented especially for this purpose, although Simon's problem does not appear contrived and could conceivably be useful.

Discrete logarithms and integer factoring are two number-theory problems which have been studied extensively but for which no polynomial-time algorithms are known [16, 20, 21, 26]. In fact, these problems are so widely believed to be hard that cryptosystems based on their hardness have been proposed, and the RSA public key cryptosystem [27], based on the hardness of fac-

In: Proceedings, 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, November 20–22, 1994, IEEE Computer Society Press, pp. 124–134.

Algorithms for Quantum Computation: Discrete Logarithms and Factoring

Peter W. Shor
AT&T Bell Labs
Room 2D-149
600 Mountain Ave.
Murray Hill, NJ 07974, USA

Abstract

A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with an increase in computation time of at most a polynomial factor. It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. (We thus give the first examples of quantum cryptanalysis.)

1 Introduction

Since the discovery of quantum mechanics, people have found the behavior of the laws of probability in quantum mechanics counterintuitive. Because of this behavior, quantum mechanical phenomena behave quite differently than the phenomena of classical physics that we are used to. Feynman seems to have been the first to ask what effect this has on computation [13, 14]. He gave arguments as to why this behavior might make it intrinsically computationally expensive to simulate quantum mechanics on a classical (or von Neumann) computer. He also suggested the possibility of using a computer based on quantum mechanical principles to avoid this problem, thus implicitly asking the converse question: by using quantum mechanics in a computer can you compute more efficiently than on a classical computer?

Other early work in the field of quantum mechanics and computing was done by Benioff [1, 2]. Although he did not ask whether quantum mechanics conferred extra power to computation, he did show that a Turing machine could be simulated by the reversible unitary evolution of a quantum process, which is a necessary prerequisite for quantum computation. Deutsch [9, 10] was the first to give an explicit model of quantum computation. He defined both quantum Turing machines and quantum circuits and investigated some of their properties.

The next part of this paper discusses how quantum computation relates to classical complexity classes. We will thus first give a brief intuitive discussion of complexity classes for those readers who do not have this background. There are generally two resources which limit the ability of computers to solve large problems: time and space (i.e., memory). The field of analysis of algorithms considers the asymptotic demands that algorithms make for these resources as a function of the problem size. Theoretical computer scientists generally classify algorithms as efficient when the number of steps of the algorithms grows as a polynomial in the size of the input. The class of problems which can be solved by efficient algorithms is known as P. This classification has several nice properties. For one thing, it does a reasonable job of reflecting the performance of algorithms in practice (although an algorithm whose running time is the tenth power of the input size, say, is not truly efficient). For another, this classification is nice theoretically, as different reasonable machine models produce the same class P. We will see this behavior reappear in quantum computation, where different models for quantum machines will vary in running times by no more than polynomial factors.

There are also other computational complexity classes discussed in this paper. One of these is PSPACE, which are those problems which can be solved with an amount of memory polynomial in the input size. Another important complexity class is NP, which intuitively is the class
