Quantum Singular Value Transformation & Its Algorithmic Applications
András Gilyén
Institute for Quantum Information and Matter
The Quantum Wave in Computing Boot Camp, Berkeley, 28th January 2020

Quantum walks
Continuous-time quantum / random walks

Laplacian of a weighted graph
Let G = (V, E) be a finite simple graph with non-negative edge weights w : E → ℝ₊. The Laplacian is defined as
  L_{uv} = w_{uv} for u ≠ v,  and  L_{uu} = −∑_{v≠u} w_{uv}.

Continuous-time walks — evolution of the state:
  (d/dt) p_u(t) = ∑_{v∈V} L_{uv} p_v(t)  ⟹  p(t) = e^{tL} p(0)

Continuous-time quantum walks — evolution of the state:
  i (d/dt) ψ_u(t) = ∑_{v∈V} L_{uv} ψ_v(t)  ⟹  ψ(t) = e^{−itL} ψ(0)
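The two evolutions can be compared numerically. The sketch below (a toy check assuming NumPy is available; the 3-vertex path graph is an illustrative choice, not from the slides) verifies that e^{tL} preserves total probability while e^{−itL} preserves the 2-norm of the amplitudes.

```python
import numpy as np

def expm_sym(M, s):
    """e^{s*M} for a real symmetric matrix M, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(s * vals)) @ vecs.conj().T

# Laplacian of a unit-weight path on 3 vertices:
# L_uv = w_uv for u != v, L_uu = -sum_{v != u} w_uv
L = np.array([[-1.,  1.,  0.],
              [ 1., -2.,  1.],
              [ 0.,  1., -1.]])

t = 0.7
p_t = expm_sym(L, t) @ np.array([1., 0., 0.])            # classical walk
psi_t = expm_sym(L, -1j * t) @ np.array([1., 0., 0.])    # quantum walk

assert np.isclose(p_t.sum(), 1)                # probability is conserved
assert np.isclose(np.linalg.norm(psi_t), 1)    # the 2-norm is conserved
```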
Discrete-time quantum / random walks

Discrete-time Markov chain on a weighted graph
Transition probability in one step (column-stochastic matrix):
  P_{vu} = Pr(step to v | being at u) = w_{vu} / ∑_{v′∈V} w_{v′u}

A unitary implementing the update:
  U : |0⟩|u⟩ ↦ ∑_{v∈V} √P_{vu} |v⟩|u⟩

How to erase history? The Szegedy quantum walk operator:
  W′ := U† · SWAP · U
  W := U† · SWAP · U · ((2|0⟩⟨0| ⊗ I) − I)
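As a small numerical sketch (assuming NumPy; the random weighted graph is an arbitrary illustration), one can check that P is column-stochastic, and that the update map sends the orthonormal states |0⟩|u⟩ to orthonormal states, so it indeed extends to a unitary U.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
w = rng.random((n, n)); w = (w + w.T) / 2    # symmetric non-negative weights
P = w / w.sum(axis=0)                        # P[v, u] = w_vu / sum_v' w_v'u

assert np.allclose(P.sum(axis=0), 1)         # column-stochastic

# U acts as |0>|u> -> sum_v sqrt(P_vu) |v>|u>; its action on the n states
# |0>|u> is an isometry, i.e., the image columns are orthonormal.
V = np.zeros((n * n, n))
for u in range(n):
    for v in range(n):
        V[v * n + u, u] = np.sqrt(P[v, u])   # amplitude of |v>|u>
assert np.allclose(V.T @ V, np.eye(n))       # extendable to a unitary U
```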
Understanding Szegedy's quantum walk operator

For simplicity let us assume P_{uv} = P_{vu}, i.e., the total weight at each vertex is constant.

A block-encoding of the Markov chain: (⟨0| ⊗ I) W′ (|0⟩ ⊗ I) = P
Proof:
  ⟨0|⟨u| W′ |0⟩|v⟩ = ⟨0|⟨u| U† · SWAP · U |0⟩|v⟩
    = (∑_{v′∈V} √P_{v′u} |v′⟩|u⟩)† · SWAP · (∑_{u′∈V} √P_{u′v} |u′⟩|v⟩)
    = √P_{vu} √P_{uv} = P_{uv}.

Multiple steps of the quantum walk: (⟨0| ⊗ I) W^k (|0⟩ ⊗ I) = T_k(P)
[Chebyshev polynomials: T_k(x) = cos(k arccos(x)), T_{k+1}(x) = 2x T_k(x) − T_{k−1}(x)]

Proof: Proceed by induction; observe T_0(P) = I ✓, T_1(P) = P ✓, and
  (⟨0| ⊗ I) W^{k+1} (|0⟩ ⊗ I)
    = (⟨0| ⊗ I) W′ ((2|0⟩⟨0| ⊗ I) − I) W^k (|0⟩ ⊗ I)
    = (⟨0| ⊗ I) W′ (2|0⟩⟨0| ⊗ I) W^k (|0⟩ ⊗ I) − (⟨0| ⊗ I) W^{k−1} (|0⟩ ⊗ I)
    = 2P · T_k(P) − T_{k−1}(P),
where the second equality uses (W′)² = I, so that W′ W^k = ((2|0⟩⟨0| ⊗ I) − I) W^{k−1}.
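Both identities can be verified numerically. The following sketch (assuming NumPy; a unit-weight 4-cycle is chosen so that P is symmetric) builds a concrete U by completing the update isometry to a unitary, forms W′ and W, and checks the block-encoding of P as well as the Chebyshev identity for several k.

```python
import numpy as np

n = 4
# Unit-weight cycle: every vertex has total weight 2, so P is symmetric.
w = np.zeros((n, n))
for u in range(n):
    w[u, (u + 1) % n] = w[(u + 1) % n, u] = 1.0
P = w / w.sum(axis=0)

# Isometry |0>|u> -> sum_v sqrt(P_vu)|v>|u>, completed to a unitary U.
V = np.zeros((n * n, n))
for u in range(n):
    for v in range(n):
        V[v * n + u, u] = np.sqrt(P[v, u])
Q, R = np.linalg.qr(V, mode='complete')
Q[:, :n] *= np.sign(np.diag(R)[:n])   # fix signs so the first n columns equal V
U = Q

SWAP = np.zeros((n * n, n * n))       # SWAP: |v>|u> -> |u>|v>
for u in range(n):
    for v in range(n):
        SWAP[u * n + v, v * n + u] = 1.0

proj = np.zeros(n * n); proj[:n] = 1.0   # |0><0| (x) I on the first register
Wp = U.T @ SWAP @ U                      # W' = U^dag SWAP U (real U here)
W = Wp @ np.diag(2 * proj - 1)           # W = W' ((2|0><0| (x) I) - I)

def topleft(M):
    return M[:n, :n]                     # (<0| (x) I) M (|0> (x) I)

assert np.allclose(topleft(Wp), P)       # block-encoding of P
Tkm1, Tk = np.eye(n), P.copy()           # Chebyshev recurrence on P
for k in range(2, 6):
    Tkm1, Tk = Tk, 2 * P @ Tk - Tkm1
    assert np.allclose(topleft(np.linalg.matrix_power(W, k)), Tk)
```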
Are we happy with Chebyshev polynomials?

Linear combination of (non-)unitary matrices [Childs & Wiebe '12, Berry et al. '15]
Suppose that V = ∑_k |k⟩⟨k| ⊗ U^k, and Q : |0⟩ ↦ ∑_i √q_i |i⟩ for q_i ∈ [0, 1]; then
  (⟨0|Q† ⊗ I) V (Q|0⟩ ⊗ I) = ∑_k q_k U^k.
[Circuit: Q on the control register, controlled powers U, U², …, U^{2^{n−1}} on the target, then Q†.]

In particular, the top-left corner of ∑_k q_k W^k is ∑_k q_k T_k(P).

Corollary: Quantum fast-forwarding (Apers & Sarlette 2018)
We can implement a unitary V such that (⟨0| ⊗ I) V (|0⟩ ⊗ I) ≈_ε P^t, using only O(√(t log(1/ε))) quantum walk steps.
(Proof: x^t is ε-approximated by its Chebyshev expansion ∑_k q_k T_k(x) truncated at degree O(√(t log(1/ε))).)
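The degree bound in the corollary can be sanity-checked with classical Chebyshev expansions (a sketch assuming NumPy; the concrete t, ε, and the constant inside the square root are illustrative): truncating the Chebyshev series of x^t at degree d ≈ √(2t ln(4/ε)) already gives an ε-uniform approximation on [−1, 1], with d far below t.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t, eps = 100, 1e-6
d = int(np.ceil(np.sqrt(2 * t * np.log(4 / eps))))   # ~ sqrt(t log(1/eps)) << t

coeffs = C.poly2cheb([0] * t + [1])   # exact Chebyshev coefficients of x^t
trunc = coeffs.copy()
trunc[d + 1:] = 0                     # drop all terms of degree > d

xs = np.linspace(-1, 1, 2001)
err = np.max(np.abs(C.chebval(xs, trunc) - xs ** t))
assert d < t and err < eps            # degree ~56 suffices for t = 100
```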
Szegedy quantum walk based search

Suppose we have some unknown set of marked vertices M ⊂ V.

Quadratically faster hitting
Hitting time HT: the expected time to hit a marked vertex starting from the stationary distribution π. Starting from the quantum state ∑_{v∈V} √π_v |v⟩ we can
- detect the presence of marked vertices (M ≠ ∅) in time O(√HT) (Szegedy 2004)
- find a marked vertex in time O(1/√(δε)) (Magniez, Nayak, Roland, Sántha 2006)
- find a marked vertex in time Õ(√HT) (Ambainis, G, Jeffery, Kokainis 2019)

Starting from arbitrary distributions
Starting from a distribution σ on some vertices we can
- detect marked vertices in square-root commute time O(√C_{σ,M}) (Belovs 2013)
- find a marked vertex in time Õ(√C_{σ,M}) (Piddock; Apers, G, Jeffery 2019)
Walks on the Johnson graph (Sántha arXiv:0808.0059)
Vertices: {S ⊂ [N] : |S| = K}; Edges: {(S, S′) : |S △ S′| = 2}

Element Distinctness
- Black box: computes f on inputs corresponding to elements of [n]
- Question: are there (i, j) ∈ [n] × [n] with i ≠ j such that f(i) = f(j)?
- Query complexity: O(n^{2/3}) (Ambainis 2003), Ω(n^{2/3}) (Aaronson & Shi 2001)

Triangle Finding [(2014) non-walk algorithm by Le Gall: Õ(n^{5/4})]
- Black box: for any pair (u, v) ∈ V × V tells whether there is an edge uv
- Question: is there any triangle in G?
- Query complexity: O(n^{13/10}) (Magniez, Sántha, Szegedy 2003)

Matrix Product Verification
- Black box: tells any entry of the n × n matrices A, B or C
- Question: does AB = C hold?
- Query complexity: O(n^{5/3}) (Buhrman, Špalek 2004)

Block-encodings and Quantum Singular Value Transformation
Block-encoding
A way to represent large matrices on a quantum computer efficiently:
  U = [[A, ·], [·, ·]]   ⟺   A = (⟨0|^{⊗a} ⊗ I) U (|0⟩^{⊗a} ⊗ I)

One can efficiently construct block-encodings of
- an efficiently implementable unitary U,
- a sparse matrix with efficiently computable elements,
- a matrix stored in a clever data structure in a QRAM,
- a density operator ρ, given a unitary preparing its purification,
- a POVM operator M, given that we can sample from a random variable with mean Tr(ρM).

Implementing arithmetic operations on block-encoded matrices
- Given block-encodings A_j we can implement convex combinations.
- Given block-encodings A, B we can implement a block-encoding of AB.
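To make the definition concrete, here is a sketch (assuming NumPy) that block-encodes an arbitrary subnormalized matrix A into a unitary via the standard one-ancilla dilation U = [[A, √(I−AA†)], [√(I−A†A), −A†]] — one of many constructions, shown purely for illustration, in the real case.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
A /= np.linalg.norm(A, 2) * 1.25      # subnormalize: ||A|| = 0.8 < 1

# Build sqrt(I - A A^T) and sqrt(I - A^T A) from the SVD A = u diag(s) vh.
u, s, vh = np.linalg.svd(A)
sq = np.sqrt(1 - s ** 2)
U = np.block([[A,                       u @ np.diag(sq) @ u.T],
              [vh.T @ np.diag(sq) @ vh, -A.T                 ]])

assert np.allclose(U @ U.T, np.eye(2 * n))   # U is unitary
assert np.allclose(U[:n, :n], A)             # its top-left block is A
```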
Example: Block-encoding sparse matrices

Suppose that A is s-sparse and |A_{ij}| ≤ 1 for all indices i, j. Given "sparse access" we can efficiently implement unitaries preparing "rows"
  R : |0⟩|0⟩|i⟩ ↦ |0⟩ ∑_k ((√A_{ik})*/√s) |i⟩|k⟩ + |1⟩|i⟩|garbage⟩,
and "columns"
  C : |0⟩|0⟩|j⟩ ↦ |0⟩ ∑_ℓ (√A_{ℓj}/√s) |ℓ⟩|j⟩ + |2⟩|j⟩|garbage⟩.
They form a block-encoding of A/s:
  ⟨0|⟨0|⟨i| R†C |0⟩|0⟩|j⟩ = (R|0⟩|0⟩|i⟩)† · (C|0⟩|0⟩|j⟩)
    = (∑_k ((√A_{ik})*/√s) |i⟩|k⟩)† (∑_ℓ (√A_{ℓj}/√s) |ℓ⟩|j⟩) = A_{ij}/s,
since the flagged |1⟩ and |2⟩ parts are orthogonal, and only the term k = j, ℓ = i survives.
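The inner-product computation above can be replayed classically. The sketch below (assuming NumPy, restricting to real non-negative entries so the square roots are unambiguous, and modelling the flagged parts by a single hypothetical "garbage" basis state) confirms ⟨row_i | col_j⟩ = A_{ij}/s.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 5, 5                      # every row/column has at most s = n entries
A = rng.random((n, n))           # non-negative entries with |A_ij| <= 1

def idx(flag, r1, r2):           # basis index in flag (3) x reg1 (n) x reg2 (n)
    return (flag * n + r1) * n + r2

def row_state(i):                # R|0>|0>|i>
    v = np.zeros(3 * n * n)
    for k in range(n):
        v[idx(0, i, k)] = np.sqrt(A[i, k] / s)
    v[idx(1, i, 0)] = np.sqrt(1 - A[i, :].sum() / s)   # flagged "garbage" part
    return v

def col_state(j):                # C|0>|0>|j>
    v = np.zeros(3 * n * n)
    for l in range(n):
        v[idx(0, l, j)] = np.sqrt(A[l, j] / s)
    v[idx(2, j, 0)] = np.sqrt(1 - A[:, j].sum() / s)
    return v

for i in range(n):
    assert np.isclose(np.linalg.norm(row_state(i)), 1)   # states are normalized
    for j in range(n):
        assert np.isclose(row_state(i) @ col_state(j), A[i, j] / s)
```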
8 / 24 Suppose that " # " # " # A . P ς |w ihv | . P P(ς )|w ihv | . U = = i i i i =⇒ U = i i i i , .. .. Φ ..
d where Φ(P) ∈ R is efficiently computable and UΦ is the following circuit:
Alternating phase modulation sequence UΦ :=
H e−iφ1σz e−iφ2σz ··· e−iφd σz H ··· | i⊗a 0 ··· U U† ··· ··· ··· Simmilar result holds for even polynomials.
Quantum Singular Value Transformation (QSVT)
Main theorem about QSVT (G, Su, Low, Wiebe 2018) Let P :[−1, 1] → [−1, 1] be a degree-d odd polynomial map.
9 / 24 " # P P(ς )|w ihv | . =⇒ U = i i i i , Φ ..
d where Φ(P) ∈ R is efficiently computable and UΦ is the following circuit:
Alternating phase modulation sequence UΦ :=
H e−iφ1σz e−iφ2σz ··· e−iφd σz H ··· | i⊗a 0 ··· U U† ··· ··· ··· Simmilar result holds for even polynomials.
Quantum Singular Value Transformation (QSVT)
Main theorem about QSVT (G, Su, Low, Wiebe 2018) Let P :[−1, 1] → [−1, 1] be a degree-d odd polynomial map. Suppose that " # " # A . P ς |w ihv | . U = = i i i i .. ..
9 / 24 d where Φ(P) ∈ R is efficiently computable and UΦ is the following circuit:
Alternating phase modulation sequence UΦ :=
H e−iφ1σz e−iφ2σz ··· e−iφd σz H ··· | i⊗a 0 ··· U U† ··· ··· ··· Simmilar result holds for even polynomials.
Quantum Singular Value Transformation (QSVT)
Main theorem about QSVT (G, Su, Low, Wiebe 2018) Let P :[−1, 1] → [−1, 1] be a degree-d odd polynomial map. Suppose that " # " # " # A . P ς |w ihv | . P P(ς )|w ihv | . U = = i i i i =⇒ U = i i i i , .. .. Φ ..
9 / 24 Alternating phase modulation sequence UΦ :=
H e−iφ1σz e−iφ2σz ··· e−iφd σz H ··· | i⊗a 0 ··· U U† ··· ··· ··· Simmilar result holds for even polynomials.
Quantum Singular Value Transformation (QSVT)
Main theorem about QSVT (G, Su, Low, Wiebe 2018) Let P :[−1, 1] → [−1, 1] be a degree-d odd polynomial map. Suppose that " # " # " # A . P ς |w ihv | . P P(ς )|w ihv | . U = = i i i i =⇒ U = i i i i , .. .. Φ ..
d where Φ(P) ∈ R is efficiently computable and UΦ is the following circuit:
9 / 24 Simmilar result holds for even polynomials.
Quantum Singular Value Transformation (QSVT)
Main theorem about QSVT (G, Su, Low, Wiebe 2018) Let P :[−1, 1] → [−1, 1] be a degree-d odd polynomial map. Suppose that " # " # " # A . P ς |w ihv | . P P(ς )|w ihv | . U = = i i i i =⇒ U = i i i i , .. .. Φ ..
d where Φ(P) ∈ R is efficiently computable and UΦ is the following circuit:
Alternating phase modulation sequence UΦ :=
H e−iφ1σz e−iφ2σz ··· e−iφd σz H ··· | i⊗a 0 ··· U U† ··· ··· ···
Quantum Singular Value Transformation (QSVT)

Main theorem about QSVT (G, Su, Low, Wiebe 2018)
Let P : [−1, 1] → [−1, 1] be a degree-d odd polynomial map. Suppose that
  U = [[A, ·], [·, ·]] = [[∑_i ς_i |w_i⟩⟨v_i|, ·], [·, ·]]   ⟹   U_Φ = [[∑_i P(ς_i) |w_i⟩⟨v_i|, ·], [·, ·]],
where Φ(P) ∈ ℝ^d is efficiently computable and U_Φ is the following circuit:

Alternating phase modulation sequence U_Φ :=
[Circuit: H on the control qubit, then alternating phases e^{−iφ_1 σ_z}, e^{−iφ_2 σ_z}, …, e^{−iφ_d σ_z} interleaved with U and U† acting on |0⟩^{⊗a} and the input register, followed by H.]

A similar result holds for even polynomials.
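The circuit above generalizes single-qubit quantum signal processing, where the phases φ determine the polynomial. A minimal sketch (assuming NumPy; conventions for the signal rotation W(x) and the phase placement vary across papers — this is one common choice): with all phases set to zero the sequence reproduces the Chebyshev polynomial T_d(x), i.e., d bare walk steps.

```python
import numpy as np

def Wx(x):
    """Signal rotation W(x) = [[x, i*sqrt(1-x^2)], [i*sqrt(1-x^2), x]]."""
    s = np.sqrt(1 - x ** 2)
    return np.array([[x, 1j * s], [1j * s, x]])

def qsp_poly(x, phis):
    """Top-left entry of e^{i phi_0 Z} W(x) e^{i phi_1 Z} ... W(x) e^{i phi_d Z}."""
    M = np.diag(np.exp(1j * np.array([phis[0], -phis[0]])))
    for phi in phis[1:]:
        M = M @ Wx(x) @ np.diag(np.exp(1j * np.array([phi, -phi])))
    return M[0, 0]

d = 7
xs = np.linspace(-1, 1, 101)
vals = np.array([qsp_poly(x, np.zeros(d + 1)) for x in xs])
# Zero phases give W(x)^d, whose top-left entry is cos(d*arccos(x)) = T_d(x).
assert np.allclose(vals.imag, 0)
assert np.allclose(vals.real, np.cos(d * np.arccos(xs)))
```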
Singular vector transformation and projection

Fixed-point amplitude amplification (Yoder, Low, Chuang 2014)
Amplitude amplification problem: Given U such that
  U|0̄⟩ = √p |0⟩|ψ_good⟩ + √(1−p) |1⟩|ψ_bad⟩,
prepare |ψ_good⟩. Note that (|0⟩⟨0| ⊗ I) U |0̄⟩⟨0̄| = √p |0, ψ_good⟩⟨0̄|; we can apply QSVT.

Singular vector transformation [Oblivious amplitude amplification (Berry et al. 2013)]
Given a unitary U such that
  A = (⟨0|^{⊗a} ⊗ I) U (|0⟩^{⊗b} ⊗ I) = ∑_{i=1}^k ς_i |φ_i⟩⟨ψ_i|
is a singular value decomposition, transform one copy of a quantum state
  |ψ⟩ = ∑_{i=1}^k α_i |ψ_i⟩   to   |φ⟩ = ∑_{i=1}^k α_i |φ_i⟩.
If ς_i ≥ δ for all i with α_i ≠ 0, we can ε-approximately do this using QSVT with complexity O((1/δ) log(1/ε)).
11 / 24 Direct implementation of the pseudoinverse (HHL) Singular value decomposition and pseudoinverse P Suppose A = i ςi|wiihvi| is a singular value decomposition. + P † P Then the pseudoinverse of A is A = i 1/ςi|viihwi| (note A = i ςi|viihwi|).
Implementing the pseudoinverse using QSVT
Suppose that U is an a-qubit block-encoding of A, and A + ≤ κ. By QSVT we can implement an ε-approximate block-encoding of
1 A +, 2κ
1 using O κ log ε queries to U. Finally amplify the result (O(κ) times). I Complexity can be improved to Oe(κ) using variable-time amplitude-amplification. I Other variants are possible, such as weighted and generalized least-squares.
11 / 24 If H is Hermitian, then P(H) coincides with the singular value transform. (Direct corollary: Fast-Forwarding Markov Chains.)
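The singular-value picture of the pseudoinverse can be checked classically; a minimal sketch (the random 4×3 matrix is an arbitrary stand-in), applying f(x) = 1/(2κx) to the singular values exactly, where QSVT would use an odd polynomial approximation of f:

```python
import numpy as np

# Classical counterpart of the QSVT step: apply f to the singular values of A.
def singular_value_transform(A, f):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return (Vh.conj().T * f(s)) @ U.conj().T     # sum_i f(sigma_i) |v_i><w_i|

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
kappa = 1.0 / np.linalg.svd(A, compute_uv=False)[-1]   # ||A^+|| = 1/sigma_min
# f(x) = 1/(2*kappa*x), the function QSVT approximates by an odd polynomial
Aplus_over_2kappa = singular_value_transform(A, lambda s: 1.0 / (2 * kappa * s))
assert np.allclose(2 * kappa * Aplus_over_2kappa, np.linalg.pinv(A))
```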
The special case of Hermitian matrices

Singular value transf. = eigenvalue transf. [Low & Chuang (2017)]
Let P: [−1, 1] → [−1, 1] be a degree-d even/odd polynomial map. If H is Hermitian, then P(H) coincides with the singular value transform.
(Direct corollary: Fast-Forwarding Markov Chains.)

Removing parity constraint for Hermitian matrices
Let P: [−1, 1] → [−1/2, 1/2] be a degree-d polynomial map. Suppose that U is an a-qubit block-encoding of a Hermitian matrix H. We can implement

  U′ = ( P(H)  ·
           ·   · )

using d applications of U and U†, 1 controlled U, and O(ad) extra two-qubit gates.
12 / 24
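The coincidence of the two transforms can be verified numerically for an odd polynomial; a sketch with the Chebyshev polynomial T₃(x) = 4x³ − 3x (the random symmetric test matrix is an arbitrary stand-in):

```python
import numpy as np

# Numerical check: for Hermitian H and an odd polynomial P, applying P to the
# singular values (keeping the singular-vector pairing) equals applying P to
# the eigenvalues.
rng = np.random.default_rng(4)
M = rng.standard_normal((5, 5))
H = (M + M.T) / 2
H /= np.linalg.norm(H, 2)                       # block-encoding needs ||H|| <= 1

P = np.polynomial.Polynomial([0, -3, 0, 4])     # odd: 4x^3 - 3x (Chebyshev T_3)
evals, V = np.linalg.eigh(H)
eig_transform = (V * P(evals)) @ V.T            # sum_i P(lambda_i)|v_i><v_i|

U, s, Wh = np.linalg.svd(H)
sv_transform = (U * P(s)) @ Wh                  # odd SVT: sum_i P(sigma_i)|w_i><v_i|
assert np.allclose(eig_transform, sv_transform)
```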
Optimal block-Hamiltonian simulation
Suppose that H is given as an a-qubit block-encoding, i.e.,

  U = ( H  ·
        ·  · ).

Complexity of block-Hamiltonian simulation [Low & Chuang (2016)]
Given t, ε > 0, implement a unitary U′ which is ε-close to e^{itH}. This can be achieved with query complexity O(t + log(1/ε)). The gate complexity is O(a) times the above.

Proof sketch
Approximate sin(tx) and cos(tx) to ε-precision with polynomials of degree as above. Then use QSVT and combine the even/odd parts.
13 / 24
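The proof sketch can be mirrored classically: build polynomial (Chebyshev) interpolants of cos(tx) and sin(tx) on [−1, 1], apply them to the eigenvalues of the normalized H, and combine e^{itH} = cos(tH) + i·sin(tH). A hedged sketch (the degree, time t, and matrix are illustrative choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-d Chebyshev interpolants of cos(tx) (even) and sin(tx) (odd) on [-1,1];
# QSVT implements such polynomials of a block-encoded H directly.
t, d = 5.0, 30
xs = np.cos(np.pi * (np.arange(d + 1) + 0.5) / (d + 1))   # Chebyshev nodes
p_cos = C.chebfit(xs, np.cos(t * xs), d)
p_sin = C.chebfit(xs, np.sin(t * xs), d)

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
H = (M + M.T) / 2
H /= np.linalg.norm(H, 2)                # block-encoding requires ||H|| <= 1
evals, V = np.linalg.eigh(H)

# e^{itH} = cos(tH) + i sin(tH), evaluated eigenvalue by eigenvalue
approx = (V * (C.chebval(evals, p_cos) + 1j * C.chebval(evals, p_sin))) @ V.T
exact = (V * np.exp(1j * t * evals)) @ V.T
assert np.max(np.abs(approx - exact)) < 1e-10
```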
Gibbs sampling
Suppose that H is given as U = ( H  ·
                                 ·  · ). The goal is to prepare ρ ∝ e^{−βH}.

The basic algorithm (Inspired by Poulin & Wocjan 2009)
▶ Start with the maximally mixed state I/N, and apply the map e^{−(β/2)H}.
▶ The new state is proportional to e^{−(β/2)H} (I/N) e^{−(β/2)H} = e^{−βH}/N.
▶ Finally, apply amplitude amplification. Probability of success?

Final algorithm
▶ Use minimum finding (Dürr & Høyer 1996; van Apeldoorn, G, Gribling, de Wolf 2017) to find an approximation H₀ of the ground state energy.
▶ Use the previous procedure but with the map e^{−(β/2)(H − H₀I)}.
▶ The final complexity is Õ(√(βN)).
14 / 24
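A toy numerical check of the success-probability analysis above (the random Hermitian H is an arbitrary stand-in; the quantum algorithm obtains H₀ via minimum finding):

```python
import numpy as np

# The map e^{-(beta/2)(H - H0*I)} applied to the maximally mixed state I/N
# succeeds with probability tr(e^{-beta(H - H0*I)})/N >= 1/N, since the ground
# state contributes weight 1; hence O(sqrt(N)) amplification rounds suffice.
rng = np.random.default_rng(2)
N, beta = 8, 3.0
M = rng.standard_normal((N, N))
H = (M + M.T) / 2
evals = np.linalg.eigvalsh(H)
H0 = evals.min()                          # found via quantum minimum finding

weights = np.exp(-beta * (evals - H0))    # spectrum of e^{-beta(H - H0*I)}
p_success = weights.sum() / N
assert 1.0 / N <= p_success <= 1.0

gibbs_probs = weights / weights.sum()     # eigenbasis distribution of rho ∝ e^{-beta*H}
assert np.isclose(gibbs_probs.sum(), 1.0)
```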
Summarizing the various speed-ups by QSVT

Speed-up    | Source of speed-up                           | Examples of algorithms
------------|----------------------------------------------|------------------------
Exponential | Dimensionality of the Hilbert space          | Hamiltonian simulation
            | Precise polynomial approximations            | Improved HHL algorithm
Quadratic   | Singular value = square root of probability  | Grover search
            | Singular values are easier to distinguish    | Amplitude estimation
            | Close-to-1 singular values are more flexible | Quantum walks

Some other applications
▶ Fast QMA amplification, fast quantum OR lemma
▶ Quantum machine learning: PCA, principal component regression
▶ “Non-commutative measurements” (for ground state preparation)
▶ Fractional queries
▶ ...
15 / 24

Quantum algorithms for optimization
Optimization
In general we want to find the best solution min_{x∈X} f(x).
▶ Unstructured: can be solved with O(√|X|) queries (Dürr & Høyer 1996)

Discrete structures:
▶ Finding the shortest path in a graph: O(n²) (Dijkstra 1956); quantum Õ(n^{3/2}) (Dürr, Heiligman, Høyer, Mhalla 2004)
▶ Matching and flow problems: polynomial speed-ups, typically based on Grover search
▶ NP-hard problems:
  – quadratic speed-up for Schöning’s algorithm for 3-SAT (ampl. ampl.)
  – quadratic speed-up for backtracking (Montanaro 2015)
  – polynomial speed-ups for dynamic programming, e.g., TSP 2ⁿ → 1.73ⁿ (Ambainis, Balodis, Iraids, Kokainis, Prūsis, Vihrovs 2018)
16 / 24
Continuous optimization

Convex optimization
▶ Linear programs, semidefinite programs
  – SDPs: Õ((√n + √m) s γ⁵) (Brandão et al., van Apeldoorn et al. 2016–18)
  – Zero-sum games: Õ((√n + √m)/ε³), Õ(s/ε^{3.5}) (van Apeldoorn & G 2019)
▶ Quantum interior point method (Kerenidis & Prakash 2018)
▶ Polynomial speed-up for estimating volumes of convex bodies (Chakrabarti, Childs, Hung, Li, Wang, Wu 2019)

Application of the SDP solver: Shadow tomography (Aaronson 2017)
Given an n-dimensional quantum state ρ, estimate the probability of acceptance of each of the two-outcome measurements E₁, …, E_m, to within additive error ε.
▶ Õ(log⁴(m) log(n)/ε⁴) copies suffice
17 / 24
Zero-sum games
Pay-off matrix of Alice is A ∈ R^{m×n}. Expected pay-off for strategies x, y: xᵀAy.

Algorithm for approximate Nash-equilibrium (Grigoriadis & Khachiyan 1995)
Start with x⁽⁰⁾ ← 0 ∈ Rᵐ and y⁽⁰⁾ ← 0 ∈ Rⁿ
for t = 1, 2, …, Õ(1/ε²) do
  • P⁽ᵗ⁾ ← e^{−Aᵀx⁽ᵗ⁾} and Q⁽ᵗ⁾ ← e^{Ay⁽ᵗ⁾} (entrywise)
  • p⁽ᵗ⁾ ← P⁽ᵗ⁾/‖P⁽ᵗ⁾‖₁ and q⁽ᵗ⁾ ← Q⁽ᵗ⁾/‖Q⁽ᵗ⁾‖₁
  • Sample a ∼ p⁽ᵗ⁾ and b ∼ q⁽ᵗ⁾
  • y⁽ᵗ⁺¹⁾ = y⁽ᵗ⁾ + (ε/4) e_a and x⁽ᵗ⁺¹⁾ = x⁽ᵗ⁾ + (ε/4) e_b

The main task is Gibbs sampling from a linear combination of vectors.
For SDPs: we need to Gibbs sample from a linear combination of matrices.
18 / 24
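A direct classical sketch of the Grigoriadis–Khachiyan iteration above (the quantum speed-up replaces the classical computation of p and q by Gibbs sampling; the matching-pennies payoff matrix and the constant in the iteration count are illustrative choices):

```python
import numpy as np

# Classical sketch of the Grigoriadis-Khachiyan multiplicative-weights loop.
def approx_nash(A, eps, rng):
    m, n = A.shape
    x, y = np.zeros(m), np.zeros(n)
    T = int(np.ceil(np.log(m + n) / eps**2))   # ~O(1/eps^2) iterations
    X, Y = np.zeros(m), np.zeros(n)            # counts of the sampled moves
    for _ in range(T):
        p = np.exp(-A.T @ x); p /= p.sum()     # Gibbs dist. over n column moves
        q = np.exp(A @ y);    q /= q.sum()     # Gibbs dist. over m row moves
        a = rng.choice(n, p=p)
        b = rng.choice(m, p=q)
        y[a] += eps / 4
        x[b] += eps / 4
        Y[a] += 1
        X[b] += 1
    return X / T, Y / T                        # empirical mixed strategies

rng = np.random.default_rng(0)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])       # matching pennies (example)
x_hat, y_hat = approx_nash(A, eps=0.1, rng=rng)
assert np.isclose(x_hat.sum(), 1.0) and np.isclose(y_hat.sum(), 1.0)
```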
Statistics, estimation and stochastic algorithms
▶ Quadratic speed-up for Monte-Carlo methods: O(σ/ε) (Montanaro 2015); generalizes approximate counting (Brassard, Høyer, Mosca, Tapp 1998)
▶ Testing equality of a distribution on [n] (query complexity):
  – to an unknown distribution: Õ(n^{1/2}) (Bravyi, Hassidim, Harrow 2009; G & Li 2019)
  – to a known distribution: Õ(n^{1/3}) (Chakraborty, Fischer, Matsliah, de Wolf 2010)
▶ Estimating the (Shannon / von Neumann) entropy of a distribution on [n]:
  – classical distribution: query complexity Õ(n^{1/2}) (Li & Wu 2017)
  – density operator: query complexity Õ(n) (G & Li 2019)
19 / 24
Recommendation systems – Netflix challenge
[Image source: https://towardsdatascience.com]
20 / 24

The assumed structure of the preference matrix:
▶ Movies: a linear combination of a small number of features
▶ User taste: a linear weighting of the features
[Image source: https://towardsdatascience.com]
21 / 24
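A toy instance of this assumed structure (the sizes, feature count, and random data are all illustrative): if every preference is a taste-weighted mix of k features, the preference matrix has rank at most k.

```python
import numpy as np

# Hypothetical low-rank preference matrix: user tastes times movie features.
rng = np.random.default_rng(3)
n_users, n_movies, k = 100, 50, 5
tastes = rng.random((n_users, k))        # user -> weights over the k features
features = rng.random((k, n_movies))     # movie -> mix of the k features
P = tastes @ features                    # full preference matrix
assert np.linalg.matrix_rank(P) <= k
```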
Data conversion: classical to quantum
Major difficulty: how to input the data?
▶ Given b ∈ Rᵐ, prepare |b⟩ = Σ_{i=1}^m (b_i/‖b‖) |i⟩
▶ Given A ∈ R^{m×n}, construct a quantum circuit (block-encoding)

    U = ( A/‖A‖_F  ·
             ·     · ).

How to preserve the exponential advantage?
22 / 24
Solution: assume QRAM (readable in superposition)

Data structure (binary tree of squared amplitudes):

                Σ_{i=0}^3 |b_i|²
               /               \
      Σ_{i=0}^1 |b_i|²    Σ_{i=2}^3 |b_i|²
        /      \            /      \
       b₀      b₁          b₂      b₃

First prepare √(Σ_{i=0}^1 |b_i|²) |0⟩ + √(Σ_{i=2}^3 |b_i|²) |1⟩.
Then map √(Σ_{i=0}^1 |b_i|²) |0⟩ ↦ |b₀| |00⟩ + |b₁| |01⟩ and √(Σ_{i=2}^3 |b_i|²) |1⟩ ↦ |b₂| |10⟩ + |b₃| |11⟩.
Finally, add phases to get b₀|00⟩ + b₁|01⟩ + b₂|10⟩ + b₃|11⟩.
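The three steps can be sketched classically as a root-to-leaf recursion on the subtotal tree (real entries assumed; the 4-entry vector is an example):

```python
import numpy as np

def prepare(b):
    """Return b/||b|| by the level-by-level splitting described above
    (subtotals of |b_i|^2 as stored in the binary tree)."""
    b = np.asarray(b, dtype=float)
    norms2 = np.abs(b) ** 2
    out = np.empty_like(b)

    def amp(lo, hi, a):                      # current amplitude a on b[lo:hi]
        if hi - lo == 1:
            out[lo] = np.sign(b[lo]) * a     # the "add phases" step
            return
        mid = (lo + hi) // 2
        total = norms2[lo:hi].sum()
        amp(lo, mid, a * np.sqrt(norms2[lo:mid].sum() / total))
        amp(mid, hi, a * np.sqrt(norms2[mid:hi].sum() / total))

    amp(0, len(b), 1.0)
    return out

b = np.array([3.0, -1.0, 2.0, 1.0])
state = prepare(b)
assert np.allclose(state, b / np.linalg.norm(b))
```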
On-line updates to the data structure

Data structure (binary tree of squared amplitudes):

                Σ_{i=0}^3 |b_i|²
               /               \
      Σ_{i=0}^1 |b_i|²    Σ_{i=2}^3 |b_i|²
        /      \            /      \
       b₀      b₁          b₂      b₃

The cost is about the depth: log(dimension).
More about this in Ewin Tang’s afternoon talk.
24 / 24
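A minimal sketch of such an on-line update (classical bookkeeping only, with a hypothetical array-backed tree): changing one entry b_i touches only the log₂(n) subtotals on its root-to-leaf path.

```python
import numpy as np

# Array-backed subtotal tree: tree[n + i] holds |b_i|^2, internal node j holds
# tree[2j] + tree[2j+1]; index 0 is unused. An update walks one path of
# length log2(n) to the root, so it costs O(log n).
class SubtotalTree:
    def __init__(self, n):
        self.n = n
        self.tree = np.zeros(2 * n)

    def update(self, i, value):
        j = self.n + i
        self.tree[j] = abs(value) ** 2
        j //= 2
        while j >= 1:                        # fix the subtotals along the path
            self.tree[j] = self.tree[2 * j] + self.tree[2 * j + 1]
            j //= 2

t = SubtotalTree(4)
for i, v in enumerate([3.0, -1.0, 2.0, 1.0]):
    t.update(i, v)
assert t.tree[1] == 15.0                     # the root stores ||b||^2
```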