
arXiv:1605.08422v2 [math.NA] 6 Sep 2016

Calculating vibrational spectra of molecules using tensor train decomposition

Maxim Rakhuba∗
Skolkovo Institute of Science and Technology, Skolkovo Innovation Center, Building 3, 143026 Moscow, Russia

Ivan Oseledets†
Skolkovo Institute of Science and Technology, Skolkovo Innovation Center, Building 3, 143026 Moscow, Russia
and Institute of Numerical Mathematics of Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow, Russia

(Dated: September 7, 2016)

∗ [email protected]
† [email protected]

We propose a new algorithm for the calculation of vibrational spectra of molecules using tensor train decomposition. Under the assumption that the eigenfunctions lie on a low-parametric manifold of low-rank tensors, we suggest using well-known iterative methods that utilize matrix inversion (LOBPCG, inverse iteration) and solving the corresponding linear systems inexactly along this manifold. As an application, we accurately compute vibrational spectra (84 states) of the acetonitrile molecule CH3CN on a laptop in one hour, using only 100 MB of memory to represent all computed eigenfunctions.

I. INTRODUCTION

In this work we consider the time-independent Schrödinger equation to calculate vibrational spectra of molecules. The goal is to find the smallest eigenvalues and corresponding eigenfunctions of the Hamiltonian operator. The key assumption we use is that the potential energy surface (PES) can be approximated by a small number of sum-of-product functions. This holds, e.g., if the PES is a polynomial.

We discretize the problem using the discrete variable representation (DVR) scheme [1]. The problem is that the storage required for each eigenvector grows exponentially with the dimension d as n^d, where n is the number of grid points in each dimension. Even for the DVR scheme, where 15 grid points in each dimension are often sufficient to provide very accurate results, we would need approximately 1 PB of storage for a 12-dimensional problem. This issue is often referred to as the curse of dimensionality.

To avoid the exponential growth of storage we use the tensor train (TT) decomposition [2] to approximate the operator and the eigenfunctions in the DVR representation, which is known to have an exponential convergence rate. It is important to note that the TT-format is algebraically equivalent to the Matrix Product State (MPS) format, which has been used for a long time in quantum information theory and solid state physics to approximate certain wavefunctions [3, 4] (DMRG method); see the review [5] for more details.

Prior research [6] has shown that the eigenfunctions can often be well approximated in the TT-format, i.e. they lie on a certain low-dimensional non-linear manifold. The key question is how to utilize this a priori knowledge in computations. We propose to use well-established iterative methods that utilize matrix inversion, and to solve the corresponding linear systems inexactly along the manifold to accelerate the convergence. Our main contributions are:

• We propose the concept of a manifold preconditioner that explicitly utilizes information about the size of the TT-representation. We use the manifold preconditioner for a tensor version of the locally optimal block preconditioned conjugate gradient method (LOBPCG) [7]. We will refer to this approach as manifold-preconditioned LOBPCG (MP LOBPCG). The approach is first illustrated on the computation of a single eigenvector (Section III C) and then extended to the block case (Section IV B).

• We propose a tensor version of simultaneous inverse iteration (also known as the block inverse power method), which significantly improves the accuracy of the proposed MP LOBPCG. Similarly to the manifold preconditioner, the inversion is done using the a priori information that the solution belongs to a certain low-parametric manifold. We will refer to this method as manifold-projected simultaneous inverse iteration (MP SII). The approach is first illustrated on the computation of a single eigenvector (Section III B) and then extended to the block case (Section IV A).

• We calculate vibrational spectra of the acetonitrile molecule CH3CN using the proposed approach (Section V B). The results are more accurate than those of the Smolyak grid approach [8] but with much smaller storage requirements, and more accurate than the recent memory-efficient H-RRBPM method [9], which is also based on tensor decompositions. We note that the Smolyak grid approach does not require the PES to be approximated by a small number of sum-of-product functions.
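As a back-of-the-envelope check of the 1 PB figure quoted above, assuming double-precision (8-byte) storage per tensor entry (the 8-byte assumption is ours, not stated in the text):

    n^d \cdot 8\ \text{bytes} = 15^{12} \cdot 8\ \text{bytes} \approx 1.3\times 10^{14} \cdot 8\ \text{bytes} \approx 1.0\times 10^{15}\ \text{bytes} \approx 1\ \text{PB}.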
II. DISCRETIZATION

We follow [9] and consider the Schrödinger equation with omitted π−π cross terms and the potential-like term in the normal-coordinate kinetic energy operator. The Hamiltonian in this case reads

    \mathcal{H} = -\frac{1}{2} \sum_{i=1}^{d} \omega_i \frac{\partial^2}{\partial q_i^2} + V(q_1, \dots, q_d),    (1)

where V denotes the potential energy surface (PES).

We discretize the problem (1) using the discrete variable representation (DVR) scheme on the tensor product of Hermite meshes [1], such that each unknown eigenfunction is represented as

    \Psi_k(q_1, \dots, q_d) \approx \sum_{i_1,\dots,i_d=1}^{n} \mathcal{X}^{(k)}_{i_1,\dots,i_d}\, \psi_{i_1}(q_1) \dots \psi_{i_d}(q_d),    (2)

where ψ_i(q_i) denotes a one-dimensional DVR basis function. We call the arising multidimensional arrays tensors.

The Hamiltonian (1) consists of two terms: the Laplacian-like part and the PES. It is well known that the Laplacian-like term can be written in the Kronecker product form

    \mathcal{D} = D_1 \otimes I \otimes \dots \otimes I + \dots + I \otimes \dots \otimes I \otimes D_d,

where D_i is the one-dimensional discretization of the i-th mode.

The DVR discretization of the PES is represented as a tensor V. The operator corresponding to the multiplication by V is diagonal. Finally, the Hamiltonian is written as

    \mathcal{H} = D_1 \otimes I \otimes \dots \otimes I + \dots + I \otimes \dots \otimes I \otimes D_d + \mathrm{diag}(\mathcal{V}).    (3)

For our purposes it is convenient to treat H not as a two-dimensional n^d × n^d matrix, but as a multidimensional operator \mathcal{H}: \mathbb{R}^{n\times\dots\times n} \to \mathbb{R}^{n\times\dots\times n}. In this case the discretized Schrödinger equation has the form

    \sum_{j_1,\dots,j_d=1}^{n} \mathcal{H}_{i_1,\dots,i_d,\,j_1,\dots,j_d}\, \mathcal{X}_{j_1,\dots,j_d} = E\, \mathcal{X}_{i_1,\dots,i_d}.    (4)

Hereinafter we use the notation H(X) for the matrix-by-vector product from (4). Using this notation, (4) can be equivalently written as

    \mathcal{H}(\mathcal{X}) = E \mathcal{X}.
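As a minimal illustration of the structure in (3), the following Python/NumPy sketch assembles a toy Hamiltonian as a Kronecker sum plus a diagonal PES term. The one-dimensional matrices D_i and the tensor V below are random symmetric placeholders, not actual DVR matrices or a real PES, and the dense assembly is only feasible because n^d is tiny here.

```python
import numpy as np

# Toy illustration of the Kronecker structure in Eq. (3):
#   H = D_1 x I x I + I x D_2 x I + I x I x D_3 + diag(V),
# with placeholder one-dimensional matrices D_i and a placeholder PES tensor V.
d, n = 3, 4
rng = np.random.default_rng(0)

# Placeholder symmetric one-dimensional matrices (stand-ins for the DVR D_i).
D = []
for _ in range(d):
    A = rng.standard_normal((n, n))
    D.append(0.5 * (A + A.T))

# Placeholder PES values on the tensor-product grid (stand-in for V(q_1, ..., q_d)).
V = rng.standard_normal((n,) * d)

# Assemble the Laplacian-like part as a sum of Kronecker products.
I = np.eye(n)
H = np.zeros((n**d, n**d))
for i in range(d):
    factors = [D[i] if j == i else I for j in range(d)]
    term = factors[0]
    for f in factors[1:]:
        term = np.kron(term, f)
    H += term

# The PES acts as a diagonal operator in the DVR basis.
H += np.diag(V.ravel())

# Smallest eigenvalue of the toy Hamiltonian (possible only because n**d is tiny here).
E = np.linalg.eigvalsh(H)
print("smallest eigenvalue:", E[0])
```

In the paper the operator and the eigenfunctions are instead kept in the TT-format, so the full n^d × n^d matrix is never formed.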
III. COMPUTING A SINGLE EIGENVECTOR

In this section we discuss how to solve the Schrödinger equation (4) numerically and present our approach for finding a single eigenvector. The case of multiple eigenvalues is discussed in Section IV.

The standard way to find the required number of smallest eigenvalues is to use iterative methods. The simplest iterative method for finding one smallest eigenvalue is the shifted power iteration

    \mathcal{X}_{k+1} = (\mathcal{H} - \sigma \mathcal{I})\, \mathcal{X}_k,
    \mathcal{X}_{k+1} := \mathcal{X}_{k+1} / \sqrt{\langle \mathcal{X}_{k+1}, \mathcal{X}_{k+1} \rangle},

where the shift σ is an approximation to the largest eigenvalue of H. The matrix-by-vector product is the bottleneck operation in this procedure. This method was successfully applied to the calculation of vibrational spectra in [10]. The eigenvectors in that work are represented as sums of products, which allows for fast matrix-by-vector multiplications. Despite the ease of implementation and the efficiency of each iteration, the convergence of this method requires thousands of iterations.

Instead of the power iteration we use the inverse iteration [11]

    \mathcal{X}_{k+1} = (\mathcal{H} - \sigma \mathcal{I})^{-1} \mathcal{X}_k,
    \mathcal{X}_{k+1} := \mathcal{X}_{k+1} / \sqrt{\langle \mathcal{X}_{k+1}, \mathcal{X}_{k+1} \rangle},    (5)

which is known to have much faster convergence if a good approximation σ to the required eigenvalue E^(1) is known (a small dense-matrix sketch of (5) is given below, after the TT-format subsection). Questions related to the solution of linear systems in the TT-format will be discussed in Section III B. Convergence of the inverse iteration is determined by the ratio

    \rho = \frac{|E^{(1)} - \sigma|}{|E^{(2)} - \sigma|},    (6)

where E^(2) is the eigenvalue next closest to E^(1). Thus, σ has to be closer to E^(1) than to E^(2) for the method to converge. However, the closer σ is to E^(1), the more difficult it is to solve the linear system with the matrix (H − σI). Therefore, this system is typically solved inexactly [12, 13]. The parameter σ can also depend on the iteration number (Rayleigh quotient iteration); however, in our experiments (Section V) a constant choice of σ yields convergence in 5 iterations.

As follows from (6), for the inverse iteration to converge fast, a good initial approximation has to be found. To get an initial approximation we propose to use the locally optimal block preconditioned conjugate gradient (LOBPCG) method, as it is relatively easy to implement in tensor formats and a preconditioner can be explicitly utilized.

A. TT-format

The problem with the straightforward usage of the iterative processes above is that we need to store an eigenvector X_k. The storage of this vector is O(n^d), which is prohibitive even for n = 15 and d = 10. Therefore we need a compact representation of an eigenfunction that allows us to carry out the inversion and basic linear algebra operations in a fast way. For this purpose we use the tensor train (TT, MPS) format [2]. A tensor X is said to be in the TT-format if it is represented as

    \mathcal{X}_{i_1,i_2,\dots,i_d} = X^{(1)}(i_1)\, X^{(2)}(i_2) \dots X^{(d)}(i_d),    (7)

where X^{(k)}(i_k) ∈ R^{r_{k−1}×r_k}, r_0 = r_d = 1, i_k = 1,…,n. The matrices X^{(k)}(i_k) are called TT-cores and the numbers r_k are called TT-ranks. For simplicity we assume that r_k = r, k = 2,…,d−1, and call r the TT-rank. In numerical experiments we use different mode sizes n_i, i = 1,…,d, but for simplicity we use the notation n = max_i n_i in complexity estimates.

The minimization over a single core (see Appendix A) is a standard linear least squares problem with an unknown vector of size nr^2, the size of the corresponding TT-core.
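The contraction in (7) is easy to state in code. The sketch below builds random placeholder TT-cores (the dimensions and ranks are arbitrary choices for illustration, not values from the paper), evaluates a single tensor entry as a product of core slices, and compares the number of TT parameters with the n^d entries of the full tensor.

```python
import numpy as np

# Minimal sketch of Eq. (7): one entry of a d-dimensional tensor is the product of
# small matrices taken from the TT-cores. The cores here are random placeholders.
d, n, r = 6, 15, 10
rng = np.random.default_rng(1)

ranks = [1] + [r] * (d - 1) + [1]          # r_0 = r_d = 1
cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(d)]

def tt_entry(cores, idx):
    """X_{i_1,...,i_d} = X^(1)(i_1) X^(2)(i_2) ... X^(d)(i_d)."""
    result = np.ones((1, 1))
    for core, i in zip(cores, idx):
        result = result @ core[:, i, :]     # multiply by the r_{k-1} x r_k slice of core k
    return result[0, 0]

print("one entry:", tt_entry(cores, (0, 1, 2, 3, 4, 5)))

# Storage comparison: O(d n r^2) parameters instead of n^d entries.
print("TT parameters:", sum(c.size for c in cores), "vs full tensor:", n**d)
```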
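Finally, as promised above, here is a dense-matrix sketch of the shifted inverse iteration (5). A small random symmetric matrix stands in for the Hamiltonian and the linear system is solved directly; in the paper the same iteration is carried out with H and X_k in the TT-format and the solve is performed inexactly along the low-rank manifold. The matrix size, shift offset, and iteration count below are arbitrary illustration choices.

```python
import numpy as np

# Dense-matrix sketch of the shifted inverse iteration (5). The random symmetric
# matrix H and the direct solve are stand-ins used purely to illustrate the iteration.
N = 200
rng = np.random.default_rng(2)
A = rng.standard_normal((N, N))
H = 0.5 * (A + A.T)

E = np.linalg.eigvalsh(H)                 # reference spectrum, for this toy check only
sigma = E[0] - 0.1                        # a shift close to the smallest eigenvalue E^(1)

x = rng.standard_normal(N)
x /= np.linalg.norm(x)
for k in range(5):
    x = np.linalg.solve(H - sigma * np.eye(N), x)   # X_{k+1} = (H - sigma I)^{-1} X_k
    x /= np.linalg.norm(x)                          # normalization step of (5)
    rayleigh = x @ H @ x
    print(f"iteration {k + 1}: Rayleigh quotient error = {abs(rayleigh - E[0]):.2e}")

# Convergence factor from Eq. (6): |E^(1) - sigma| / |E^(2) - sigma|.
rho = abs(E[0] - sigma) / abs(E[1] - sigma)
print("rho =", rho)
```

With σ this close to E^(1), the components of X_k along the unwanted eigenvectors contract by roughly the factor ρ from (6) at every step.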