Arnoldi Methods for the Eigenvalue Problem, Generalized Or Not

Total Pages: 16

File Type: pdf, Size: 1020 KB

ARNOLDI METHODS FOR THE EIGENVALUE PROBLEM, GENERALIZED OR NOT

MARKO HUHTANEN∗

∗ Division of Mathematics, Department of Electrical and Information Engineering, University of Oulu, 90570 Oulu 57, Finland ([email protected]).

Abstract. Arnoldi methods are devised for polynomially computing either a Hessenberg-triangular form or a triangular-triangular+rank-one form to numerically solve large eigenvalue problems, generalized or not. If generalized, then no transformations into a standard form take place. If standard, then a new Arnoldi method arises. The equivalence transformations involved are unitary, or almost, and polynomially generated. The normal generalized eigenvalue structure is identified, for which separate Arnoldi methods of moderate complexity are derived. Averaging elements of the Grassmannian Grk(Cn) is suggested to optimally generate subspaces in the codomain. Then the two canonical forms described get averaged, giving rise to an optimal Arnoldi method. For k = 1 this replaces the Rayleigh quotient by yielding the field of directed norms instead of the field of values.

Key words. generalized eigenvalue problem, Arnoldi method, Krylov subspace, polynomial method, normal eigenvalue problem, optimality, Rayleigh quotient, Grassmannian average

AMS subject classifications. 65F15, 65F50

1. Introduction. Consider a large eigenvalue problem

Mx = λNx  (1.1)

of computing a few eigenvalues λ and corresponding eigenvectors x ∈ Cn. Here the matrices M, N ∈ Cn×n are possibly sparse, as is often the case in applications [18, 20, 24]. Using the generalized Schur decomposition as a starting point, an algorithm for the task consists of first computing Qk ∈ Cn×k with orthonormal columns, or almost, for k ≪ n. Depending on the cost, a number of suggestions have been made to this end; see [7] and [2, Chapter 8] as well as [22]. (There is a vast literature on iterative methods for eigenvalue problems; for a concise review on the generalized eigenvalue problem, see, e.g., [23, Section 7].) Thereafter Zk ∈ Cn×k with orthonormal columns is generated. The eigenvalue problem then gets reduced in dimension once some linear combinations A and (nonsingular) B of M and N are compressed by forming the partial equivalence transformation

Zk∗AQk and Zk∗BQk.  (1.2)

In this paper, Arnoldi methods are devised for polynomially computing Qk and Zk in the domain and codomain. Without any transformations into a standard form taking place, classical iterations get covered in a natural way. A new Arnoldi method for the standard eigenvalue problem arises. The normal generalized eigenvalue structure is identified, admitting Arnoldi methods of moderate computational complexity to be devised for interior eigenvalues. For a given Qk, a criterion for optimally computing Zk is devised based on averaging two elements of the Grassmannian Grk(Cn). An optimal Arnoldi method for the eigenvalue problem arises. Altogether, it is shown that the standard and generalized eigenvalue problems should not be treated separately, and that the classical Arnoldi method is not optimal.

To derive Arnoldi methods for (1.1), for appropriate Krylov subspaces inspect the associated resolvent operator

λ ↦ R(A, B, λ) = (λB − A)⁻¹.

Analogously to the "shift-and-invert" paradigm, here the choice of the linear combinations A and B is a delicate issue in view of convergence.
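To make the objects in (1.1) and (1.2) concrete, here is a minimal sketch in Python with NumPy/SciPy (which the paper itself does not use): a generic shift-and-invert Arnoldi iteration driven by the resolvent (σB − A)⁻¹ produces Qk, after which the compressions Zk∗AQk and Zk∗BQk of (1.2) are formed. The test pencil, the shift σ, and the placeholder choice Zk = Qk are assumptions made only for this illustration; how Zk should actually be chosen is precisely the subject of the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shift_invert_arnoldi(A, B, sigma, k, seed=0):
    """Run k Arnoldi steps with the shift-and-invert operator
    v -> (sigma*B - A)^{-1} B v and return an orthonormal basis Q_k
    of the resulting Krylov subspace (generic machinery only)."""
    n = A.shape[0]
    lu = spla.splu((sigma * B - A).tocsc())      # factor the shifted pencil once
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = lu.solve(B @ Q[:, j])                # apply (sigma*B - A)^{-1} B
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :k]

# Hypothetical test pencil: sparse M, N and linear combinations A, B of them.
n, k, sigma = 2000, 20, 0.9
M = sp.diags(np.linspace(1.0, 2.0, n)) + 1e-2 * sp.random(n, n, density=5.0 / n, random_state=3)
N = sp.identity(n, format="csr")
a, b, c, d = 1.0, 0.0, 0.0, 1.0                  # any choice with ad - bc != 0
A = (a * M + b * N).tocsr()
B = (c * M + d * N).tocsr()
Qk = shift_invert_arnoldi(A, B, sigma, k)
Zk = Qk                                          # placeholder codomain basis, Z_k = Q_k
Ak = Zk.T @ (A @ Qk)                             # the compressed pencil of (1.2)
Bk = Zk.T @ (B @ Qk)
# Ritz-type approximations from the subspace; they tend to favor eigenvalues near sigma.
ritz = np.linalg.eigvals(np.linalg.solve(Bk, Ak))
```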
The Neumann series expansion of the resolvent operator reveals a Krylov space structure for computing Qk in the domain. For computing Zk in the codomain, two Krylov subspaces surface in a natural way, both giving rise to finitely computable canonical forms.

With the first choice for a Krylov subspace to compute Zk, the matrix compressions attain the well-known Hessenberg-triangular form. Recall that the classical Arnoldi method is an iterative alternative to using elementary unitary transformations to convert a single matrix into a Hessenberg form. For the generalized eigenvalue problem, elementary unitary transformations can be used to bring a pair of matrices into a Hessenberg-triangular form [16].¹ Here an Arnoldi method for iteratively computing this form is devised. Even though we are dealing with the generalized eigenvalue problem, polynomially the Ritz values result from a standard Arnoldi minimization problem. (In [22, Section 3] the same partial form is derived purely algebraically, without establishing a polynomial connection.) Polynomials are important by the fact that they link iterative methods with other fields of mathematics, offering a large pool of tools accordingly; see [15, 11] and references therein.

¹Serves as a "front end" decomposition before executing the actual QZ iteration yielding the generalized Schur decomposition [8, p. 380].

With the second choice for a Krylov subspace to compute Zk, the matrix compressions attain a triangular-triangular+rank-one form. We have encountered such a structure neither in connection with the generalized eigenvalue problem nor with the classical Arnoldi method. Because of the symmetric roles of the matrices A and B, it can be argued that it is just as natural as the Hessenberg-triangular form, by the fact that the Ritz values arise polynomially as well, although somewhat surprisingly, through a GMRES (generalized minimal residual) minimization problem. This hence provides an explicit link between the GMRES method and eigenvalue approximations. In the case of a standard eigenvalue problem, a triangular-companion matrix form arises.

There are Krylov subspace methods for computing Ritz values for normal matrices [11]. These methods can be applied to normal generalized eigenvalue problems. A generalized eigenvalue problem is said to be normal if the generalized Schur decomposition involves diagonal matrices. This leads to a natural extension of the classical notion of normality, providing computationally by far the most attractive setting. That is, for interior eigenvalues the normal case turns out to be strikingly different by not requiring shift-and-invert techniques. In this context, the so-called folded spectrum method can be treated as a special case.

Partly motivated by a possibly inaccurate computation of Qk in forming the partial equivalence transformation (1.2), a criterion for optimally computing Zk for a given Qk is devised. Formulated in terms of Grassmannians, it consists of comparing two k-dimensional subspaces of Cn so as to simultaneously compute a maximal projection onto them in terms of applying Zk. For the two Arnoldi methods just described, the Zk's suggested get replaced with their average such that the resulting Arnoldi method can be regarded as optimal. In particular, for k = 1 solving this for the standard eigenvalue problem replaces the Rayleigh quotient q∗Aq with

(q∗Aq/|q∗Aq|) ‖Aq‖,

such that the field of values accordingly becomes the field of directed norms. Consequently, the corresponding power method converges more aggressively towards an exterior eigenvalue. This is not insignificant, since the simple power method effect is the reason behind the success of more complex iterative methods for eigenvalues.
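As a small numerical illustration of the k = 1 remark, the sketch below evaluates both the Rayleigh quotient q∗Aq and the quantity (q∗Aq/|q∗Aq|) ‖Aq‖ along a plain power iteration; by the Cauchy-Schwarz inequality the latter never has smaller modulus. The random test matrix and the loop length are arbitrary assumptions for the example, and the code only reads off the displayed formula; the optimal construction behind it is derived in Section 4 of the paper.

```python
import numpy as np

def rayleigh_and_directed(A, q):
    """Evaluate the Rayleigh quotient q*Aq and the directed-norm quantity
    (q*Aq/|q*Aq|)*||Aq|| for a unit vector q (illustrative reading only)."""
    q = q / np.linalg.norm(q)
    Aq = A @ q
    rho = np.vdot(q, Aq)                              # Rayleigh quotient: a point in the field of values
    directed = rho / abs(rho) * np.linalg.norm(Aq)    # same direction, magnitude ||Aq||
    return rho, directed

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))                   # hypothetical test matrix
q = rng.standard_normal(200)
for _ in range(8):                                    # plain power iteration
    q = A @ q
    q /= np.linalg.norm(q)
    rho, directed = rayleigh_and_directed(A, q)
    # By Cauchy-Schwarz |q*Aq| <= ||Aq||, so the directed value never has smaller modulus.
    print(f"|rho| = {abs(rho):.4f}   |directed| = {abs(directed):.4f}")
```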
For k ≥ 2, optimally computing Zk for a given Qk is not costly and thereby provides a noteworthy option for any method relying on the use of a partial equivalence transformation.²

²Practically all the methods for the eigenvalue problem can be recast in such a way that they rely on the use of a partial equivalence transformation.

In Section 2 Krylov subspaces and two Arnoldi methods for the generalized eigenvalue problem are described. Although both are polynomial methods, they give rise to very different canonical forms. Section 3 deals with the normal generalized eigenvalue problem and how shift-and-invert can then be avoided. In Section 4 a way to perform the partial equivalence transformation optimally is derived. An optimal Arnoldi method arises. The field of directed norms is introduced. In Section 5 numerical experiments are conducted.

2. Arnoldi methods for the eigenvalue problem. In the generalized eigenvalue problem (1.1), the task is to locate the singular elements of the nonsingular³ two-dimensional matrix subspace

V = span{M, N}.  (2.1)

In practice this is solved by computing invertible matrices X, Y ∈ Cn×n so as to perform an equivalence transformation

W = XVY  (2.2)

whose singular elements are readily identifiable. For problems of moderate size, a numerically reliable way [16] to achieve this is based on fixing a basis of V and then computing the generalized Schur decomposition of the following theorem.

³Nonsingular means that V contains invertible elements.

Theorem 2.1. Suppose A, B ∈ Cn×n. Then there exist unitary matrices Q and Z such that Z∗AQ = T and Z∗BQ = S are both upper triangular.

If the unitary matrices Q and Z can be chosen in such a way that T and S are diagonal, then there are good reasons to call the matrix subspace V normal. Correspondingly, the generalized eigenvalue problem (1.1) is then said to be normal. This yields a very natural extension of the notion of a normal matrix; see Section 3.

For large problems, computing the generalized Schur decomposition is typically not realistic. This paper is concerned with devising Arnoldi methods to construct a partial equivalence transformation of V, i.e., with iteratively computing matrices X∗, Y ∈ Cn×k with orthonormal columns, for k ≪ n, to accordingly perform a dimension reduction in (2.2).⁴ Bearing in mind that the generalized Schur decomposition carries two unitary matrices, i.e., two different orthonormal bases, a natural Arnoldi method to this end should involve two Krylov subspaces.

2.1. Krylov subspaces of the eigenvalue problem. To compute Krylov subspaces, fix a basis of V by setting

A = aM + bN and B = cM + dN  (2.3)

for some scalars a, b, c, d ∈ C such that det [ a b ; c d ] = ad − bc ≠ 0. The choice of these parameters is a delicate issue determined by which eigenvalues are being searched.
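One way to see why the choice of a, b, c, d matters is that the basis change (2.3) moves the eigenvalues by a Möbius transformation: if (aM + bN)x = μ(cM + dN)x, then Mx = λNx with λ = (dμ − b)/(a − cμ), provided ad − bc ≠ 0. The small dense check below verifies this relation numerically; the matrices and coefficients are arbitrary choices made for illustration, not taken from the paper.

```python
import numpy as np

# If (aM + bN)x = mu (cM + dN)x, then (a - mu*c) M x = (mu*d - b) N x,
# i.e. Mx = lambda Nx with lambda = (d*mu - b)/(a - c*mu), provided ad - bc != 0.
rng = np.random.default_rng(2)
n = 50
M = rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + n * np.eye(n)   # keep N comfortably invertible
a, b, c, d = 2.0, 1.0, 0.5, 1.0                   # arbitrary, with ad - bc = 1.5 != 0
A = a * M + b * N
B = c * M + d * N

lam = np.linalg.eigvals(np.linalg.solve(N, M))    # eigenvalues of the pencil (M, N)
mu = np.linalg.eigvals(np.linalg.solve(B, A))     # eigenvalues of the pencil (A, B)
recovered = (d * mu - b) / (a - c * mu)           # map mu back to the (M, N) eigenvalues
mismatch = max(min(abs(r - l) for l in lam) for r in recovered)
print(mismatch)                                   # tiny: both spectra describe the same problem
```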
Recommended publications
  • On-The-Fly Computation of Frontal Orbitals in Density Matrix Expansions
    On-the-fly computation of frontal orbitals in density matrix expansions Anastasia Kruchinina,∗ Elias Rudberg,∗ and Emanuel H. Rubensson∗ Division of Scientific Computing, Department of Information Technology, Uppsala University, Sweden E-mail: [email protected]; [email protected]; [email protected] arXiv:1709.04900v1 [physics.comp-ph] 14 Sep 2017 Abstract Linear scaling density matrix methods typically do not provide individual eigenvectors and eigenvalues of the Fock/Kohn-Sham matrix, so additional work has to be performed if they are needed. Spectral transformation techniques facilitate computation of frontal (homo and lumo) molecular orbitals. In the purify-shift-and-square method the convergence of iterative eigenvalue solvers is improved by combining recursive density matrix expansion with the folded spectrum method [J. Chem. Phys. 128, 176101 (2008)]. However, the location of the shift in the folded spectrum method and the iteration of the recursive expansion selected for eigenpair computation may have a significant influence on the iterative eigenvalue solver performance and eigenvector accuracy. In this work, we make use of recent homo and lumo eigenvalue estimates [SIAM J. Sci. Comput. 36, B147 (2014)] for selecting shift and iteration such that homo and lumo orbitals can be computed in a small fraction of the total recursive expansion time and with sufficient accuracy. We illustrate our method by performing self-consistent field calculations for large scale systems. 1 Introduction Computing interior eigenvalues of large matrices is one of the most difficult problems in the area of numerical linear algebra and it appears in many applications.
  • On Operator Whose Norm Is an Eigenvalue
    BULLETIN OF THE INSTITUTE OF MATHEMATICS ACADEMIA SINICA Volume 31, Number 1, March 2003 ON OPERATOR WHOSE NORM IS AN EIGENVALUE BY C.-S. LIN (林嘉祥) Dedicated to Professor Carshow Lin on his retirement Abstract. We present in this paper various types of characterizations of a bounded linear operator T on a Hilbert space whose norm is an eigenvalue for T, and their consequences are given. We show that many results in Hilbert space operator theory are related to such an operator. In this article we are interested in the study of a bounded linear operator on a Hilbert space H whose norm is an eigenvalue for the operator. Various types of characterizations of such an operator are given, and consequently we see that many results in operator theory are related to such an operator. As a matter of fact, our study is motivated by the following two results: (1) A linear compact operator on a locally uniformly convex Banach space into itself satisfies the Daugavet equation if and only if its norm is an eigenvalue for the operator [1, Theorem 2.7]; and (2) Every compact operator on a Hilbert space has a norm attaining vector for the operator [4, p.85]. In what follows capital letters mean bounded linear operators on H; T∗ and I denote the adjoint operator of T and the identity operator, respectively. We shall recall some definitions first. Received by the editors September 4, 2001 and in revised form February 8, 2002. 1991 AMS Subject Classification. 47A10, 47A30, 47A50. A unit vector x ∈ H is
  • Pseudospectra and Linearization Techniques of Rational Eigenvalue Problems
    Pseudospectra and Linearization Techniques of Rational Eigenvalue Problems Axel Torshage May 2013 Master's thesis, 30 credits, Umeå University Supervisor - Christian Engström Abstract This thesis concerns the analysis and sensitivity of nonlinear eigenvalue problems for matrices and linear operators. The first part illustrates that lack of normality may result in a catastrophically ill-conditioned eigenvalue problem. Linearizations of rational eigenvalue problems for operators over both finite and infinite dimensional spaces are considered. The standard approach is to multiply by the least common denominator in the rational term and apply a well known linearization technique to the polynomial eigenvalue problem. However, the symmetry of the original problem is lost, which may result in a more ill-conditioned problem. In this thesis, an alternative linearization method is used and the sensitivity of the two different linearizations is studied. Moreover, this work contains numerically solved rational eigenvalue problems with applications in photonic crystals. For these examples the pseudospectra are used to show how well-conditioned the problems are, which indicates whether the solutions are reliable or not. Contents: 1 Introduction (3); 1.1 History of spectra and pseudospectra (7); 2 Normal and almost normal matrices (9); 2.1 Perturbation of normal matrix (13); 3 Linearization of eigenvalue problems (16); 3.1 Properties of the finite dimension case (18); 3.2 Finite and infinite physical system (23); 3.2.1 Non-damped polynomial method (24); 3.2.2 Non-damped rational method (25); 3.2.3 Damped case (27); 3.2.4 Damped polynomial method (28); 3.2.5 Damped rational method.
  • The Significance of the Imaginary Part of the Weak Value
    The significance of the imaginary part of the weak value J. Dressel and A. N. Jordan Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA (Dated: July 30, 2018) Unlike the real part of the generalized weak value of an observable, which can in a restricted sense be operationally interpreted as an idealized conditioned average of that observable in the limit of zero measurement disturbance, the imaginary part of the generalized weak value does not provide information pertaining to the observable being measured. What it does provide is direct information about how the initial state would be unitarily disturbed by the observable operator. Specifically, we provide an operational interpretation for the imaginary part of the generalized weak value as the logarithmic directional derivative of the post-selection probability along the unitary flow generated by the action of the observable operator. To obtain this interpretation, we revisit the standard von Neumann measurement protocol for obtaining the real and imaginary parts of the weak value and solve it exactly for arbitrary initial states and post-selections using the quantum operations formalism, which allows us to understand in detail how each part of the generalized weak value arises in the linear response regime. We also provide exact treatments of qubit measurements and Gaussian detectors as illustrative special cases, and show that the measurement disturbance from a Gaussian detector is purely decohering in the Lindblad sense, which allows the shifts for a Gaussian detector to be completely understood for any coupling strength in terms of a single complex weak value that involves the decohered initial state.
  • On Commuting Compact Self-Adjoint Operators on a Pontryagin Space∗
    On commuting compact self-adjoint operators on a Pontryagin space∗ Tirthankar Bhattacharyya, Poorna Prajna Institute for Scientific Research, 4 Sadshivnagar, Bangalore 560080, India. e-mail: [email protected] and Tomaž Košir, Department of Mathematics, University of Ljubljana, Jadranska 19, 1000 Ljubljana, Slovenia. e-mail: [email protected] July 20, 2000 (∗Research was supported in part by the Ministry of Science and Technology of Slovenia and the National Board of Higher Mathematics, India.) Abstract Suppose that A1, A2, . . . , An are compact commuting self-adjoint linear maps on a Pontryagin space K of index k and that their joint root subspace M0 at the zero eigenvalue in Cn is a nondegenerate subspace. Then there exist joint invariant subspaces H and F in K such that K = F ⊕ H, H is a Hilbert space and F is a finite-dimensional space with k ≤ dim F ≤ (n + 2)k. We also consider the structure of restrictions Aj|F in the case k = 1. 1 Introduction Let K be a Pontryagin space whose index of negativity (henceforward called index) is k and A be a compact self-adjoint operator on K with non-degenerate root subspace at the eigenvalue 0. Then K can be decomposed into an orthogonal direct sum of a Hilbert subspace and a Pontryagin subspace both of which are invariant under A and this Pontryagin subspace has dimension at most 3k. This has many applications among which we mention the study of elliptic multiparameter problems [2]. Binding and Seddighi gave a complete proof of this decomposition in [3] and in fact proved that non-degeneracy of the root subspace at 0 is necessary and sufficient for such a decomposition. They show
  • On Spectra of Quadratic Operator Pencils with Rank One Gyroscopic Linear Part
    On spectra of quadratic operator pencils with rank one gyroscopic linear part Olga Boyko, Olga Martynyuk, Vyacheslav Pivovarchik January 3, 2018 Abstract. The spectrum of a selfadjoint quadratic operator pencil of the form λ²M − λG − A, where M ≥ 0, G ≥ 0 are bounded operators and A is selfadjoint bounded below, is investigated. It is shown that in the case of a rank one operator G the eigenvalues of such a pencil are of two types. The eigenvalues of one of these types are independent of the operator G. Location of the eigenvalues of both types is described. Examples for the case of the Sturm-Liouville operators A are given. keywords: quadratic operator pencil, gyroscopic force, eigenvalues, algebraic multiplicity 2010 Mathematics Subject Classification: Primary 47A56, Secondary 47E05, 81Q10 arXiv:1801.00463v1 [math-ph] 1 Jan 2018 1 Introduction. Quadratic operator pencils of the form L(λ) = λ²M − λG − A with a selfadjoint operator A bounded below describing potential energy, a bounded symmetric operator M ≥ 0 describing inertia of the system and an operator G bounded or subordinate to A occur in different physical problems where, in most of cases, they have spectra consisting of normal eigenvalues (see below Definition 2.2). Usually the operator G is symmetric (see, e.g. [7], [20] and Chapter 4 in [13]) or antisymmetric (see [16] and Chapter 2 in [13]). In the first case G describes gyroscopic effect while in the latter case damping forces. The problems in which gyroscopic forces occur can be found in [2], [1], [21], [23], [12], [10], [11], [14].
  • On Almost Normal Matrices
    W&M ScholarWorks Undergraduate Honors Theses Theses, Dissertations, & Master Projects 6-2013 On Almost Normal Matrices Tyler J. Moran College of William and Mary Follow this and additional works at: https://scholarworks.wm.edu/honorstheses Part of the Mathematics Commons Recommended Citation Moran, Tyler J., "On Almost Normal Matrices" (2013). Undergraduate Honors Theses. Paper 574. https://scholarworks.wm.edu/honorstheses/574 This Honors Thesis is brought to you for free and open access by the Theses, Dissertations, & Master Projects at W&M ScholarWorks. It has been accepted for inclusion in Undergraduate Honors Theses by an authorized administrator of W&M ScholarWorks. For more information, please contact [email protected]. On Almost Normal Matrices A thesis submitted in partial fulfillment of the requirement for the degree of Bachelor of Science in Mathematics from The College of William and Mary by Tyler J. Moran Accepted for Ilya Spitkovsky, Director Charles Johnson Ryan Vinroot Donald Campbell Williamsburg, VA April 3, 2013 Abstract An n-by-n matrix A is called almost normal if the maximal cardinality of a set of orthogonal eigenvectors is at least n−1. We give several basic properties of almost normal matrices, in addition to studying their numerical ranges and Aluthge transforms. First, a criterion for these matrices to be unitarily irreducible is established, in addition to a criterion for A∗ to be almost normal and a formula for the rank of the self commutator of A. We then show that unitarily irreducible almost normal matrices cannot have flat portions on the boundary of their numerical ranges and that the Aluthge transform of A is never normal when n > 2 and A is unitarily irreducible and invertible.
  • Topological Analysis of Eigenvalues in Engineering Computations
    Topological analysis of eigenvalues in engineering computations R.V.N. Melnik, Mathematical Modelling of Industrial Processes, CSIRO Mathematical and Information Sciences - Sydney, Macquarie University Campus, North Ryde, NSW, Australia. Received October 1999, accepted February 2000. Keywords Eigenvalues, Analysis, Topology, Numerical methods, Engineering Abstract The dynamics of coupling between spectrum and resolvent under ε-perturbations of operator and matrix spectra are studied both theoretically and numerically. The phenomenon of non-trivial pseudospectra encountered in these dynamics is treated by relating information in the complex plane to the behaviour of operators and matrices. On a number of numerical results we show how an intrinsic blend of theory with symbolic and numerical computations can be used effectively for the analysis of spectral problems arising from engineering applications. 1. Introduction The information on spectra of matrices and operators has primary importance in many branches of engineering sciences. In particular, this information is indispensable in analysing stability of mechanical systems, fluid flows, and electronic devices. It is at the heart of spectral-type (including pseudospectral) methods widely used for approximations of differential and integral operators arising from engineering applications (Fornberg, 1996; Quarteroni and Valli, 1997). The study of eigenvalues has been revolutionised by the ready availability of computing power. Surprising results, however, are quick to appear in engineering practice and present well-known, yet non-trivial, difficulties. These difficulties often come from the following standard procedure applied to the stability analysis of many engineering problems (Baggett et al., 1995):
  • Spectral Theory of Elliptic Differential Operators with Indefinite Weights
    Proceedings of the Royal Society of Edinburgh, 143A, 21–38, 2013 Spectral theory of elliptic differential operators with indefinite weights Jussi Behrndt Institut für Numerische Mathematik, Technische Universität Graz, Steyrergasse 30, 8010 Graz, Austria ([email protected]) (MS received 23 June 2011; accepted 6 December 2011) The spectral properties of a class of non-self-adjoint second-order elliptic operators with indefinite weight functions on unbounded domains Ω are investigated. It is shown, under an abstract regularity assumption, that the non-real spectrum of the associated elliptic operators in L²(Ω) is bounded. In the special case where Ω = Rn decomposes into subdomains Ω+ and Ω− with smooth compact boundaries and the weight function is positive on Ω+ and negative on Ω−, it turns out that the non-real spectrum consists only of normal eigenvalues that can be characterized with a Dirichlet-to-Neumann map. 1. Introduction This paper studies the spectral properties of partial differential operators associated to second-order elliptic differential expressions of the form Lf = (1/r) ℓ(f), ℓ(f) = −Σ_{j,k=1}^n ∂/∂xj (ajk ∂f/∂xk) + af, (1.1) with variable coefficients ajk, a and a weight function r defined on some bounded or unbounded domain Ω ⊂ Rn, n > 1. It is assumed that the differential expression is formally symmetric and uniformly elliptic. The peculiarity here is that the function r is allowed to have different signs on subsets of positive Lebesgue measure of Ω. For this reason L is said to be an indefinite elliptic differential expression. The differential expression in (1.1) gives rise to a self-adjoint unbounded operator A in the Hilbert space L²(Ω) which is defined on the dense linear subspace dom A = {f ∈ H¹₀(Ω) : ℓ(f) ∈ L²(Ω)}.
  • Nonselfadjoint Operators in Diffraction and Scattering *)
    Math. Meth. in the Appl. Sci. 2 (1980) 327-346 AMS subject classification: 47A50, 47A55, 47A40, 47A70 Nonselfadjoint Operators in Diffraction and Scattering *) A. G. Ramm, Ann Arbor, MI Communicated by R. P. Gilbert Contents 1. Introduction 2. When do the eigenvectors of T and A form a basis of H? 3. When do T and A have no root vectors? 4. What can be said about the location and properties of the complex poles? 5. How to calculate the poles of the Green function? Do the poles depend continuously on the boundary of the obstacle? Appendix 1. Losses in open resonators Appendix 2. An example on complex scaling Appendix 3. Variational principles for eigenvalues of compact nonselfadjoint operators Bibliographical note Unsolved problems References 1 Introduction Consider the following problem (1) (Δ + k²)u = 0 in Ω, (2) ∂u/∂N = f on Γ, (3) |x|(∂u/∂|x| − iku) → 0 as |x| → ∞, where Ω is an unbounded domain with a smooth closed compact surface Γ ∈ C². *) This work was supported by AFOSR F4962079C0128. If we look for a solution in the form … where … If the boundary condition is of the form (7) u = f on Γ, then the integral equation for g takes the form … where … If one wishes to solve equations (8), (5) by means of expansions in root vectors, one must prove that the root vectors of operators A and T form a basis of H = L²(Γ). Both operators are compact and nonselfadjoint.
  • Inverse-Covariance Matrix of Linear Operators for Quantum Spectrum Scanning
    Inverse-covariance matrix of linear operators for quantum spectrum scanning Hamse Y. Mussa1, Jonathan Tennyson2, and Robert C. Glen1 1Unilever Centre for Molecular Sciences Informatics, Department of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB2 1EW, UK 2Department of Physics and Astronomy, University College London, London WC1E 6BT, UK E-mail: [email protected] Abstract. It is demonstrated that the Schrödinger operator in Ĥ|ψk⟩ = Ek|ψk⟩ can be associated with a covariance matrix whose eigenvalues are the squares of the spectrum σ(Ĥ + Iζ), where ζ is an arbitrarily chosen shift. An efficient method for extracting σ(Ĥ) components, in the vicinity of ζ, from a few specially selected eigenvectors of the inverse of the covariance matrix is derived. The method encapsulates (and improves on) the three most successful quantum spectrum scanning schemes: Filter-Diagonalization, Shift-and-invert Lanczos and Folded Spectrum Method. It gives physical insight into the scanning process. The new method can also be employed to probe the nature of underlying potential energy surfaces. A sample application to the near-dissociation vibrational spectrum of the HOCl molecule is presented. PACS numbers: 03.65.-w, 02.70.-c, 02.10.10.Yn, 02.30.Tb, 02.50.Sk, 02.30.Zz, 02.30Yy Keywords: Eigensolver, Mixed state, Quantum Control 1. Introduction Quantum mechanics provides our understanding of the behaviour of microscopic objects such as atoms, molecules and their constituents. Thus non-relativistic studies of these objects and any process involving them essentially requires solving the appropriate Schrödinger equation.
  • Minutes of Final Special NCRC Meeting in Physics
    DEPARTMENT OF PHYSICS SCHEME OF STUDIES FOR BS PHYSICS (credit hours in parentheses).
    Year 1, Semester 1: PHY-102 Mechanics (4), PHY-103 Lab-I (1), ENG-101 English-I (3), MATH-101 Calculus-I (3), CS-101 Fundamentals of Computing (3), CHEM-101 Chemistry-I (3), ISL-101 Islamiyat (Islamic Studies) (2). Total credit hours: 19.
    Year 1, Semester 2: PHY-104 Electricity & Magnetism (4), PHY-105 Lab-II (1), ENG-102 English-II (3), MATH-102 Calculus-II (3), PST-101 Pakistan Studies (2), CHEM-102 Chemistry-II (3). Total credit hours: 16.
    Year 2, Semester 3: PHY-106 Waves & Oscillations (3), PHY-107 Heat & Thermodynamics (4), PHY-108 Lab-III (1), ENG-103 English-III (3), MATH-103 Differential Equations (3), MATH-104 Complex Variable, Infinite & Fourier Series (3). Total credit hours: 17.
    Year 2, Semester 4: PHY-109 Optics (3), PHY-110 Modern Physics (3), PHY-111 Lab-IV (1), MATH-105 Linear Algebra (3), STAT-101 Probability and Statistics (3), CS-112 Programming Fundamentals (3). Total credit hours: 16.
    Year 3, Semester 5: PHY-301 Mathematical Methods of Physics-I (3), PHY-302 Electromagnetic Theory-I (3), PHY-303 Classical Mechanics (3), PHY-304 Basic Electronics (3), PHY-305 Lab-V (2), PSY-301 Social Psychology (3). Total credit hours: 17.
    Year 3, Semester 6: PHY-306 Mathematical Methods of Physics-II (3), PHY-307 Quantum Mechanics-I (3), PHY-308 Electromagnetic Theory-II (3), PHY-309 Statistical Physics (3), PHY-310 Lab-VI (2), PHIL-302 Ethics (2). Total credit hours: 16.
    Year 4, Semester 7: PHY-311 Quantum Mechanics-II (3), PHY-312 Atomic & Molecular Physics (3), PHY-313 Solid State Physics-I (3), PHY-314 Lab-VII (2), Elective-I (3), Elective-II (3). Total credit hours: 17.
    Year 4, Semester 8: PHY-315 Solid State Physics-II (3), PHY-316 Nuclear Physics (3), Elective-III