Math 432 - Real Analysis II
Solutions to Final Examination


Question 1. For the statements below, decide whether each is True or False. If True, provide a proof or reason. If False, provide a counterexample or disproof.

(a) The vector space $M_2(\mathbb{R})$ of $2 \times 2$ matrices with real entries is an inner product space with inner product given by $\langle A, B \rangle = \operatorname{Tr}(AB)$, where $\operatorname{Tr}$ is the trace of a matrix.

(b) Let $V$ be an $n$-dimensional inner product space. For any $x \in V$, consider the subspace $U = \operatorname{span}(x)$. Then $\dim(U^\perp) = n - 1$.

(c) Let $V$ be a normed vector space. Then $V$ is an inner product space with norm induced by the inner product if and only if the norm satisfies the parallelogram law.

Solution 1.

(a) False. To see this, consider the matrix
\[
A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.
\]
Notice that
\[
A \cdot A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.
\]
However, $\operatorname{Tr}(A \cdot A) = -1 - 1 = -2 < 0$, which contradicts the positive-definiteness axiom. Thus this definition of $\langle \cdot, \cdot \rangle$ is not an inner product.

(b) False. Consider the zero vector $x = 0$. Then $U = \operatorname{span}\{0\} = \{0\}$, the zero subspace. However, $\{0\}^\perp = V$, so the dimension of its orthogonal complement is $n$, not $n - 1$. Note: this is the only counterexample to the claim, since $\dim(U^\perp) = n - 1$ whenever $x \neq 0$.

(c) True. Any inner product space is a normed vector space under the norm induced by the inner product, and that norm always satisfies the parallelogram law. Conversely, by the Fréchet–von Neumann–Jordan Theorem, if the norm satisfies the parallelogram law, then it is induced by an inner product.
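The counterexample in 1(a) and the criterion in 1(c) are easy to check numerically. Below is a minimal sketch (assuming NumPy; it is our illustration, not part of the original exam) that evaluates $\operatorname{Tr}(A \cdot A)$ for the matrix above and spot-checks the parallelogram law for the norm induced by $\operatorname{Tr}(AB^T)$:

```python
import numpy as np

# Question 1(a): <A, B> = Tr(AB) fails positive-definiteness.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.trace(A @ A))        # -2.0, so <A, A> < 0: not an inner product

# Question 1(c): the norm induced by the inner product Tr(A B^T)
# satisfies the parallelogram law 2||X||^2 + 2||Y||^2 = ||X+Y||^2 + ||X-Y||^2.
def norm(M):
    return np.sqrt(np.trace(M @ M.T))

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
lhs = 2 * norm(X)**2 + 2 * norm(Y)**2
rhs = norm(X + Y)**2 + norm(X - Y)**2
print(np.isclose(lhs, rhs))   # True
```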
Question 2. In this question, we investigate the inner product space $M_2(\mathbb{R})$ of $2 \times 2$ matrices with inner product given by $\langle A, B \rangle = \operatorname{Tr}(AB^T)$, where $T$ indicates the transpose of a matrix and $\operatorname{Tr}$ denotes its trace (the sum of the diagonal entries). We have previously proven that $M_2(\mathbb{R})$ is a 4-dimensional inner product space with this inner product.

(a) Consider the subset $W \subset M_2(\mathbb{R})$ of traceless matrices; that is,
\[
W = \{ A \in M_2(\mathbb{R}) \mid \operatorname{Tr}(A) = 0 \}.
\]
In other words, $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in W$ if and only if $a + d = 0$. Show that $W$ is a subspace of $M_2(\mathbb{R})$.

(b) Show that the following three matrices form a basis for $W$:
\[
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.
\]

(c) Use (b) to compute $\dim W$.

(d) Use the Orthogonal Decomposition Theorem to compute $\dim W^\perp$.

(e) Let $\{e_1, e_2, \ldots, e_n\}$ be a basis for a subspace $U$ of an inner product space $V$. Show that $v \in U^\perp$ if and only if $\langle v, e_i \rangle = 0$ for all $i$.

(f) Compute $W^\perp$ and give a basis for it. Part (e) may be helpful in this problem.

(g) Run the Gram–Schmidt process on the basis vectors from (b) (in the order they are presented) to obtain an orthonormal basis for $W$.

(h) Consider the matrix $\begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} \in M_2(\mathbb{R})$. Write this matrix as $A + A'$, where $A \in W$ and $A' \in W^\perp$. Is this decomposition unique?

Solution 2.

(a) First, note that the zero matrix is in $W$, so $W$ is non-empty. Let $A, B \in W$, so $\operatorname{Tr}(A) = 0 = \operatorname{Tr}(B)$. Since the trace is additive,
\[
\operatorname{Tr}(A + B) = \operatorname{Tr}(A) + \operatorname{Tr}(B) = 0,
\]
so $A + B \in W$. Similarly, since $\operatorname{Tr}(cA) = c \operatorname{Tr}(A)$ for any $c \in \mathbb{R}$, we have, for any $A \in W$,
\[
\operatorname{Tr}(cA) = c \operatorname{Tr}(A) = c \cdot 0 = 0,
\]
so $cA \in W$. Thus $W$ is non-empty, closed under addition, and closed under scalar multiplication, so it is a subspace.

(b) First, we show linear independence. Assume that for $a, b, c \in \mathbb{R}$,
\[
a \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
+ b \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
+ c \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
\]
Simplifying the left-hand side, we get
\[
\begin{pmatrix} a & b \\ c & -a \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
\]
Equating entries of the matrices gives $a = b = c = 0$, so the collection is linearly independent. For span, consider an arbitrary matrix in $W$. Since it is traceless, it must be of the form
\[
\begin{pmatrix} a & b \\ c & -a \end{pmatrix},
\]
which, by the computation above, is exactly the linear combination with coefficients $a$, $b$, $c$. Thus this collection of 3 vectors is a basis for $W$.

(c) Since $W$ has a basis of 3 elements, $\dim(W) = 3$.

(d) By the Orthogonal Decomposition Theorem, $M_2(\mathbb{R}) = W \oplus W^\perp$. Thus
\[
4 = \dim(M_2(\mathbb{R})) = \dim(W \oplus W^\perp) = \dim(W) + \dim(W^\perp) = 3 + \dim(W^\perp),
\]
so $\dim(W^\perp) = 1$.

(e) If $v \in U^\perp$, then $\langle v, u \rangle = 0$ for all $u \in U$. Since each $e_i \in U$, it follows that $\langle v, e_i \rangle = 0$ for all $i$. Conversely, suppose $\langle v, e_i \rangle = 0$ for all $i$, and let $u \in U$. Since the $e_i$ form a basis for $U$, we can write $u = \alpha_1 e_1 + \cdots + \alpha_n e_n$ for some scalars $\alpha_i \in \mathbb{F}$. Thus
\[
\langle v, u \rangle = \langle v, \alpha_1 e_1 + \cdots + \alpha_n e_n \rangle = \alpha_1 \langle v, e_1 \rangle + \cdots + \alpha_n \langle v, e_n \rangle = 0
\]
(if $\mathbb{F} = \mathbb{C}$, the scalars pick up conjugates, which does not affect the conclusion). Since this holds for every $u \in U$, we conclude $v \in U^\perp$.

(f) Let
\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in W^\perp.
\]
Then $A$ is orthogonal to every element of $W$ and, in particular, to the basis elements from (b). Dotting with the first basis element (which is symmetric, so the transpose has no effect),
\[
\operatorname{Tr}\left( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \right)
= \operatorname{Tr} \begin{pmatrix} a & -b \\ c & -d \end{pmatrix} = a - d = 0,
\]
so $a = d$. Similarly, dotting with the other two basis elements gives
\[
\operatorname{Tr}\left( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}^{T} \right)
= \operatorname{Tr} \begin{pmatrix} b & 0 \\ d & 0 \end{pmatrix} = b = 0,
\qquad
\operatorname{Tr}\left( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}^{T} \right)
= \operatorname{Tr} \begin{pmatrix} 0 & a \\ 0 & c \end{pmatrix} = c = 0.
\]
Thus, using $a = d$ and $b = c = 0$, every matrix in $W^\perp$ has the form
\[
\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix} = aI.
\]
Thus $W^\perp$ is spanned by the identity matrix, and $\{I\}$ is a basis for $W^\perp$.

(g) Notice that this collection of vectors is already orthogonal, so we only need to normalize the basis elements. The first has norm $\sqrt{2}$ and the other two already have norm 1. Doing so, we obtain the orthonormal basis
\[
\begin{pmatrix} 1/\sqrt{2} & 0 \\ 0 & -1/\sqrt{2} \end{pmatrix}, \qquad
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.
\]

(h) Using the above orthonormal basis, we compute the projection onto $W$:
\[
P_W \begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} = \begin{pmatrix} -1/2 & 0 \\ -2 & 1/2 \end{pmatrix}.
\]
Clearly this matrix is traceless and thus in $W$. The orthogonal component is
\[
\begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} - \begin{pmatrix} -1/2 & 0 \\ -2 & 1/2 \end{pmatrix} = \begin{pmatrix} 3/2 & 0 \\ 0 & 3/2 \end{pmatrix},
\]
which is a multiple of $I$ and hence lies in $W^\perp$. Thus we have the decomposition
\[
\begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} = \begin{pmatrix} -1/2 & 0 \\ -2 & 1/2 \end{pmatrix} + \begin{pmatrix} 3/2 & 0 \\ 0 & 3/2 \end{pmatrix}.
\]
The decomposition is unique, since the Orthogonal Decomposition Theorem gives $M_2(\mathbb{R}) = W \oplus W^\perp$ as a direct sum.
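As a sanity check on parts (g) and (h), the following sketch (again assuming NumPy; the helper names are ours, not the exam's) projects the given matrix onto $W$ using the orthonormal basis from (g) and recovers the decomposition computed above:

```python
import numpy as np

def inner(A, B):
    """The inner product of Question 2: <A, B> = Tr(A B^T)."""
    return np.trace(A @ B.T)

# Orthonormal basis for W from part (g)
basis = [
    np.array([[1.0, 0.0], [0.0, -1.0]]) / np.sqrt(2),
    np.array([[0.0, 1.0], [0.0, 0.0]]),
    np.array([[0.0, 0.0], [1.0, 0.0]]),
]

M = np.array([[ 1.0, 0.0],
              [-2.0, 2.0]])

# Orthogonal projection onto W: P = sum_i <M, u_i> u_i
P = sum(inner(M, u) * u for u in basis)

print(P)            # [[-0.5  0. ] [-2.   0.5]]  -- the component A in W
print(M - P)        # [[ 1.5  0. ] [ 0.   1.5]]  -- the component A' in W-perp
print(np.trace(P))  # 0.0, confirming P is traceless
```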
Question 3. Consider $C([0,1])$, the space of continuous functions on $[0,1]$. Several norms can be placed on this vector space. Three of the most popular are the $L^1$, $L^2$, and $L^\infty$ norms, denoted by $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_\infty$, respectively. They are given by
\[
\|f\|_1 = \int_0^1 |f(x)| \, dx, \qquad
\|f\|_2 = \sqrt{\int_0^1 [f(x)]^2 \, dx}, \qquad
\|f\|_\infty = \sup\{ |f(x)| : x \in [0,1] \}.
\]

(a) Explain why we can replace the definition of $\|\cdot\|_\infty$ by $\|f\|_\infty = \max\{ |f(x)| : x \in [0,1] \}$.

(b) Use the Cauchy–Schwarz Inequality to prove that $\|f\|_1 \le \|f\|_2$ for any $f \in C([0,1])$.

(c) Use properties of the integral to show that $\|f\|_1 \le \|f\|_\infty$ for any $f \in C([0,1])$.

(d) As with any normed vector space, we can obtain a metric space from the norm in a canonical way. In particular, the $L^1$ and $L^\infty$ norms give two different metrics on $C([0,1])$:
\[
d_1(f, g) = \|f - g\|_1, \qquad d_\infty(f, g) = \|f - g\|_\infty.
\]
Use (c) to show that if $f_n \to f$ in the $d_\infty$ metric, then $f_n \to f$ in the $d_1$ metric. Provide an $\varepsilon$–$N$ proof.

(e) Consider the sequence of functions $f_n(x) = x^n \in C([0,1])$. Compute $\|f_n\|_1$ and use this to show that $f_n \to 0$ in the $d_1$ metric.

(f) For $f_n(x) = x^n$, compute $\|f_n\|_\infty$. Use this to show that $f_n \not\to 0$ in the $d_\infty$ metric, and hence that the converse of the statement in (d) is false.

(g) Use (b) to provide a statement (similar to the one in (d)) relating the convergence of $f_n \to f$ in the metrics induced by the $L^1$ and $L^2$ norms.

Solution 3.

(a) Since $f$ is continuous (and thus $|f|$ is continuous) and $[0,1]$ is compact, $|f|$ must attain a maximum on $[0,1]$. This maximum equals the supremum, so we may replace "sup" with "max".

(b) We apply the Cauchy–Schwarz inequality to the vectors $|f|$ and $\mathbf{1}$, the constant function. Doing so, we get
\[
\|f\|_1 = \int_0^1 |f(x)| \cdot 1 \, dx = |\langle |f|, \mathbf{1} \rangle| \le \| \, |f| \, \|_2 \cdot \|\mathbf{1}\|_2 = \|f\|_2 \cdot 1 = \|f\|_2.
\]

(c) By definition of the $L^\infty$ norm, $|f(x)| \le \|f\|_\infty$ for all $x \in [0,1]$. Thus
\[
\|f\|_1 = \int_0^1 |f(x)| \, dx \le \int_0^1 \|f\|_\infty \, dx = \|f\|_\infty.
\]

(d) Let $\varepsilon > 0$. Since $f_n \to f$ in the $d_\infty$ metric, there exists an $N$ such that $\|f_n - f\|_\infty < \varepsilon$ for all $n > N$. For this same $N$ and all $n > N$, part (c) gives
\[
\|f_n - f\|_1 \le \|f_n - f\|_\infty < \varepsilon.
\]
Thus $f_n \to f$ in the $d_1$ metric.

(e) Computing, we get
\[
\|f_n\|_1 = \int_0^1 |f_n(x)| \, dx = \int_0^1 x^n \, dx = \frac{1}{n+1}.
\]
Thus $\|f_n - 0\|_1 = 1/(n+1) \to 0$ as $n \to \infty$, so $f_n \to 0$ in the $d_1$ metric.
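The contrast between parts (e) and (f) is easy to see numerically. Here is a short sketch (assuming NumPy; our illustration, not part of the exam) that approximates $\|f_n\|_1$ and $\|f_n\|_\infty$ for $f_n(x) = x^n$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)

for n in (1, 10, 100, 1000):
    fn = x**n
    l1 = fn.mean()             # Riemann-sum approximation of ||f_n||_1 = 1/(n+1)
    linf = np.abs(fn).max()    # ||f_n||_inf = 1 for every n, attained at x = 1
    print(f"n = {n:4d}   ||f_n||_1 ~ {l1:.5f}   ||f_n||_inf = {linf:.0f}")
```

The $L^1$ norms shrink to $0$ while the sup norms stay pinned at $1$, matching the claim in (e) and previewing why the converse of (d) fails in (f).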