A Review of Linear Algebra, Vector Calculus, and Systems Theory

Throughout this book, methods and terminology from the area of mathematics known as linear algebra are used to facilitate analytical and numerical calculations. Linear algebra is concerned with objects that can be scaled and added together (i.e., vectors), the properties of sets of such objects (i.e., vector spaces), and special relationships between such sets (i.e., linear mappings expressed as matrices). This appendix begins by reviewing the most relevant results from linear algebra that are used elsewhere in the book. Section A.1 reviews the definition and properties of vectors, vector spaces, inner products, and norms. Section A.2 reviews matrices, matrix norms, traces, determinants, etc. Section A.3 reviews the eigenvalue–eigenvector problem. Section A.4 reviews matrix decompositions. The theory of matrix perturbations is reviewed in Section A.5, the matrix exponential is reviewed in Section A.6, and the Kronecker product of matrices is reviewed in Section A.7. Whereas the emphasis in this appendix (and throughout the book) is on real vector spaces, Section A.8 discusses the complex case. Basic linear systems theory is reviewed in Section A.9. The concept of a product integral, which is important for defining Brownian motions in Lie groups, is covered in Section A.10. Building on linear-algebraic foundations, concepts from vector and matrix calculus are reviewed in Section A.11. Section A.12 presents exercises.

A.1 Vectors

The n-dimensional Euclidean space, Rn = R × R × ... × R (n times), can be viewed as the set of all “vectors,” i.e., column arrays consisting of n real numbers xi ∈ R, of the form

    x = (x1, x2, ..., xn)^T.

A very special vector is the zero vector, 0, whose entries are all equal to the number zero.

A.1.1 Vector Spaces

Rn can be equipped with the operation of vector addition, which for any two vectors x, y ∈ Rn gives

    x + y = (x1 + y1, x2 + y2, ..., xn + yn)^T,

and with scalar multiplication by any c ∈ R,

    c · x = (c·x1, c·x2, ..., c·xn)^T.

With these operations it can be shown that the following eight properties hold:

    x + y = y + x   ∀ x, y ∈ Rn                                   (A.1)
    (x + y) + z = x + (y + z)   ∀ x, y, z ∈ Rn                    (A.2)
    x + 0 = x   ∀ x ∈ Rn                                          (A.3)
    ∃ −x ∈ Rn for each x ∈ Rn s.t. x + (−x) = 0                   (A.4)
    α · (x + y) = α · x + α · y   ∀ α ∈ R and x, y ∈ Rn           (A.5)
    (α + β) · x = α · x + β · x   ∀ α, β ∈ R and x ∈ Rn           (A.6)
    (α · β) · x = α · (β · x)   ∀ α, β ∈ R and x ∈ Rn             (A.7)
    1 · x = x   ∀ x ∈ Rn.                                         (A.8)

Here the symbol ∃ means “there exists” and ∀ means “for all.” The “·” in the above equations denotes scalar–scalar and scalar–vector multiplication.

The above properties each have names: (A.1) and (A.2) are respectively called commutativity and associativity of vector addition; (A.3) and (A.4) are respectively called the existence of an additive identity element and of an additive inverse element for each element; (A.5), (A.6), and (A.7) are three different kinds of distributive laws; and (A.8) refers to the existence of a scalar that serves as a multiplicative identity. These properties make (Rn, +, ·) a real vector space. Moreover, any abstract set, X, that is closed under the operations of vector addition and scalar multiplication and satisfies the above eight properties is a real vector space (X, +, ·).
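As a concrete illustration of these properties, the short NumPy sketch below (added here for illustration; it is not part of the book) checks (A.1)–(A.8) numerically for randomly chosen vectors and scalars. The dimension n = 4, the random seed, and the particular scalars are arbitrary choices.

```python
import numpy as np

# Numerically check the eight vector-space properties (A.1)-(A.8) in R^4.
rng = np.random.default_rng(0)
n = 4
x, y, z = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
alpha, beta = 2.5, -1.3

assert np.allclose(x + y, y + x)                              # (A.1) commutativity
assert np.allclose((x + y) + z, x + (y + z))                  # (A.2) associativity
assert np.allclose(x + np.zeros(n), x)                        # (A.3) additive identity
assert np.allclose(x + (-x), np.zeros(n))                     # (A.4) additive inverse
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)    # (A.5)
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)  # (A.6)
assert np.allclose((alpha * beta) * x, alpha * (beta * x))    # (A.7)
assert np.allclose(1.0 * x, x)                                # (A.8) multiplicative identity
print("All eight properties hold (numerically) for this sample.")
```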
If the field¹ over which properties (A.5)–(A.7) hold is extended to include complex numbers, then the result is a complex vector space.

¹ This can be thought of as the real or complex numbers. More generally, a field is an algebraic structure that is closed under addition, subtraction, multiplication, and division. For example, the rational numbers form a field.

It is often convenient to decompose an arbitrary vector x ∈ Rn into a weighted sum of the form

    x = x1 e1 + x2 e2 + ... + xn en = Σ_{i=1}^{n} xi ei.

Here the scalar–vector multiplication, ·, is implied. That is, xi ei = xi · ei. It is often convenient to drop the dot, because the scalar product of two vectors (which will be defined shortly) is also denoted with a dot. In order to avoid confusion, the dot in the scalar–vector multiplication is henceforth suppressed. Here the natural basis vectors ei are

    e1 = (1, 0, ..., 0)^T ;   e2 = (0, 1, ..., 0)^T ;   ...   en = (0, 0, ..., 1)^T.

This (or any) basis is said to span the whole vector space. And “the span” of this basis is Rn.

A.1.2 Linear Mappings and Isomorphisms

In some situations, it will be convenient to take a more general perspective. For example, when considering the tangent planes at two different points on a two-dimensional surface in three-dimensional Euclidean space, these two planes are not the same plane since they sit in space in two different ways, but they nonetheless have much in common. It is clear that scalar multiples of vectors in either one of these planes can be added together and the result will remain within that plane, and all of the other rules in (A.1)–(A.8) will also follow. By attaching an origin and coordinate system to each plane at the point where it meets the surface, all vectors tangent to the surface at that point form a vector space. If two of these planes are labeled as V1 and V2, it is clear that each one is “like” R2. In addition, given vectors in coordinate systems in either plane, it is possible to describe those two-dimensional vectors as three-dimensional vectors in the ambient three-dimensional space, E = R3, in which the surfaces sit. Both transformations between planes and from a plane into three-dimensional space are examples of linear transformations between two vector spaces of the form f : V → U, which are defined by the property

    f(a v1 + b v2) = a f(v1) + b f(v2)                            (A.9)

for all a, b ∈ R (or C if V is complex) and vi ∈ V. Most linear transformations f : V1 → V2 that will be encountered in this book will be of the form f(x) = Ax, where the dimensions of the matrix A are dim(V2) × dim(V1). (Matrices, as well as matrix–vector multiplication, are defined in Section A.2.)

The concept of two planes being equivalent is made more precise by defining the more general concept of a vector-space isomorphism. Specifically, two vector spaces, V1 and V2, are isomorphic if there exists an invertible linear transformation between them. This is reflected in the matrix A being invertible. (More about matrices will follow.) When two vector spaces are isomorphic, the notation V1 ≅ V2 is used.

A.1.3 The Scalar Product and Vector Norm

The scalar product (also called the inner product) of two vectors x, y ∈ Rn is defined as

    x · y = x1 y1 + x2 y2 + ... + xn yn = Σ_{i=1}^{n} xi yi.

Sometimes it is more convenient to write x · y as (x, y). The comma in this notation is critical. With this operation, it becomes clear that xi = x · ei. Note that the inner product is linear in each argument.
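The following sketch (again an added illustration, with an arbitrary matrix A and arbitrary vectors) demonstrates the natural-basis decomposition, the recovery of the components via xi = x · ei, and a linear map of the form f(x) = Ax satisfying the linearity property (A.9).

```python
import numpy as np

n = 3
x = np.array([2.0, -1.0, 4.0])
E = np.eye(n)  # columns of the identity are the natural basis vectors e1, ..., en

# Components recovered via the scalar product: xi = x . ei
coords = np.array([x @ E[:, i] for i in range(n)])
assert np.allclose(coords, x)

# Reassemble x as the weighted sum x = x1 e1 + x2 e2 + ... + xn en
assert np.allclose(sum(coords[i] * E[:, i] for i in range(n)), x)

# A linear map f(x) = A x from R^3 to R^2; A has dimensions dim(V2) x dim(V1) = 2 x 3.
A = np.array([[1.0, 0.0,  2.0],
              [0.0, 3.0, -1.0]])
f = lambda v: A @ v

# Property (A.9): f(a v1 + b v2) = a f(v1) + b f(v2)
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([-1.0, 0.5, 2.0])
a, b = 1.5, -0.7
assert np.allclose(f(a * v1 + b * v2), a * f(v1) + b * f(v2))
```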
For example, linearity in the second argument means that

    x · (α1 y1 + α2 y2) = α1 (x · y1) + α2 (x · y2).

Linearity in the first argument follows from the fact that the inner product is symmetric: x · y = y · x.

The vector space Rn together with the inner-product operation is called an inner-product space. The norm of a vector can be defined using the inner product as

    ‖x‖ = √(x, x).                                                (A.10)

If x ≠ 0, this will always be a positive quantity, and for any c ∈ R,

    ‖c x‖ = |c| ‖x‖.                                              (A.11)

The triangle inequality states that

    ‖x + y‖ ≤ ‖x‖ + ‖y‖.                                          (A.12)

This is exactly a statement (in vector form) of the ancient fact that the sum of the lengths of any two sides of a triangle can be no less than the length of the third side. Furthermore, the well-known Cauchy–Schwarz inequality states that

    (x, y) ≤ ‖x‖ · ‖y‖.                                           (A.13)

This is used extensively throughout the rest of the book, and it is important to know where it comes from. The proof of the Cauchy–Schwarz inequality is actually quite straightforward. To start, define f(t) as

    f(t) = (x + ty, x + ty) = ‖x + ty‖² ≥ 0.

Expanding out the inner product results in a quadratic equation in t:

    f(t) = (x, x) + 2(x, y)t + (y, y)t² ≥ 0.

Since the minimum of f(t) occurs when f′(t) = 0 (i.e., when t = −(x, y)/(y, y)), the minimal value of f(t) is

    f(−(x, y)/(y, y)) = (x, x) − (x, y)²/(y, y)

when y ≠ 0. Since f(t) ≥ 0 for all values of t, it follows that (x, y)² ≤ (x, x)(y, y), and taking square roots gives the Cauchy–Schwarz inequality. In the case when y = 0, the Cauchy–Schwarz inequality reduces to the equality 0 = 0.

Alternatively, the Cauchy–Schwarz inequality is obtained for vectors in Rn from Lagrange’s equality [2]:

    (Σ_{k=1}^{n} ak²)(Σ_{k=1}^{n} bk²) − (Σ_{k=1}^{n} ak bk)² = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} (ai bj − aj bi)²    (A.14)

by observing that the right-hand side of the equality is always non-negative. Lagrange’s equality can be proved by induction.

The norm in (A.10) is often called the “2-norm” to distinguish it from the more general vector “p-norm”

    ‖x‖_p = (Σ_{i=1}^{n} |xi|^p)^{1/p}

for 1 ≤ p ≤ ∞, which also satisfies (A.11) and (A.12).
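To tie these definitions together numerically, the sketch below (an added illustration, not from the book) evaluates the 2-norm of (A.10), checks (A.11)–(A.13) for randomly generated vectors, and computes a few p-norms. The dimension, seed, and tolerances are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)

norm = lambda v: np.sqrt(v @ v)                    # the 2-norm of (A.10)
assert np.isclose(norm(3.0 * x), 3.0 * norm(x))    # (A.11) with c = 3
assert norm(x + y) <= norm(x) + norm(y) + 1e-12    # (A.12) triangle inequality
assert x @ y <= norm(x) * norm(y) + 1e-12          # (A.13) Cauchy-Schwarz

# General p-norms; p = 2 recovers the norm above, p = inf gives the largest |xi|.
for p in (1, 2, 3, np.inf):
    print("p =", p, " ||x||_p =", np.linalg.norm(x, ord=p))
assert np.isclose(np.linalg.norm(x, ord=2), norm(x))
```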