Matrices, Vectors, Determinants, and Linear Algebra - Tadao ODA


MATHEMATICS: CONCEPTS, AND FOUNDATIONS – Vol. I

MATRICES, VECTORS, DETERMINANTS, AND LINEAR ALGEBRA

Tadao ODA
Tohoku University, Japan

Keywords: matrix, determinant, linear equation, Cramer's rule, eigenvalue, Jordan canonical form, symmetric matrix, vector space, linear map

Contents

1. Matrices, Vectors and their Basic Operations
1.1. Matrices
1.2. Vectors
1.3. Addition and Scalar Multiplication of Matrices
1.4. Multiplication of Matrices
2. Determinants
2.1. Square Matrices
2.2. Determinants
2.3. Cofactors and the Inverse Matrix
3. Systems of Linear Equations
3.1. Linear Equations
3.2. Cramer's Rule
3.3. Eigenvalues of a Complex Square Matrix
3.4. Jordan Canonical Form
4. Symmetric Matrices and Quadratic Forms
4.1. Real Symmetric Matrices and Orthogonal Matrices
4.2. Hermitian Symmetric Matrices and Unitary Matrices
5. Vector Spaces and Linear Algebra
5.1. Vector Spaces
5.2. Subspaces
5.3. Direct Sum of Vector Spaces
5.4. Linear Maps
5.5. Change of Bases
5.6. Properties of Linear Maps
5.7. A System of Linear Equations Revisited
5.8. Quotient Vector Spaces
5.9. Dual Spaces
5.10. Tensor Product of Vector Spaces
5.11. Symmetric Product of a Vector Space
5.12. Exterior Product of a Vector Space
Glossary
Bibliography
Biographical Sketch

Summary

A down-to-earth introduction to matrices and their basic operations will be followed by basic results on determinants, systems of linear equations, eigenvalues, real symmetric matrices, and complex Hermitian symmetric matrices. Abstract vector spaces and linear maps will then be introduced. The power and merit of this seemingly useless abstraction will make the earlier results on matrices more transparent and easily understandable.

Matrices and linear algebra play important roles in applications. Unfortunately, however, space limitations prevent a description of the algorithmic and computational aspects of linear algebra that are indispensable to applications. Readers are referred to the references listed at the end.

1. Matrices, Vectors and their Basic Operations

1.1. Matrices

A matrix is a rectangular array

$$
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn}
\end{pmatrix}
$$

of entries $a_{11}, \ldots, a_{mn}$, which are numbers or symbols. Very often, such a matrix will be denoted by a single letter such as $A$, thus

$$
A := \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn}
\end{pmatrix}.
$$

The notation $A = (a_{ij})$ is also used, for short. In this notation, the first index $i$ is called the row index, while the second index $j$ is called the column index.

Each of the horizontal arrays is called a row; thus

$$
(a_{11}, a_{12}, \ldots, a_{1j}, \ldots, a_{1n}),\quad
(a_{21}, a_{22}, \ldots, a_{2j}, \ldots, a_{2n}),\quad \ldots,\quad
(a_{i1}, a_{i2}, \ldots, a_{ij}, \ldots, a_{in}),\quad \ldots,\quad
(a_{m1}, a_{m2}, \ldots, a_{mj}, \ldots, a_{mn})
$$

are called the first row, second row, …, $i$-th row, …, $m$-th row, respectively. On the other hand, each of the vertical arrays is called a column; thus

$$
\begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{i1} \\ \vdots \\ a_{m1} \end{pmatrix},\quad
\begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{i2} \\ \vdots \\ a_{m2} \end{pmatrix},\quad \ldots,\quad
\begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{ij} \\ \vdots \\ a_{mj} \end{pmatrix},\quad \ldots,\quad
\begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{in} \\ \vdots \\ a_{mn} \end{pmatrix}
$$

are called the first column, second column, …, $j$-th column, …, $n$-th column, respectively.

Such an $A$ is called a matrix with $m$ rows and $n$ columns, an $(m, n)$-matrix, or an $m \times n$ matrix. An $(m, n)$-matrix with all entries $0$ is called the zero matrix and written simply as $0$, thus

$$
0 := \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{pmatrix}.
$$
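The article contains no code, but the indexing conventions above are easy to see in practice. Here is a minimal NumPy sketch (an addition to the text, not part of the original article); note that NumPy indexes rows and columns from 0, while the text indexes from 1.

```python
import numpy as np

# A (2, 3)-matrix A = (a_ij) with m = 2 rows and n = 3 columns.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)    # (2, 3)
print(A[0, :])    # first row vector:     [1 2 3]
print(A[:, 1])    # second column vector: [2 5]

Z = np.zeros((2, 3))   # the (2, 3) zero matrix
```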
1.2. Vectors

A matrix with only one row, or only one column, is called a vector. Thus

$$
(a_1, a_2, \ldots, a_j, \ldots, a_n)
$$

is a row vector, while

$$
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_m \end{pmatrix}
$$

is a column vector.

The rows and columns of an $(m, n)$-matrix $A$ above are thus called the first row vector, second row vector, …, $i$-th row vector, …, $m$-th row vector, and the first column vector, second column vector, …, $j$-th column vector, …, $n$-th column vector.

A $(1, 1)$-matrix, i.e., a number or a symbol, is called a scalar.

1.3. Addition and Scalar Multiplication of Matrices

The addition of two $(m, n)$-matrices $A = (a_{ij})$ and $B = (b_{ij})$ is defined by

$$
A + B := (a_{ij} + b_{ij}) = \begin{pmatrix}
a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1j}+b_{1j} & \cdots & a_{1n}+b_{1n} \\
a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2j}+b_{2j} & \cdots & a_{2n}+b_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{i1}+b_{i1} & a_{i2}+b_{i2} & \cdots & a_{ij}+b_{ij} & \cdots & a_{in}+b_{in} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{m1}+b_{m1} & a_{m2}+b_{m2} & \cdots & a_{mj}+b_{mj} & \cdots & a_{mn}+b_{mn}
\end{pmatrix}
$$

when the addition of the entries makes sense.

The multiplication of a scalar $c$ with an $(m, n)$-matrix $A = (a_{ij})$ is defined by

$$
cA := (c a_{ij}) = \begin{pmatrix}
c a_{11} & c a_{12} & \cdots & c a_{1j} & \cdots & c a_{1n} \\
c a_{21} & c a_{22} & \cdots & c a_{2j} & \cdots & c a_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
c a_{i1} & c a_{i2} & \cdots & c a_{ij} & \cdots & c a_{in} \\
\vdots & \vdots & & \vdots & & \vdots \\
c a_{m1} & c a_{m2} & \cdots & c a_{mj} & \cdots & c a_{mn}
\end{pmatrix}
$$

when the multiplication of a scalar with the entries makes sense.

1.4. Multiplication of Matrices

What makes matrices most interesting and powerful is multiplication, which does wonders as explained below. Suppose that the entries appearing in our matrices are numbers which admit multiplication. Then the product $AB$ of two matrices $A$ and $B$ is defined when the number of columns of $A$ is the same as the number of rows of $B$.

Let $A = (a_{ij})$ be an $(l, m)$-matrix and $B = (b_{jk})$ an $(m, n)$-matrix. Then their product is the $(l, n)$-matrix defined by

$$
AB := (c_{ik}), \quad \text{with} \quad c_{ik} := \sum_{j=1}^{m} a_{ij} b_{jk},
$$

or more concretely,

$$
AB = \begin{pmatrix}
a_{11} b_{11} + \cdots + a_{1m} b_{m1} & \cdots & a_{11} b_{1k} + \cdots + a_{1m} b_{mk} & \cdots & a_{11} b_{1n} + \cdots + a_{1m} b_{mn} \\
\vdots & & \vdots & & \vdots \\
a_{i1} b_{11} + \cdots + a_{im} b_{m1} & \cdots & a_{i1} b_{1k} + \cdots + a_{im} b_{mk} & \cdots & a_{i1} b_{1n} + \cdots + a_{im} b_{mn} \\
\vdots & & \vdots & & \vdots \\
a_{l1} b_{11} + \cdots + a_{lm} b_{m1} & \cdots & a_{l1} b_{1k} + \cdots + a_{lm} b_{mk} & \cdots & a_{l1} b_{1n} + \cdots + a_{lm} b_{mn}
\end{pmatrix}.
$$
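As a quick sanity check of these definitions (again an addition, not part of the original article), the NumPy sketch below forms a sum, a scalar multiple, and a product, and recomputes one entry of the product directly from the defining sum $c_{ik} = \sum_j a_{ij} b_{jk}$; NumPy's `@` operator implements exactly this matrix product for numeric arrays.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

S = A + B      # entrywise sum (a_ij + b_ij)
C = 3 * A      # scalar multiple (c a_ij)

# Product AB: defined because the number of columns of A
# equals the number of rows of B.
P = A @ B

# The (1, 1) entry (0-based: P[0, 0]) recomputed from the defining sum.
c11 = sum(A[0, j] * B[j, 0] for j in range(A.shape[1]))
assert P[0, 0] == c11
```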
Of particular interest is the product $Av$ of an $(m, n)$-matrix $A = (a_{ij})$ with a column vector $v$ of size $n$, which is the column vector of size $m$ defined by

$$
\begin{pmatrix}
a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\
\vdots & & \vdots & & \vdots \\
a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & & \vdots & & \vdots \\
a_{m1} & \cdots & a_{mj} & \cdots & a_{mn}
\end{pmatrix}
\begin{pmatrix} v_1 \\ \vdots \\ v_j \\ \vdots \\ v_n \end{pmatrix}
=
\begin{pmatrix}
a_{11} v_1 + \cdots + a_{1n} v_n \\
\vdots \\
a_{i1} v_1 + \cdots + a_{in} v_n \\
\vdots \\
a_{m1} v_1 + \cdots + a_{mn} v_n
\end{pmatrix},
$$

as well as the product $uA$ of a row vector $u = (u_1, \ldots, u_m)$ of size $m$ with $A$, which is the row vector of size $n$ defined by

$$
(u_1, \ldots, u_i, \ldots, u_m)
\begin{pmatrix}
a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\
\vdots & & \vdots & & \vdots \\
a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & & \vdots & & \vdots \\
a_{m1} & \cdots & a_{mj} & \cdots & a_{mn}
\end{pmatrix}
= (u_1 a_{11} + \cdots + u_m a_{m1},\ \ldots,\ u_1 a_{1j} + \cdots + u_m a_{mj},\ \ldots,\ u_1 a_{1n} + \cdots + u_m a_{mn}).
$$

The transpose $A^{\mathrm{T}}$ of an $(m, n)$-matrix $A = (a_{ij})$ is the $(n, m)$-matrix defined by

$$
A^{\mathrm{T}} := (a'_{ji}), \quad \text{with} \quad a'_{ji} := a_{ij},
$$

or more concretely,

$$
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn}
\end{pmatrix}^{\mathrm{T}}
:=
\begin{pmatrix}
a_{11} & a_{21} & \cdots & a_{i1} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{i2} & \cdots & a_{m2} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{1j} & a_{2j} & \cdots & a_{ij} & \cdots & a_{mj} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{in} & \cdots & a_{mn}
\end{pmatrix}.
$$

For an $(l, m)$-matrix $A$ and an $(m, n)$-matrix $B$, it is easy to see that

$$
(AB)^{\mathrm{T}} = B^{\mathrm{T}} A^{\mathrm{T}}
$$

when the multiplication of the numbers concerned is commutative. When $A$ and $B$ are $(n, n)$-matrices, both products $AB$ and $BA$ make sense, but they need not be the same in general.

2. Determinants

2.1. Square Matrices

Square matrices, namely matrices with the same number of rows and columns, are most interesting. Special among them is the identity matrix of size $n$, denoted by $I$ or $I_n$ and defined by

$$
I = I_n := \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix} = (\delta_{ij}),
$$

where $\delta_{ij}$ is known as Kronecker's delta, defined by

$$
\delta_{ij} := \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}.
$$

For an arbitrary $(m, n)$-matrix $A$, the following clearly holds:

$$
I_m A = A \quad \text{and} \quad A I_n = A.
$$

The matrix $cI$, with the same entry $c$ along the diagonal and $0$ elsewhere, is called a scalar matrix.

More generally, a square matrix $D = (d_{ij})$ of size $n$ is called a diagonal matrix if $d_{ij} = 0$ for $i \neq j$, that is,

$$
D = \begin{pmatrix}
d_{11} & 0 & \cdots & 0 \\
0 & d_{22} & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & d_{nn}
\end{pmatrix}.
$$

2.2. Determinants

Let $A = (a_{ij})$ be a square matrix of size $n$ (also said to be of order $n$), that is, an $(n, n)$-matrix or an $n \times n$ matrix. When the entries $a_{ij}$ are numbers (rational numbers, real numbers, complex numbers, or more generally elements of a commutative ring to be introduced in Rings and Modules), for which addition, subtraction, and commutative multiplication are possible, associated to $A$ is a number called the determinant of $A$, denoted by $|A|$ or by $\det(A)$.

When $n = 1$ or $n = 2$, the determinant is defined to be

$$
|a_{11}| := a_{11}, \qquad
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} := a_{11} a_{22} - a_{12} a_{21}.
$$

For $n = 3$, the formula is a bit more complicated:

$$
\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}
:= a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33}.
$$
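The transpose rule, the failure of commutativity, the identity property, and the small-determinant formulas can all be checked numerically. A short NumPy sketch (an addition to the text) follows; the six-term $n = 3$ formula is compared against `numpy.linalg.det`.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 1]])

# (AB)^T = B^T A^T:
assert np.array_equal((A @ B).T, B.T @ A.T)

# AB and BA both make sense here, but they differ:
print(np.array_equal(A @ B, B @ A))   # False for this A, B

# Identity matrix: I A = A = A I.
I2 = np.eye(2, dtype=int)
assert np.array_equal(I2 @ A, A) and np.array_equal(A @ I2, A)

# n = 2: a11 a22 - a12 a21.
assert np.isclose(np.linalg.det(A), 1 * 4 - 2 * 3)

# n = 3: the six-term formula from the text.
M = np.array([[1, 2, 3],
              [0, 4, 5],
              [1, 0, 6]])
six_terms = (M[0,0]*M[1,1]*M[2,2] + M[0,1]*M[1,2]*M[2,0] + M[0,2]*M[1,0]*M[2,1]
             - M[0,2]*M[1,1]*M[2,0] - M[0,0]*M[1,2]*M[2,1] - M[0,1]*M[1,0]*M[2,2])
assert np.isclose(np.linalg.det(M), six_terms)
```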
The determinant of $A = (a_{ij})$ for general $n$ is defined as follows:

$$
\begin{vmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \cdots & a_{nn}
\end{vmatrix}
:= \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{i\sigma(i)} \cdots a_{n\sigma(n)},
$$

where $\sigma$ runs through the permutations of the indices $\{1, 2, \ldots, n\}$, and $\operatorname{sgn}(\sigma)$ is the signature of $\sigma$, to be defined elsewhere in Groups and Applications. It is not so practical to compute the determinant using this formula.
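To make the definition concrete, here is a direct, deliberately naive implementation of the permutation formula (an addition to the text; the function name `det_by_permutations` is ours). The signature is computed as $(-1)^{\text{number of inversions}}$, one standard characterization; the roughly $n \cdot n!$ cost is exactly why the formula is impractical for computation.

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """Determinant as the sum over permutations sigma of
    sgn(sigma) * a[0, sigma(0)] * ... * a[n-1, sigma(n-1)].
    Costs on the order of n * n! operations: illustrative only."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # sgn(sigma) = (-1)^(number of inversions of sigma).
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        sign = -1.0 if inversions % 2 else 1.0
        term = sign
        for i in range(n):
            term *= A[i, sigma[i]]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_by_permutations(A), np.linalg.det(A))   # both give 8.0
```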