Section 5: Linear Systems and Matrices
Washkewicz College of Engineering


Solution Methods – System of Linear Equations

Earlier we saw that a generic system of n equations in n unknowns,

$$\begin{aligned}
b_1 &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\
b_2 &= a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\
&\;\;\vdots \\
b_n &= a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n
\end{aligned}$$

could be represented in the following matrix format:

$$\begin{Bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{Bmatrix} =
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}$$

that is, {B} = [A]{X}. The elements of the square matrix [A] and the vector {b} will be known, and our goal is to find the elements of the vector {x}.

Finding the elements of the {x} vector can be accomplished using approaches from an extensive library of methods that are quite diverse. All methods seek to solve a linear system of equations that can be expressed in matrix format as

$$[A]\{x\} = \{b\}$$

for the vector {x}. If we could simply "divide" this expression by the matrix [A], i.e.,

$$\{x\} = [A]^{-1}\{b\}$$

then we could easily formulate the vector {x}. As we will see, this task is labor intensive. The methods used to accomplish it can be broadly grouped into two categories:

1. direct methods, and
2. iterative methods.

Each category contains a number of methods, and we will look at several in each. Keep in mind that hybrid methods also exist that combine methods from the two categories.

Basic Definitions

In scalar algebra we easily make use of the concepts of zero and one as follows:

$$\alpha + 0 = 0 + \alpha = \alpha \qquad \alpha \cdot 1 = 1 \cdot \alpha = \alpha$$

where α is a scalar quantity. A nonzero scalar also possesses a reciprocal, or multiplicative inverse, that when applied to the scalar quantity produces one:

$$\alpha^{-1}\,\alpha = \alpha\,\alpha^{-1} = 1$$

The above can be extended to n × n matrices. Here the scalar one (1) becomes the identity matrix [I], and zero becomes the null matrix [0], i.e.,

$$[A] + [0] = [0] + [A] = [A] \qquad [A][I] = [I][A] = [A]$$

At this point we note that if there is an n × n matrix [A]⁻¹ that pre- and post-multiplies the matrix [A] such that

$$[A]^{-1}[A] = [A][A]^{-1} = [I]$$

then the matrix [A]⁻¹ is termed the inverse of the matrix [A] with respect to matrix multiplication. The matrix [A] is said to be invertible, or non-singular, if [A]⁻¹ exists, and non-invertible, or singular, if [A]⁻¹ does not exist.

The concept of matrix inversion is important in the study of structural analysis with matrix methods. We will study this topic in detail several times and refer to it often throughout the course. We will formally define the inverse of a matrix through the use of the determinant of the matrix and its adjoint (adjugate) matrix; we will do that in a formal manner after revisiting the properties of determinants and the cofactors of a matrix. However, there are a number of methods that enable one to find the solution without finding the inverse of the matrix. Probably the best known of these is Cramer's rule, followed by Gaussian elimination and the Gauss-Jordan method.

Cramer's Rule – Three Equations and Three Unknowns

It is unfortunate that the method for the solution of linear equations that students usually remember from secondary education is Cramer's rule, which is really an expansion by minors (a topic discussed subsequently). This method is rather inefficient and relatively difficult to program. However, as it forms a sort of standard by which other methods can be judged, we will review it here for a system of three equations and three unknowns; a short numerical sketch of the rule follows below.
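Before working through the algebra on the following pages, it may help to see the rule as a short program. This is a minimal sketch, assuming NumPy is available; the helper name `cramer_3x3` and the sample system are illustrative, not part of the course notes, and `np.linalg.det` stands in for the hand expansion of determinants developed later.

```python
import numpy as np

def cramer_3x3(A, b):
    """Solve a 3x3 system [A]{x} = {b} by Cramer's rule.

    Each unknown x_j equals det(A_j) / det(A), where A_j is the matrix
    [A] with its j-th column replaced by the vector {b}.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0: [A] is singular, no unique solution")
    x = np.empty(3)
    for j in range(3):
        A_j = A.copy()
        A_j[:, j] = b                      # replace the j-th column with {b}
        x[j] = np.linalg.det(A_j) / det_A
    return x

A = [[2.0, 1.0, -1.0],
     [1.0, 3.0,  2.0],
     [1.0, 0.0,  1.0]]
b = [1.0, 12.0, 3.0]
print(cramer_3x3(A, b))          # agrees with np.linalg.solve(A, b)
```

The same loop extends to n unknowns, but, as the operation count below makes clear, a fresh determinant must be evaluated for every unknown, which is why the method is rarely used beyond small systems.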
The more general formulation is inductive. Consider the following system of three equations in terms of three unknowns {x1, x2, x3}:

$$\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}$$

where we identify

$$|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$

and

$$|A_1| = \begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix} \qquad
|A_2| = \begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix} \qquad
|A_3| = \begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix}$$

The solution is formulated as follows:

$$x_1 = \frac{|A_1|}{|A|} \qquad x_2 = \frac{|A_2|}{|A|} \qquad x_3 = \frac{|A_3|}{|A|}$$

Proof follows from the solution of a system of two equations and two unknowns. For a system of n equations in n unknowns, this solution method requires evaluating the determinant of the matrix [A] as well as the augmented matrices (see above), in which the jth column has been replaced by the elements of the vector {B}. Evaluation of the determinant of an n × n matrix requires about 3n² operations, and this must be repeated for each unknown. Thus solution by Cramer's rule will require at least 3n³ operations.

Gaussian Elimination

Let us consider a simpler algorithm, which forms the basis for one of the most reliable and stable direct methods for the solution of linear equations. It also provides a method for the inversion of matrices. Let us begin by describing the method and then try to understand why it works. Consider representing the set of linear equations as

$$\left(\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & & & \vdots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_n
\end{array}\right)$$

Here we have suppressed the presence of the elements of the solution vector {x}, and parentheses are used in lieu of brackets and braces so as not to imply matrix multiplication in this expression. We will refer to the above as an "augmented matrix."

Now we perform a series of operations on the rows and columns of the coefficient matrix [A], and we shall carry the row operations through to include the elements of the constant vector {B}. The rows are treated as if they were the equations, so that anything done to one element is done to all. Start by dividing each row, including the element of the vector {B}, by the lead element in the row – initially a11. The first row is then multiplied by an appropriate constant and subtracted from all the lower rows. Thus all rows but the first will have zero in the first column, and the first row will have a one (1) in the first column. This is repeated for each succeeding row: the second row is divided by its second element, producing a one in the second column; this row is then multiplied by appropriate constants and subtracted from the lower rows, producing zeroes in the second column. This process is repeated until the following matrix is obtained:

$$\left(\begin{array}{ccccc|c}
1 & \gamma_{12} & \cdots & \cdots & \gamma_{1n} & \beta_1 \\
0 & 1 & \gamma_{23} & \cdots & \gamma_{2n} & \beta_2 \\
\vdots & & \ddots & & \vdots & \vdots \\
0 & 0 & \cdots & 1 & \gamma_{n-1,n} & \beta_{n-1} \\
0 & 0 & \cdots & 0 & 1 & \beta_n
\end{array}\right)$$

When the diagonal coefficients are all unity, the last element of the vector {β} contains the value of xn, i.e.,

$$x_n = \beta_n$$

This can be used in the (n−1)th equation, represented by the second-to-last line, to obtain x_{n−1}, and so on right up to the first line, which will yield the value of x1. A sketch of the complete procedure follows; its back-substitution loop evaluates the general formula restated after the code.
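The following is a minimal sketch of the procedure just described, assuming NumPy; the name `gauss_solve` is illustrative. For clarity it omits pivoting (row interchanges), which any robust implementation needs when a zero or very small pivot is encountered.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve [A]{x} = {b} by Gaussian elimination with back substitution.

    Forward elimination normalizes each pivot row so its diagonal entry
    is one, then subtracts multiples of it from the rows below, yielding
    the unit upper-triangular system (gamma, beta) described in the notes.
    """
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = b.size
    for k in range(n):
        pivot = A[k, k]
        if np.isclose(pivot, 0.0):
            raise ZeroDivisionError("zero pivot: row interchange required")
        A[k, k:] /= pivot                  # scale row k so A[k, k] == 1
        b[k] /= pivot
        for i in range(k + 1, n):          # eliminate column k below the pivot
            factor = A[i, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution: x_i = beta_i - sum over j > i of gamma_ij * x_j
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = b[i] - A[i, i + 1:] @ x[i + 1:]
    return x

A = [[2.0, 1.0, -1.0],
     [1.0, 3.0,  2.0],
     [1.0, 0.0,  1.0]]
b = [1.0, 12.0, 3.0]
print(gauss_solve(A, b))         # agrees with np.linalg.solve(A, b)
```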
In symbols, once the unit upper-triangular form is reached, back substitution evaluates

$$x_i = \beta_i - \sum_{j=i+1}^{n} \gamma_{ij}\, x_j$$

Gauss-Jordan Elimination

A simple modification to the Gauss elimination method allows us to obtain the inverse of the matrix [A] as well as the solution vector {x}. Consider representing the set of linear equations as

$$\left(\begin{array}{cccc|c|cccc}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 & 1 & 0 & \cdots & 0 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 & 0 & 1 & \cdots & 0 \\
\vdots & & & \vdots & \vdots & \vdots & & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_n & 0 & 0 & \cdots & 1
\end{array}\right)$$

Now the unit matrix [I] is included in the augmented matrix. The procedure is carried out as before, except that the row operations are used to produce zeros in the columns both below and above each diagonal element, reducing the left block to the identity; the same row operations are conducted on the appended unit matrix as well. At the end of the procedure we have both solved the system of equations (the column that held {b} now holds {x}) and found the inverse of the original matrix (the block that held [I] now holds [A]⁻¹).

Example 5.1

Example 5.2

The Determinant of a Square Matrix

A square matrix of order n (an n × n matrix), i.e.,

$$[A] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

possesses a uniquely defined scalar that is designated as the determinant of the matrix, or merely the determinant:

$$\det[A] = |A|$$

Observe that only square matrices possess determinants.

Vertical lines, not brackets, designate a determinant, and while det[A] is a number and has no elements, it is customary to represent it as an array of the elements of the matrix:

$$\det[A] = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

A general procedure for finding the value of a determinant is sometimes called "expansion by minors." We will discuss this method after going over some ground rules for operating with determinants; as a preview, a recursive sketch of the expansion appears below.
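Here is a minimal recursive sketch of expansion by minors, assuming NumPy (used only for array slicing); `det_by_minors` is an illustrative name, not part of the notes. The factorial growth of this recursion is exactly why practical codes evaluate determinants by elimination instead.

```python
import numpy as np

def det_by_minors(A):
    """Determinant by expansion in minors along the first row.

    det(A) = sum over j of (-1)^(1+j) * a_1j * M_1j, where the minor
    M_1j is the determinant of [A] with row 1 and column j deleted.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(A[1:, :], j, axis=1)   # drop row 1 and column j
        # (-1)**j here equals (-1)**(1 + j) in the 1-based indexing above
        total += (-1) ** j * A[0, j] * det_by_minors(minor)
    return total

A = [[1.0, 2.0],
     [3.0, 4.0]]
print(det_by_minors(A))    # -2.0, the familiar ad - bc
print(np.linalg.det(A))    # library value, for comparison
```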