Part 1: Linear Algebra 1 Chapter 1


Chapter 1

• Vectors in $\mathbb{R}^n$. Addition and scalar multiplication. Inner product, norm (length or magnitude) of a vector, angle between two vectors:
$$\cos\theta = \frac{u \cdot v}{\|u\|\,\|v\|},$$
orthogonality and the Pythagorean theorem.

• Cauchy–Schwarz inequality:
$$|u \cdot v| \le \|u\|\,\|v\|.$$
The textbook gives one proof on page 17 (1.14). Here is another proof.

1. Step 1: If $v = 0$, the inequality is true (the two sides are equal).
2. Step 2: Introducing
$$w = u - \frac{u \cdot v}{\|v\|^2}\, v,$$
it can be proved that $w$ is orthogonal to $v$, and
$$u = w + \frac{u \cdot v}{\|v\|^2}\, v.$$
The right-hand side is a sum of two orthogonal vectors, so by the Pythagorean theorem
$$\|u\|^2 = \|w\|^2 + \frac{|u \cdot v|^2}{\|v\|^2} \ge \frac{|u \cdot v|^2}{\|v\|^2}.$$
3. Step 3: Multiplying the inequality by $\|v\|^2$ and taking square roots gives the Cauchy–Schwarz inequality
$$\|u\|\,\|v\| \ge |u \cdot v|.$$

• Cross product (vector product) $u \times v$, parallel vectors ($n = 2, 3$).

Chapter 2

• Matrix, row and column vectors, transpose, addition, scalar multiplication.

• Matrix multiplication: for $A \in \mathbb{R}^{l \times m}$ and $B \in \mathbb{R}^{m \times n}$, the resulting matrix is $C = AB \in \mathbb{R}^{l \times n}$. Arithmetic rule of matrix multiplication: $c_{ij} = \sum_{k} a_{ik} b_{kj}$.

• Square matrix $A \in \mathbb{R}^{n \times n}$, identity matrix, inverse of a matrix: $AA^{-1} = I$.

• Symmetric matrix $a_{ij} = a_{ji}$, anti-symmetric (skew-symmetric) matrix $a_{ij} = -a_{ji}$, orthogonal matrix $AA^T = I$, normal matrix $AA^T = A^T A$. Symmetric, skew-symmetric, and orthogonal matrices are normal, but other matrices can also be normal.

• For complex matrices: the conjugate transpose $A^H$, with $a_{ij} \to a^*_{ji}$. Hermitian: $A^H = A$; skew-Hermitian: $A^H = -A$; unitary: $A^H = A^{-1}$, i.e. $A^H A = I$; normal: $AA^H = A^H A$.

• Other special matrices: diagonal, upper and lower triangular matrices.

• A vector is a special matrix: $v \in \mathbb{R}^n \to v \in \mathbb{R}^{n \times 1}$. Matrix–vector multiplication: for $A \in \mathbb{R}^{m \times n}$ and $v \in \mathbb{R}^n$, $Av = w \in \mathbb{R}^m$.

• Example: $A + A^H$ is Hermitian and $A - A^H$ is skew-Hermitian, so any $A$ can be written as a sum $B + C$, where $B = \frac{1}{2}(A + A^H)$ is Hermitian and $C = \frac{1}{2}(A - A^H)$ is skew-Hermitian.

Chapter 3

• Homogeneous and non-homogeneous equations.

• Degenerate equations, consistency and uniqueness.

• Echelon form, pivot variables and free variables.

• Elementary operations:

1. E1 – Interchange two equations.
2. E2 – Multiply an equation by a nonzero constant.
3. E3 – Add a multiple of one equation to another.

• Gauss elimination. Example:
$$\begin{aligned} x_1 - 3x_2 - 2x_3 &= 6 \\ 2x_1 - 4x_2 - 3x_3 &= 8 \\ -3x_1 + 6x_2 + 8x_3 &= -5 \end{aligned} \tag{1}$$

• An echelon matrix satisfies the following properties:

(1) All zero rows are at the bottom of the matrix.
(2) Each leading nonzero entry in a row is to the right of the leading nonzero entry in the preceding row.

The row canonical form of the matrix has two additional properties:

(3) Each pivot is equal to 1.
(4) Each pivot is the only nonzero entry in its column.

• Row equivalence: two matrices $A$ and $B$ are row equivalent if one can be transformed into the other by a sequence of elementary operations.

• The rank is the number of pivots in the echelon form.

• Gauss elimination in matrix form. Example:
$$[A, b] = \begin{pmatrix} 1 & 1 & -2 & 4 & 5 \\ 2 & 2 & -3 & 1 & 3 \\ 3 & 3 & -4 & -2 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 1 & 0 & -10 & -9 \\ 0 & 0 & 1 & -7 & -7 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$

• Existence: for $M = [A, b]$, a solution exists if and only if $\operatorname{rank}(M) = \operatorname{rank}(A)$.

• Uniqueness: the solution is unique if and only if $\operatorname{rank}(A) = n$, where $n$ is the number of unknowns.
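The existence and uniqueness tests above are just two rank computations. Here is a minimal sketch, assuming NumPy is available, applied to the augmented-matrix example just given:

```python
import numpy as np

# Coefficient matrix and right-hand side from the example above.
A = np.array([[1, 1, -2, 4],
              [2, 2, -3, 1],
              [3, 3, -4, -2]], dtype=float)
b = np.array([5, 3, 1], dtype=float)

M = np.column_stack([A, b])                  # augmented matrix M = [A, b]

rank_A = np.linalg.matrix_rank(A)            # number of pivots of A
rank_M = np.linalg.matrix_rank(M)            # number of pivots of [A, b]

print("existence:", rank_A == rank_M)        # True: rank 2 == rank 2
print("uniqueness:", rank_A == A.shape[1])   # False: rank 2 < 4 unknowns
```

Since rank(A) = rank(M) = 2 but there are four unknowns, the system is consistent with infinitely many solutions, matching the two free columns visible in the echelon form.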
• Square systems of linear equations.

• Inverse of the coefficient matrix $A$: a square system of equations $Ax = b$ has a unique solution iff the coefficient matrix $A$ is invertible. In this case, $A^{-1}b$ is the unique solution of the equation $Ax = b$.

• Proof:

1. Sufficiency: if $A^{-1}$ exists, let $x = A^{-1}b$; then $A(A^{-1}b) = (AA^{-1})b = Ib = b$, so $x$ satisfies the equation.
2. Necessity: if $v$ is a solution, so that $Av = b$, then $v = Iv = (A^{-1}A)v = A^{-1}(Av) = A^{-1}b$.

• The concept of linear combination. Interpretation of $Ax = b$: $b$ is a linear combination of the column vectors of the matrix $A$.

• Homogeneous systems of equations. A homogeneous equation always has the trivial solution. When does a homogeneous equation have a non-trivial solution?

• If $u$ and $v$ are solutions of the homogeneous equation, any linear combination of them is also a solution.

• Dimension and basis of the solution space of a homogeneous equation.

• Non-zero solutions of a homogeneous equation. Example:
$$\begin{aligned} x_1 + 2x_2 - 3x_3 + 2x_4 - 4x_5 &= 0 \\ x_3 - 3x_4 + 2x_5 &= 0 \\ 2x_3 - 6x_4 + 4x_5 &= 0 \end{aligned}$$
reduces to
$$\begin{aligned} x_1 + 2x_2 - 3x_3 + 2x_4 - 4x_5 &= 0 \\ x_3 - 3x_4 + 2x_5 &= 0 \end{aligned}$$
The free variables are $x_2, x_4, x_5$. Setting one of them equal to one and the others to zero gives the three basis vectors.

• The corresponding non-homogeneous equation: if $w$ is a particular solution of the non-homogeneous equation, then $u + w$ is also a solution, where $u$ is any solution of the corresponding homogeneous equation.

• Elementary matrices: every elementary operation can be represented by multiplication by an elementary matrix. A non-singular matrix is a product of elementary matrices.

• Examples of $3 \times 3$ elementary matrices:

1. Exchange two rows (second and third):
$$E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
2. Multiply a row (the second) by a constant $k$:
$$E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
3. Add one row to another (add the second to the third):
$$E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$

• Finding the inverse of a square matrix $A$: if
$$E_1 E_2 \cdots E_n A = I, \qquad \text{i.e.} \qquad (E_1 E_2 \cdots E_n I) A = I,$$
then $E_1 E_2 \cdots E_n I$ is the inverse of $A$. Example:
$$A = \begin{pmatrix} 1 & 0 & 2 \\ 2 & -1 & 3 \\ 4 & 1 & 8 \end{pmatrix}$$

• Counting the number of operations in the Gauss elimination method.

• Lower and upper triangular matrices. The inverse of a lower triangular matrix is another lower triangular matrix; the inverse of an upper triangular matrix is another upper triangular matrix.

• Gauss elimination can be expressed as multiplication by a series of "atomic" lower triangular matrices. Inverting this product gives the LU decomposition.

• Doolittle algorithm (a code sketch follows below):

1. Define $A^{(n)} = L^{(n)} A^{(n-1)}$, with $A^{(0)} = A$.
2. Let
$$L^{(n)} = \begin{pmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & & \\ & & -l_{n+1,n} & \ddots & \\ & & -l_{N,n} & & 1 \end{pmatrix}, \qquad l_{i,n} = \frac{a^{(n-1)}_{i,n}}{a^{(n-1)}_{n,n}}, \quad i = n+1, \ldots, N,$$
i.e. the identity matrix with the entries $-l_{i,n}$ placed in column $n$ below the diagonal.
3. Then
$$U = A^{(N-1)} = L^{(N-1)} \cdots L^{(1)} A.$$
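A minimal Python sketch of the Doolittle steps above, assuming NumPy is available. It does no pivoting, so every pivot $a^{(n-1)}_{n,n}$ is assumed nonzero:

```python
import numpy as np

def doolittle_lu(A):
    """LU factorization without pivoting: L unit lower triangular, U upper.

    Each pass n applies the atomic matrix L^(n) to zero out column n below
    the diagonal; the multipliers l_{i,n} are collected directly into L,
    since inverting L^(n) simply flips the sign of its subdiagonal entries.
    """
    U = np.array(A, dtype=float)
    N = U.shape[0]
    L = np.eye(N)
    for n in range(N - 1):
        for i in range(n + 1, N):
            L[i, n] = U[i, n] / U[n, n]       # multiplier l_{i,n}
            U[i, n:] -= L[i, n] * U[n, n:]    # row i of A^(n): subtract l_{i,n} * row n
    return L, U

# The 3x3 example matrix used above for the inverse computation:
A = [[1, 0, 2], [2, -1, 3], [4, 1, 8]]
L, U = doolittle_lu(A)
print(np.allclose(L @ U, A))                  # True: L U reconstructs A
```

For this matrix the factors come out as $L$ with subdiagonal entries $2, 4, -1$ and $U = \operatorname{diag}$-dominant upper triangle $\begin{pmatrix}1 & 0 & 2\\ 0 & -1 & -1\\ 0 & 0 & -1\end{pmatrix}$, which multiply back to $A$.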
Chapter 4

• Vector space, definition: $V$ is a vector space if it satisfies two closure conditions:

1. If $u, v \in V$, then $u + v \in V$.
2. If $u \in V$, then $ku \in V$, where $k$ is a scalar.

Axioms of a vector space:

1. $(u + v) + w = u + (v + w)$.
2. There exists $0$ such that $u + 0 = u$; for each $u$ there exists $-u$ such that $u + (-u) = 0$.
3. $1u = u$.

• Examples of vector spaces: vectors, matrices, polynomials, real functions.

• Subspace of a space: a subset that is itself a vector space.

1. The polynomials of degree at most $m$ form a subspace of the polynomials of degree at most $n$ if $m < n$.
2. Polynomial functions form a subspace of the space of real functions.
3. Vector subspaces.

• Linear combinations, spanning sets. If $V$ is a vector space, a set of vectors $u_1, u_2, \ldots, u_n$ forms a spanning set of $V$ if ALL vectors in $V$ can be written as a linear combination of the set.

• Linear dependence and independence. Given a set of vectors $u_1, u_2, \ldots, u_m$, if a vector $v$ can be written as a linear combination of the set, then $v$ is linearly dependent on the set; otherwise $v$ is linearly independent of the set.

• Basis and dimension. A minimal set of vectors that spans a space $V$ is called a basis of the space. The minimal number of vectors needed to span $V$ is called the dimension of the space.

• Row space and rank of a matrix. Elementary operations do not change the row space, so row-equivalent matrices have the same row space. In the echelon and row canonical forms the nonzero row vectors are linearly independent; they form a basis of the row space. The number of nonzero rows in these forms is called the rank of the matrix.

• An $n \times n$ square matrix with rank $n$ is non-singular.

• Example: find a basis for $W = \operatorname{span}(u_1, u_2, \ldots, u_r)$.

1. Form a matrix using the vectors as rows.
2. Reduce it to echelon form.
3. Output the nonzero rows as the basis.

• The column rank is the same as the row rank. The column vectors form the column space. Interpretation of $Ax = b$: $x_1 a_1 + x_2 a_2 + \cdots + x_n a_n = b$ is a linear combination of the column vectors of $A$, so $b$ must be in the column space, or there is no solution.

• Example: determine whether a given vector $w$ is a linear combination of $n$ vectors $U = (u_1, u_2, \ldots, u_n)$, and find the coefficients of the linear combination. This amounts to solving
$$x_1 u_1 + x_2 u_2 + \cdots + x_n u_n = w, \qquad \text{or} \qquad Ux = w,$$
where $U$ is the matrix with $u_1, u_2, \ldots, u_n$ as column vectors.

• A polynomial as a vector: a polynomial is a vector whose components are its coefficients.

• Homogeneous equation $Ax = 0$: the solutions of the equation form a vector space, called the null space of $A$.
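The basis-finding recipe above and the null space of a matrix can both be computed symbolically. A small sketch using SymPy; the vectors u1, u2, u3 are made up for illustration:

```python
from sympy import Matrix

# Step 1: form a matrix using the vectors as rows (illustrative vectors).
u1, u2, u3 = [1, 2, 1, 3], [2, 4, 3, 7], [1, 2, 2, 4]
A = Matrix([u1, u2, u3])

# Steps 2-3: reduce to row canonical form; the nonzero rows are a basis of W.
R, pivot_cols = A.rref()
basis = [R.row(i) for i in range(len(pivot_cols))]
print(basis)        # [Matrix([[1, 2, 0, 2]]), Matrix([[0, 0, 1, 1]])]
print(len(basis))   # 2 = rank of A = dimension of W

# The solutions of Ax = 0 form the null space; SymPy returns a basis for it.
print(A.nullspace())
```

Here the three row vectors span only a two-dimensional subspace of $\mathbb{R}^4$, so the null space of $A$ (viewed as a map on $\mathbb{R}^4$) has dimension $4 - 2 = 2$, consistent with the rank statements above.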